content
stringlengths
86
994k
meta
stringlengths
288
619
Acceleration and Ode's March 6th 2008, 05:07 PM Acceleration and Ode's ok this prob is a little trickier (x double dot) is given. in gen as a func of x, x dot and t, and here by expression -dot of x + 2e^(-t)+1 Find solution for x(t) which satisfies x(0)= dot of x(0)=0 I can do init conditions ok on this I think, I'd just like to see how the process of the equation works on this, its probz done in a similar way to the previous one... March 6th 2008, 08:21 PM mr fantastic ok this prob is a little trickier (x double dot) is given. in gen as a func of x, x dot and t, and here by expression -dot of x + 2e^(-t)+1 Find solution for x(t) which satisfies x(0)= dot of x(0)=0 I can do init conditions ok on this I think, I'd just like to see how the process of the equation works on this, its probz done in a similar way to the previous one... Substitute $v = \frac{dx}{dt}$ and solve the first order DE $\frac{dv}{dt} = -v + 2 e^{-t} + 1 \Rightarrow \frac{dv}{dt} + v = 2 e^{-t} + 1 \,$ subject to the initial condition v(0) = 0. I recommend the integrating factor technique. Then solve the first order DE $\frac{dx}{dt} = v(t)\,$ subject to the initial condition x(0) = 0.
{"url":"http://mathhelpforum.com/calculus/30215-acceleration-odes-print.html","timestamp":"2014-04-21T16:07:53Z","content_type":null,"content_length":"5407","record_id":"<urn:uuid:361a408b-d645-411c-8f26-4b83539aceda>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
Trigonometry - Terminology To better understand certain problems involving rockets and propulsion it is necessary to use some mathematical ideas from trigonometry, the study of triangles. Most people are introduced to trigonometry in high school. There are many complex parts to trigonometry, but on this page we are concerned chiefly with definitions and terminology. We start with a general triangle. A triangle is a closed shape having three sides and three internal angles. The sum of the three angles of any triangle is 180 degrees. If we label the angles of a triangle c, d, and e, then: c + d + e = 180 degrees There are two ways to measure the angles inside a triangle. One way is to measure the angle in degrees, where 360 degrees equals a complete circle. The other way is measure the angle in radians, where 2 pi radians equals a complete circle. Therefore; 360 (degrees) = 2 * pi (radians) 1 degree = .01745 radians 1 radian = 57.2957 degrees A right triangle is a special case of the general triangle with one of its angles equal to 90 degrees. A 90 degree angle is called a right angle and that is where the right triangle gets its name. The right triangle has some special properties which are very useful for solving problems. The sum of the three angles of a right triangle is equal to 180 degrees and one of the angles equals 90 degrees. Then the sum of the other two angles is also 90 degrees. For a right triangle: c + d = 90 degrees = pi/2 radians The important factor here is that if we know (or measure) one angle of a right triangle,we automatically know the value of the other angle. If we know the value of d, then c = 90 - d To describe a triangle in general, we need to know the value of two angles; for a right triangle we only need to know (or measure) one angle. Another important piece of information relates the size of the sides of a right triangle. We call the side of the right triangle opposite from the right angle the hypotenuse. It is the longest side of the three sides of the right triangle. The word "hypotenuse" comes from two Greek words meaning "to stretch", since this is the longest side. We will label the hypotenuse with the symbol h and we will label the other two sides a and b. Regardless of the size of the hypotenuse, the ratio of the size of side a to the hypotenuse h depends only on the size of the angle between the side and the hypotenuse. The value of the ratio is a function of the angle and is given the name cosine of the angle. On the figure, cos(c) = a / h Because of the relations of the angles of a right triangle, we can define another function of the angle, called the sine of the angle, which relates side b and the hypotenuse: sin(c) = b / h The key point here is that if we measure one angle, we know the value of all three angles in a right triangle. And if we additionally measure one side, we can use these trigonometric functions to determine the length of all three sides. We can determine 5 pieces of information (2 angles and 3 sides) by making only two measurements. An additional relation exists between the sides of a right triangle. If we draw a square on the hypotenuse, and a square on each of the two sides, the area of the square on the hypotenuse is equal to the sum of the squares on the sides. This is called the Pythagorean Theorem and has been know since ancient times: h^2 = a^2 + b^2 The Pythagorean Theorem can be used with the trigonometric functions to determine the size of all the sides of a right triangle. Activities: Guided Tours • Trigonometry: • Flight Equations: • Maximum Altitude: Related Sites: Rocket Index Rocket Home Exploration Systems Mission Directorate Home
{"url":"http://microgravity.grc.nasa.gov/education/rocket/trig.html","timestamp":"2014-04-19T10:01:36Z","content_type":null,"content_length":"13023","record_id":"<urn:uuid:17999c4d-3bbf-4d0e-b4a9-ae2a8da2b14f>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
Burkill integral From Encyclopedia of Mathematics A concept which was introduced by J.C. Burkill to determine surface areas. The Burkill integral is introduced in its modern form for the integration of a non-additive function The concept of the Burkill integral can be generalized to include the case of a set function defined on some class of subsets of an abstract measure space. This class must meet a number of requirements; in particular, each set of the class must permit a subdivision into sets also of this class that have a measure as small as one pleases. The Burkill integral can then be defined for any set in the class in analogy with the Kolmogorov integral, which is also known as the Burkill–Kolmogorov integral. Any function that is Burkill-integrable is also Kolmogorov-integrable after a suitable ordering of the subdivisions. The converse statement is true only if certain additional conditions are satisfied. The Burkill integral is used in constructing the Denjoy integral in different spaces. The name of Burkill integral is also given to a number of generalizations of the Perron integral ( [1a] J.C. Burkill, "Functions of intervals" Proc. London Math. Soc. (2) , 22 (1924) pp. 275–310 [1b] J.C. Burkill, "The expression of area as an integral" Proc. London Math. Soc. (2) , 22 (1924) pp. 311–336 [2] S. Saks, "Theory of the integral" , Hafner (1952) (Translated from French) [3] P.I. Romanovskii, "l'Intégrale de Denjoy en espaces abstractes" Mat. Sb. , 9 (51) : 1 (1941) pp. 67–120 [4] J.C. Burkill, "Integrals and trigonometric series" Proc. London Math. Soc. (3) , 1 (1951) pp. 46–57 How to Cite This Entry: Burkill integral. V.A. Skvortsov (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Burkill_integral&oldid=17097 This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
{"url":"http://www.encyclopediaofmath.org/index.php/Burkill_integral","timestamp":"2014-04-20T03:11:34Z","content_type":null,"content_length":"20199","record_id":"<urn:uuid:ed8d6182-b7dc-48c9-8e36-6fb41487afc7>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Self-studying calculus? Replies: 5 Last Post: Feb 27, 2010 9:59 AM Messages: [ Previous | Next ] Re: Self-studying calculus? Posted: Feb 26, 2010 9:21 PM > May someone advice me for a good book about calculus ? Schaum's Outlines are helpful to a lot of people, and relatively Date Subject Author 2/25/10 Raziel 2/25/10 Re: Self-studying calculus? sadoc 2/26/10 Re: Self-studying calculus? Larry Hammick 2/26/10 Re: Self-studying calculus? J. Clarke 2/27/10 Re: Self-studying calculus? Raziel 2/27/10 Re: Self-studying calculus? Frederick Williams
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2046967&messageID=6995155","timestamp":"2014-04-17T21:29:46Z","content_type":null,"content_length":"22097","record_id":"<urn:uuid:d3aa36ee-c474-41f0-ac96-ea7b813da9c9>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Change Numbers Into and Out of Scientific Notation Edit Article Converting Numbers Into Scientific NotationConverting Numbers from Scientific Notation Edited by Trigtchr, Teresa, Eric, General Jackson and 14 others Scientific notation is commonly used in chemistry and physics to represent very large or very small numbers. Changing numbers into and out of scientific notation isn't as hard as it looks. Just follow these steps to find out how to do it. Method 1 of 2: Converting Numbers Into Scientific Notation 1. 1 Start with a very small or very large number. You'll need to start with a very small or a very large number if you want to successfully convert it into scientific notation. For example, 10,090,250,000,000 is very large; 0.00004205 is very small. 2. 2 Cross out the original number's decimal point. This is the first step to beginning to convert the number into scientific notation. If you're working with the number 0.00004205, just write an "x" over the decimal point. 3. 3 Add a new decimal point to the number so that there's only one non-zero digit in front of it. In this case, the first non-zero number is 4, so place the decimal point after the 4 so that the new number reads 000004.205. □ This works for large numbers, too. For example, 10,090,250,000,000 would become 1.0090250000000. 4. 4 Rewrite this number to drop any insignificant digits. Insignificant digits are any zeros that are not in between other, non-zero digits. □ For example, in the number 1.0090250000000, the zeroes at the end are insignificant, but the zeros between the 1 and then 9, and between the 9 and the 2, are significant. Rewrite this number as 1.009025. □ In the number 000004.205, the leading zeros are insignificant. Rewrite this number as 4.205. 5. 5 Write "x 10" after the rewritten number. Just write 4.205 x 10 for now. 6. 6 Count how many times you moved the original decimal point. In the case of 0.00004205 to 4.205, you moved the decimal point over 5 times. In the case of 10,090,250,000,000 to 1.0090250000000, you moved the decimal point 13 times. 7. 7 Write that number as the exponent over the number 10. For 1.0090250000000, write x 10^13. For 4.25, write x 10^5. 8. 8 Decide if the exponent should be negative or positive. If your original number was very large, the exponent should be positive. If your original number was very small, the exponent should be □ For example: the very large number 10,090,250,000,000 becomes 1.009025 x 10 ^13 while the very small number 0.00004205 becomes 4.205 x 10^-5. 9. 9 Round your number as much as necessary. This depends on how certain you need to be in your answer. For example, 1.009025 x 10^13 might be better off as 1.009 x 10^13 or even as 1.01 x 10^13, depending on how accurate you need to be. Method 2 of 2: Converting Numbers from Scientific Notation 1. 1 Decide if you will be moving the decimal point to the left or to the right. If the exponent on the "x 10" part of the number is positive, then you will be moving the decimal places to the right; if the exponent is negative, you will be moving the decimal places to the left. 2. 2 Write down how many places you would need to move the decimal. In the case of the number 5.2081 x 10^12, you will be moving the decimal point over five spaces to the right. If the exponent is a -7, you move left seven places; if the exponent is a 5, move right five places. 3. 3 Move the decimal point over, adding zeroes for every empty space. You may have to add them in front of or behind the number, depending on whether you are moving left or right. If you're moving the decimal point over 12 spaces to the right from the number 5.2081, then the new number becomes 5208100000000. 4. 4 Write the new decimal point after you've moved over the correct amount of spaces. 5. 5 Add commas to any number over 999. Go through the digits, from right to left, putting a comma in front of every group of three digits. For example, 5208100000000 becomes 5,208,100,000,000. Article Info Thanks to all authors for creating a page that has been read 29,412 times. Was this article accurate?
{"url":"http://www.wikihow.com/Change-Numbers-Into-and-Out-of-Scientific-Notation","timestamp":"2014-04-19T18:08:29Z","content_type":null,"content_length":"76050","record_id":"<urn:uuid:d3dabfb9-05e4-4483-972e-6f128cdccf97>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
Sponsored by the SIAM Activity Group on Optimization (SIAG/OP) In honor of... Leonid Khachiyan (1952-2005) passed away Friday April 29 at the age of 52. Khachiyan was best known for his 1979 use of the ellipsoid algorithm, originally developed for convex programming, to give the first polynomial-time algorithm to solve linear programming problems. While the simplex algorithm solved linear programs well in practice, Khachiyan gave the first formal proof of an efficient algorithm in the worst case. Khachiyan's analysis led to broad applications of the ellipsoid algorithm as a method of obtaining complexity results for discrete optimization problems. Khachiyan and co-authors also developed polynomial-time algorithms for convex quadratic programming, studied the complexity of polynomial programming over the reals and the integers, and devised the method of inscribed ellipsoids for general convex programming. A brief remembrance for Leonid Khachiyan will be held at the start of the Award and Presentation session in the Auditorium on Tuesday at 1:45PM, May 17, 2005. About the Conference Stockhom City Conference Centre/Norra Latin Barnhusgatan 7B SE-111 23 Stockholm, Sweden See item "A" on the attached map (Norra Latin). http://www.siam.org/meetings/op05/hotelmap.pdf The Eighth SIAM Conference on Optimization will feature the latest research in theory, algorithms, and applications in optimization problems. In particular, it will emphasize large-scale problems and will feature important applications in networks, manufacturing, medicine, biology, finance, aeronautics, control, operations research, and other areas of science and engineering. The conference brings together mathematicians, operations researchers, computer scientists, engineers, and software developers; thus it provides an excellent opportunity for sharing ideas and problems among specialists and users of optimization in academia, government, and industry. Conference Themes The themes of the conference include, but are not limited to: • Large-scale nonlinear programming • Large-scale linear programming • Simulation-based optimization • Optimization in medicine and biology • Stochastic programming • Optimization in finance • Semidefinite programming • Computational optimization frameworks Organizing Committee (Co-chair), Anders Forsgren , Royal Institute of Technology (KTH) (Co-chair), Henry Wolkowicz , University of Waterloo Natalia Alexandrov , NASA Langley Research Center Kurt Anstreicher , The University of Iowa Robert M. Freund , MIT Florian Jarre , Heinrich Heine Universität Düsseldorf Franz Rendl , University of Klagenfurt Trond Steihaug , University of Bergen Michael J. Todd , Cornell University Philippe Toint , The University of Namur (FUNDP) Luis N. Vicente , Universidade de Coimbra Ya-xiang Yuan , Chinese Academy of Sciences SIAM would like to thank KTH, Royal Institute of Technology, Vetenskapsrådet, and the U. S. National Science Foundation for additional support for the conference.
{"url":"http://www.siam.org/meetings/op05/index.htm","timestamp":"2014-04-16T15:59:06Z","content_type":null,"content_length":"8459","record_id":"<urn:uuid:b8fb1ea0-82be-4fee-be93-8a83d1c2dfb2>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Transactions of the American Mathematical Society ISSN 1088-6850(online) ISSN 0002-9947(print) Bifurcation of critical periods for plane vector fields Authors: Carmen Chicone and Marc Jacobs Journal: Trans. Amer. Math. Soc. 312 (1989), 433-486 MSC: Primary 58F14; Secondary 34C25, 58F05, 58F30 MathSciNet review: 930075 Full-text PDF Free Access Abstract | References | Similar Articles | Additional Information Abstract: A bifurcation problem in families of plane analytic vector fields which have a nondegenerate center at the origin for all values of a parameter Generally, if the function Aside from their intrinsic interest, monotonicity properties of the period function are important in the question of existence and uniqueness of autonomous boundary value problems, in the study of subharmonic bifurcation of periodic oscillations, and in the analysis of the problem of linearization. In this regard it is shown that a Hamiltonian system with a polynomial potential of degree larger than two cannot be linearized. However, while these topics are the subject of a large literature, the spirit of this paper is more akin to that of N. Bautin's treatment of the multiple Hopf bifurcation for quadratic systems and the work on various forms of the weakened Hilbert's 16th problem first posed by V. Arnold. • [1] A. A. Andronov, Theory of bifurcations of dynamical systems on a plane, Wiley, New York, 1973. • [2] V. I. Arnol′d, Ordinary differential equations, MIT Press, Cambridge, Mass., 1978. Translated from the Russian and edited by Richard A. Silverman. MR 0508209 (58 #22707) • [3] V. I. Arnol′d, Geometrical methods in the theory of ordinary differential equations, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Science], vol. 250, Springer-Verlag, New York, 1983. Translated from the Russian by Joseph Szücs; Translation edited by Mark Levi. MR 695786 (84d:58023) • [4] Alberto Baider and Richard Churchill, Unique normal forms for planar vector fields, Math. Z. 199 (1988), no. 3, 303–310. MR 961812 (90k:58146), http://dx.doi.org/10.1007/BF01159780 • [5] N. N. Bautin, On the number of limit cycles which appear with the variation of coefficients from an equilibrium position of focus or center type, American Math. Soc. Translation 1954 (1954), no. 100, 19. MR 0059426 (15,527h) • [6] Piotr Biler, On the stationary solutions of Burgers’ equation, Colloq. Math. 52 (1987), no. 2, 305–312. MR 893547 (88h:35098) • [7] T. R. Blows and N. G. Lloyd, The number of limit cycles of certain polynomial differential equations, Proc. Roy. Soc. Edinburgh Sect. A 98 (1984), no. 3-4, 215–239. MR 768345 (86g:34030), • [8] N. Bourbaki, Commutative algebra, Addison-Wesley, Reading, Mass., 1969. • [9] Egbert Brieskorn and Horst Knörrer, Plane algebraic curves, Birkhäuser Verlag, Basel, 1986. Translated from the German by John Stillwell. MR 886476 (88a:14001) • [10] B. Buchberger, Gröbner bases: An algorithmic method in polynomial ideal theory, Multidimensional Systems Theory (N. K. Bose, ed.), Reidel, Boston, Mass., 1985. • [11] Carmen Chicone, The monotonicity of the period function for planar Hamiltonian vector fields, J. Differential Equations 69 (1987), no. 3, 310–321. MR 903390 (88i:58050), http://dx.doi.org/ • [12] -, Geometric methods for nonlinear two point boundary value problems, J. Differential Equations (to appear). • [13] Carmen Chicone and Freddy Dumortier, A quadratic system with a nonmonotonic period function, Proc. Amer. Math. Soc. 102 (1988), no. 3, 706–710. MR 929007 (89d:58106), http://dx.doi.org/ • [14] Shui Nee Chow and Jack K. Hale, Methods of bifurcation theory, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Science], vol. 251, Springer-Verlag, New York, 1982. MR 660633 (84e:58019) • [15] S.-N. Chow and J. A. Sanders, On the number of critical points of the period, J. Differential Equations 64 (1986), no. 1, 51–66. MR 849664 (87j:34075), http://dx.doi.org/10.1016/0022-0396 • [16] Shui-Nee Chow and Duo Wang, On the monotonicity of the period function of some second order equations, Časopis Pěst. Mat. 111 (1986), no. 1, 14–25, 89 (English, with Russian and Czech summaries). MR 833153 (87e:34069) • [17] R. Conti, About centers of quadratic planar systems, Universita Degli Studi di Firenze, 1986. • [18] -, About centers of planar cubic systems, Universita Degli Studi di Firenze, 1986. • [19] W. A. Coppel, A survey of quadratic systems, J. Differential Equations 2 (1966), 293–304. MR 0196182 (33 #4374) • [20] J.-P. Françoise, Cycles limites études locale, Report /83/M/13, Inst. Hautes Études Sci., 1983. • [21] J.-P. Françoise and C. C. Pugh, Keeping track of limit cycles, J. Differential Equations 65 (1986), no. 2, 139–157. MR 861513 (88a:58162), http://dx.doi.org/10.1016/0022-0396(86)90030-6 • [22] William Fulton, Algebraic curves. An introduction to algebraic geometry, W. A. Benjamin, Inc., New York-Amsterdam, 1969. Notes written with the collaboration of Richard Weiss; Mathematics Lecture Notes Series. MR 0313252 (47 #1807) • [23] J. Guckenheimer, R. Rand, and D. Schlomink, Degenerate homoclinic cycles in perturbation of quadratic Hamiltonian systems, Preprint, 1987. • [24] M. Hervé, Several complex variables, Oxford Univ. Press, 1963. • [25] Peter Henrici, Applied and computational complex analysis, Wiley-Interscience [John Wiley & Sons], New York, 1974. Volume 1: Power series—integration—conformal mapping—location of zeros; Pure and Applied Mathematics. MR 0372162 (51 #8378) • [26] Donald E. Knuth, The art of computer programming, 2nd ed., Addison-Wesley Publishing Co., Reading, Mass.-London-Amsterdam, 1975. Volume 1: Fundamental algorithms; Addison-Wesley Series in Computer Science and Information Processing. MR 0378456 (51 #14624) • [27] W. S. Loud, Behavior of the period of solutions of certain plane autonomous systems near centers, Contributions to Differential Equations 3 (1964), 21–36. MR 0159985 (28 #3199) • [28] V. A. Lunkevich and K. S. Sibirskiĭ, Integrals of a general quadratic differential system in cases of the center, Differentsial′nye Uravneniya 18 (1982), no. 5, 786–792, 915 (Russian). MR 661356 (83i:34005) • [29] A. Lyapunov, Problème général de la stabilité du mouvement, Ann. of Math. Studies, No. 17, Princeton Univ. Press, Princeton, N. J., 1949. • [30] Francis J. Murray and Kenneth S. Miller, Existence theorems for ordinary differential equations, New York University Press, New York, 1954. MR 0064934 (16,358b) • [31] L. M. Perko, On the accumulation of limit cycles, Proc. Amer. Math. Soc. 99 (1987), no. 3, 515–526. MR 875391 (88b:34040), http://dx.doi.org/10.1090/S0002-9939-1987-0875391-1 • [32] I. Pleshkan, A new method of investigating the isochronicity of a system of two differential equations, Differential Equations 5 (1969), 796-802. • [33] G. S. Petrov, The number of zeros of complete elliptic integrals, Funktsional. Anal. i Prilozhen. 18 (1984), no. 2, 73–74 (Russian). MR 745710 (85j:33002) • [34] G. S. Petrov, Elliptic integrals and their nonoscillation, Funktsional. Anal. i Prilozhen. 20 (1986), no. 1, 46–49, 96 (Russian). MR 831048 (87f:58031) • [35] Tim Poston and Ian Stewart, Catastrophe theory and its applications, Pitman, London, 1978. With an appendix by D. R. Olsen, S. R. Carter and A. Rockwood; Surveys and Reference Works in Mathematics, No. 2. MR 0501079 (58 #18535) • [36] R. Roussarie, private communication, 1987. • [37] F. Rothe, Periods of oscillation, nondegeneracy and specific heat of Hamiltonian systems in the plane, Proc. Internat. Conf. on Differential Equations and Math. Physics, Birmingham, Alabama, • [38] G. Sansone and R. Conti, Non-linear differential equations, Revised edition. Translated from the Italian by Ainsley H. Diamond. International Series of Monographs in Pure and Applied Mathematics, Vol. 67, A Pergamon Press Book. The Macmillan Co., New York, 1964. MR 0177153 (31 #1417) • [39] K. S. Sibirskiĭ, On the number of limit cycles in the neighborhood of a singular point, Differencial′nye Uravnenija 1 (1965), 53–66 (Russian). MR 0188542 (32 #5980) • [40] Carl Ludwig Siegel and Jürgen K. Moser, Lectures on celestial mechanics, Springer-Verlag, New York, 1971. Translation by Charles I. Kalme; Die Grundlehren der mathematischen Wissenschaften, Band 187. MR 0502448 (58 #19464) • [41] Renate Schaaf, A class of Hamiltonian systems with increasing periods, J. Reine Angew. Math. 363 (1985), 96–109. MR 814016 (87b:58029), http://dx.doi.org/10.1515/crll.1985.363.96 • [42] A. Seidenberg, Elements of the theory of algebraic curves, Addison-Wesley Publishing Co., Reading, Mass.-London-Don Mills, Ont., 1968. MR 0248139 (40 #1393) • [43] J. Smoller and A. Wasserman, Global bifurcation of steady-state solutions, J. Differential Equations 39 (1981), no. 2, 269–290. MR 607786 (82d:58056), http://dx.doi.org/10.1016/0022-0396(81) • [44] J. Sotomayor and R. Paterlini, Quadratic vector fields with finitely many periodic orbits, Geometric dynamics (Rio de Janeiro, 1981) Lecture Notes in Math., vol. 1007, Springer, Berlin, 1983, pp. 753–766. MR 730297 (85b:58107), http://dx.doi.org/10.1007/BFb0061444 • [45] Minoru Urabe, Potential forces which yield periodic motions of a fixed period, J. Math. Mech. 10 (1961), 569–578. MR 0123060 (23 #A391) • [46] Minoru Urabe, The potential force yielding a periodic motion whose period is an arbitrary continuous function of the amplitude of the velocity, Arch. Rational Mech. Anal. 11 (1962), 27–33. MR 0141834 (25 #5231) • [47] A. N. Varchenko, Estimation of the number of zeros of an abelian integral depending on a parameter, and limit cycles, Funktsional. Anal. i Prilozhen. 18 (1984), no. 2, 14–25 (Russian). MR 745696 (85g:32033) • [48] W. Vasconcelos, private communication, 1987. • [49] B. L. van der Waerden, Algebra, Vol. II, Ungar, New York, 1950. • [50] -, Algebra, Vol. II, Ungar, New York, 1970. • [51] Jörg Waldvogel, The period in the Lotka-Volterra system is monotonic, J. Math. Anal. Appl. 114 (1986), no. 1, 178–184. MR 829122 (87j:92034), http://dx.doi.org/10.1016/0022-247X(86)90076-4 • [52] Yan-Qian Ye, et al. Theory of limit cycles, Transl. Math. Monographs, Vol. 66, Amer. Math. Soc., Providence, R.I., 1984. • [53] Oscar Zariski and Pierre Samuel, Commutative algebra. Vol. II, The University Series in Higher Mathematics, D. Van Nostrand Co., Inc., Princeton, N. J.-Toronto-London-New York, 1960. MR 0120249 (22 #11006) Similar Articles Retrieve articles in Transactions of the American Mathematical Society with MSC: 58F14, 34C25, 58F05, 58F30 Retrieve articles in all journals with MSC: 58F14, 34C25, 58F05, 58F30 Additional Information DOI: http://dx.doi.org/10.1090/S0002-9947-1989-0930075-2 PII: S 0002-9947(1989)0930075-2 Keywords: Period function, center, bifurcation, quadratic system, Hamiltonian system, linearization Article copyright: © Copyright 1989 American Mathematical Society
{"url":"http://www.ams.org/journals/tran/1989-312-02/S0002-9947-1989-0930075-2/home.html","timestamp":"2014-04-20T16:02:40Z","content_type":null,"content_length":"66876","record_id":"<urn:uuid:917e86f0-c9d0-4e24-8037-c2e63c8526a2>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
Sponsored by the SIAM Activity Group on Optimization (SIAG/OP) In honor of... Leonid Khachiyan (1952-2005) passed away Friday April 29 at the age of 52. Khachiyan was best known for his 1979 use of the ellipsoid algorithm, originally developed for convex programming, to give the first polynomial-time algorithm to solve linear programming problems. While the simplex algorithm solved linear programs well in practice, Khachiyan gave the first formal proof of an efficient algorithm in the worst case. Khachiyan's analysis led to broad applications of the ellipsoid algorithm as a method of obtaining complexity results for discrete optimization problems. Khachiyan and co-authors also developed polynomial-time algorithms for convex quadratic programming, studied the complexity of polynomial programming over the reals and the integers, and devised the method of inscribed ellipsoids for general convex programming. A brief remembrance for Leonid Khachiyan will be held at the start of the Award and Presentation session in the Auditorium on Tuesday at 1:45PM, May 17, 2005. About the Conference Stockhom City Conference Centre/Norra Latin Barnhusgatan 7B SE-111 23 Stockholm, Sweden See item "A" on the attached map (Norra Latin). http://www.siam.org/meetings/op05/hotelmap.pdf The Eighth SIAM Conference on Optimization will feature the latest research in theory, algorithms, and applications in optimization problems. In particular, it will emphasize large-scale problems and will feature important applications in networks, manufacturing, medicine, biology, finance, aeronautics, control, operations research, and other areas of science and engineering. The conference brings together mathematicians, operations researchers, computer scientists, engineers, and software developers; thus it provides an excellent opportunity for sharing ideas and problems among specialists and users of optimization in academia, government, and industry. Conference Themes The themes of the conference include, but are not limited to: • Large-scale nonlinear programming • Large-scale linear programming • Simulation-based optimization • Optimization in medicine and biology • Stochastic programming • Optimization in finance • Semidefinite programming • Computational optimization frameworks Organizing Committee (Co-chair), Anders Forsgren , Royal Institute of Technology (KTH) (Co-chair), Henry Wolkowicz , University of Waterloo Natalia Alexandrov , NASA Langley Research Center Kurt Anstreicher , The University of Iowa Robert M. Freund , MIT Florian Jarre , Heinrich Heine Universität Düsseldorf Franz Rendl , University of Klagenfurt Trond Steihaug , University of Bergen Michael J. Todd , Cornell University Philippe Toint , The University of Namur (FUNDP) Luis N. Vicente , Universidade de Coimbra Ya-xiang Yuan , Chinese Academy of Sciences SIAM would like to thank KTH, Royal Institute of Technology, Vetenskapsrådet, and the U. S. National Science Foundation for additional support for the conference.
{"url":"http://www.siam.org/meetings/op05/index.htm","timestamp":"2014-04-16T15:59:06Z","content_type":null,"content_length":"8459","record_id":"<urn:uuid:b8fb1ea0-82be-4fee-be93-8a83d1c2dfb2>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Whiteboard Tools Carroll diagram is a more complex sorting program. The screen shows a Carroll Diagram. The matrix is labelled 'red', 'not red', 'rectangles' and 'not rectangles'. There are 12 shapes to sort using two criteria. Counter is a very flexible counting program which allows you to set up either one or two counters to count in different ways. Function Machine is a simple program which allows users to choose functions to carry out on the numbers they type into an 'Input' box. When they press the 'Activate' button their chosen function is applied to their number and shown in the 'Output' box. Handy Graph is a simple program that draws block graphs. Pupils can enter their own graph title and label the axes. As they enter their data they see the block graph created. Sorting 2D shapes is a simple sorting program. The screen shows three boxes labelled 'all right angles', 'some right angles', 'no right angles', and eight different shapes. The object is to classify the shapes according to their properties and place each in the correct box. Venn Diagram is a simple sorting program. The screen shows a non-intersecting Venn diagram labelled 'triangles' and 'other shapes', and ten 2D shapes, five of them triangles. The object is to place each shape in the correct section of the Venn diagram. What's my angle? is a program that allows the user to practise skills of estimating and measuring angles. The introduction demonstrates the correct way to use a protractor to measure angles. Acute, obtuse and reflex angles are explained. Minimax is a program which helps you to understand the effects of large and small digits on the operations addition, subtraction, multiplication and division. Monty is a program based around the exploration of various 10 x 10 number grids. There are 9 different grids which can be selected and some of these can be used in different orientations on the screen. ‘Start’ begins a new game and/or changes the orientation of the grid. Play Train is a number puzzle program. A train is standing in a station waiting for passengers to board. The task is displayed on the screen, telling you how many passengers are needed and the number of carriages you have to fill. The displays also shows which numbers can be used to complete the operation. Take Part consists of three on-screen films which show shapes being divided into halves, thirds or quarters. The transitions of the shapes are made mainly through, rotation, reflection or shears. Toy shop is a game of strategy for two players (or groups of players). Players take turns to select a coin to pay towards the cost of the displayed toy. The winner is the player who lays down the coin to make up the exact cost of the toy. Measuring Scales is an interactive set of scales. Click on the weights to see the scales register. Explore Area and change the grid size, shape, and colour. Explore Coordinates: add axes, plot the position of coordinates and more. Counting On and Back –partition the beads up to 100 in tens and ones. Data Handling –enter the data then see the data displayed. Change the categories by overtyping. Division Grid – an interactive division grid. Fractions – an extremely useful tool to support the teaching of fractions, percentages and decimals ratio. Grouping – click on a counter shape, use the arrow keys above the numerals to set number then click on the numerals to show the counters. Line Graph - there are a variety of ways to enter and display the data. Measuring Cylinder – an interactive cylinder; set the scales and turn on the tap! Multiplication Grid – multiply by two digit or three digit numbers. Number Facts – addition and subtraction facts to twenty. Number Grid - check out the multiples and prime numbers on this interactive grid. Number Line - the interactive number allows teachers to change the intervals between numbers, set their own lines and extend between positive and negative numbers. Polygon - creates scaleable polygons up to 10 sides. A protractor and rule can also be overlaid to carry out virtual measurements. Ruler - the rule allows you to measure straight lines or shapes which you can draw using the pencil tool. Symmetry - a clickable grid allows you to create and investigate lines of symmetry and reflection. Tell the time - this 'live' analogue clock also includes a digital companion. The time can be advanced or retarded by incremental units from 1 minute to 1 hour. Thermometer - this fully scaleable thermometer can be used to read maximum, minimum, difference and change.
{"url":"http://www.bgfl.org/bgfl/custom/resources_ftp/client_ftp/teacher/maths/whiteboard_tools/index.htm","timestamp":"2014-04-20T03:18:57Z","content_type":null,"content_length":"24541","record_id":"<urn:uuid:642c2e20-f599-4ff3-a41e-0238d599b486>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
Exponential bounds and tails for additive random recursive sequences Ludger Rüschendorf, Eva-Maria Schopp Exponential bounds and tail estimates are derived for additive random recursive sequences, which typically arise as functionals of recursive structures, of random trees or in recursive algorithms. In particular they arise as parameters of divide and conquer type algorithms. We derive tail bounds from estimates of the Laplace transforms and of the moment sequences. For the proof we use some classical exponential bounds and some variants of the induction method. The paper generalizes results of R\"osler (% \citeyearNP{Roesler:91}, % \citeyearNP{Roesler:92}) and % \citeN{Neininger:05} on subgaussian tails to more general classes of additive random recursive sequences. It also gives sufficient conditions for tail bounds of the form $\exp(-a t^p)$ which are based on a characterization of \citeN{Kasahara:78}. Full Text: PDF PostScript
{"url":"http://www.dmtcs.org/dmtcs-ojs/index.php/dmtcs/article/viewArticle/662","timestamp":"2014-04-17T00:51:47Z","content_type":null,"content_length":"11656","record_id":"<urn:uuid:40b915a3-726c-41e9-b62a-8930145fabcb>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebraic Properties of Lossless Junctions Next | Prev | Top | JOS Index | JOS Pubs | JOS Home | Search Algebraic Properties of Lossless Junctions Comparing (38) and (42) shows that the elements of the normalized scattering matrix 42) are where scattering matrix 1, it is easy to verify by direct computation that eigenvalues are on the unit circle. Since all eigenvalues of a real, symmetric matrix are real [66], 32) similarity An elementary eigenvector analysis can be conducted using physical analogies. It is well known that a symmetric matrix has orthogonal eigenvectors. For the equal-impedance case, one eigenvector is always pressure waves at the junction, so the return scatter must be identical. This corresponds to the eigenvalue 1. For the -1 eigenvalues, a similar interpretation can be found: inject a unit pressure wave at all the branches but one, and ``pull out'' a pressure wave having magnitude For unequal impedances, a similar physical interpretation can be found for the eigenvectors. If we supply equal pressure waves to all branches at the junction, the reflected waves must be equal by symmetry, since forced to zero by construction and the return scatter at any branch is the negative of the incoming wave on that branch. In this way we can find The foregoing is an example of how physical intuition can help in finding algebraic properties of a given matrix in physical applications. Another property of the scattering matrix lossless scattering networks can be run in reverse, i.e., by changing the directions of all the delay lines and computing the junctions as dictated by the wave impedances, the network will compute its own inverse. If there are inputs and outputs, they must be interchanged. Subsections Next | Prev | Top | JOS Index | JOS Pubs | JOS Home | Search Download wgj.pdf
{"url":"https://ccrma.stanford.edu/~jos/wgj/Algebraic_Properties_Lossless_Junctions.html","timestamp":"2014-04-17T22:14:06Z","content_type":null,"content_length":"16485","record_id":"<urn:uuid:8b895ce7-255b-497f-85a6-f783e7b0258a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
Teaching Number Recognition 11-20: Preschool Math Activities Preschool Math Activities for Numbers 11-20 These math activities will help preschool children recognize the numbers 11-20 and count sets with up to twenty objects in them. Use the activities with the whole class, in small groups, or even as part of your math center. Counting and Number Recognition Activities for 11-20 Provide students with numbers lines that go up to 20 or with the first two rows of a hundred chart. Take a couple of minutes to practice counting to twenty each day. Have the children point with their fingers and look at each number as they say it. Make it fun by letting them use different voices when they count like whispering voices, loud voices, or squeaky voices. After you practice counting, say "Show me 17" and have all of the children find the number seventeen on the number line and point to it. Call out several numbers for the students to find, focusing on eleven to twenty, to help them learn to recognize these larger numbers. Make an eleven through twenty book. Give each child a small blank book with the numbers 11 to 20 written on the pages. Then give him or her a sheet of small stickers and have him put the appropriate numbers of sticker on each page. For an easy matching game, write the numbers from 11 to 20 on index cards and then use stickers to make sets of each number on another card. Spread the cards out face up and have the children work in groups of two or three to pair up the numeral with the set that matches. To make the game harder, have the children place all of the cards face down and turn over two at a time, looking for matches, like the game of Memory. Place small containers filled with different objects in groups of numbers from eleven to twenty in your math center for the children to practice counting. Some ideas for what to put in the containers are eleven buttons, fourteen cotton balls, and twenty pennies. Place sticky notes with the correct number on the bottom of each container, so that the children can check their counting. Ten Frame Activities Use a ten frame to help students visualize the numbers eleven to twenty. • Give each child a double ten frame organizer and about twenty counters. The first time you use the double ten frame show the children how to use the counters to make the numbers 11-20. For eleven the first ten frame will be full, with only one counter in the second. For twelve there will be two in the second ten frame, etc. Build each number together. Then call out numbers in random order for the children to make with their counters. • For another ten frame activity, give each pair of children a paper sack filled with cards or slips of paper with the numbers 11-20 written on them. Have each child draw a number and make it with counters on her double ten frame. Next the two children compare the numbers and decide which is larger. Then they can clear their ten frames and draw two more numbers until all of the numbers have been made. Counting Books Read lots of picture books that deal with counting to twenty. Choose books that have objects that can be easily counted to help your preschoolers learn to count these larger groups. Make the books available to students after you have shared them with the class so that they can practice counting in them. Here are a few that your preschool students will enjoy. • Counting Wildflowers by Bruce McMillan • Bears At The Beach by Niki Yektai • One Moose, Twenty Mice by Clare Beaton • Counting Is for the Birds by Frank Mazzola, Jr. With these preschool math activities, counting 11-20 will be a snap for your students!
{"url":"http://m.brighthubeducation.com/preschool-lesson-plans/108490-teaching-number-recognition-11-20-math-activities/","timestamp":"2014-04-16T18:57:40Z","content_type":null,"content_length":"11098","record_id":"<urn:uuid:1071028d-aa8d-4354-a96d-78f7e0d7ce6b>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
acking - Created: 15-OCT-2009. Last Modified: 19-MAY-2013 Bin Packing is a common type of problem. It comes in all kinds of shapes and forms. One example is a transporting company that ships containers oversea. The containers have a fixed capacity. The goods ships have different dimensions. Therefore the allocation of the goods to the containers is very important, because additional containers incur (high) additional cost. Many packages in many sizes. Ship all of them in as few bins as possible! Another example is to cut cable to the right size. Any remainder of a cable is waste and (depending on the cable) can be very costly. There are many other examples that requires a solution to the same problem: to reduce waste. The solutions This series of articles shows different approaches to the bin packing problem, in search of the "best" solution: The setup The example used here resembles the cable cutting problem. The example is, that there are many packages of different sizes, that have to be assigned to bins that have a fixed size. The goal is to minimize the number of bins. Or - to put it differently - to minimize waste. The bins have a maximum capacity of 100. Each package has a size of anywhere between 2 and 60. Here is the definition of the bins table. Since this series of articles discussed several different solutions, it has a column called Solution to indicate the solution that was used. CREATE TABLE dbo.bins (solution tinyint NOT NULL ,bin_no int NOT NULL ,space_left smallint NOT NULL ,CONSTRAINT PK_bins PRIMARY KEY (solution,bin_no) ) -- helper index CREATE INDEX IX_bins ON dbo.bins(space_left) Below is the definition of the packages table, and a script to populate it with 100,000 entries. The script that populates the packages table uses a numbers table. So the code snippet starts off with the creation and population of this helper table. CREATE TABLE dbo.numbers (n int PRIMARY KEY) INSERT INTO dbo.numbers VALUES (1) Declare @i int Set @i=1 While 100000 > @i INSERT INTO dbo.numbers SELECT n+@i FROM dbo.numbers Set @i=@i*2 CREATE TABLE dbo.packages (package_no int IDENTITY PRIMARY KEY ,size smallint NOT NULL ,bin_no int NULL ) -- helper index CREATE INDEX IX_packages_binno_size ON dbo.packages(bin_no,size) INSERT INTO dbo.packages (size) SELECT (ABS(CHECKSUM(NEWID()))%30) + (ABS(CHECKSUM(NEWID()))%30) + 2 FROM dbo.numbers WHERE n BETWEEN 1 AND 100000
{"url":"http://gertjans.home.xs4all.nl/sql/binpacking/intro.html","timestamp":"2014-04-21T14:40:16Z","content_type":null,"content_length":"5365","record_id":"<urn:uuid:196677ea-e48c-422f-ae67-85d9730fb0c9>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
NAG Library NAG Library Routine Document 1 Purpose E04LYF is an easy-to-use modified-Newton algorithm for finding a minimum of a function, $F\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)$ subject to fixed upper and lower bounds on the independent variables, ${x}_{1},{x}_{2},\dots ,{x}_{n}$ when first and second derivatives of $F$ are available. It is intended for functions which are continuous and which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities). 2 Specification SUBROUTINE E04LYF ( N, IBOUND, FUNCT2, HESS2, BL, BU, X, F, G, IW, LIW, W, LW, IUSER, RUSER, IFAIL) INTEGER N, IBOUND, IW(LIW), LIW, LW, IUSER(*), IFAIL REAL (KIND=nag_wp) BL(N), BU(N), X(N), F, G(N), W(LW), RUSER(*) EXTERNAL FUNCT2, HESS2 3 Description E04LYF is applicable to problems of the form: $Minimize⁡Fx1,x2,…,xn subject to lj≤xj≤uj, j=1,2,…,n$ when first and second derivatives of are available. Special provision is made for problems which actually have no bounds on the ${x}_{j}$, problems which have only non-negativity bounds and problems in which ${l}_{1}={l}_{2}=\cdots ={l}_{n}$ and ${u}_ {1}={u}_{2}=\cdots ={u}_{n}$. You must supply a subroutine to calculate the values of $F\left(x\right)$ and its first derivatives at any point $x$ and a subroutine to calculate the second From a starting point you supplied there is generated, on the basis of estimates of the curvature of $F\left(x\right)$, a sequence of feasible points which is intended to converge to a local minimum of the constrained function. 4 References Gill P E and Murray W (1976) Minimization subject to bounds on the variables NPL Report NAC 72 National Physical Laboratory 5 Parameters 1: N – INTEGERInput On entry: the number $n$ of independent variables. Constraint: ${\mathbf{N}}\ge 1$. 2: IBOUND – INTEGERInput On entry : indicates whether the facility for dealing with bounds of special forms is to be used. It must be set to one of the following values: If you are supplying all the ${l}_{j}$ and ${u}_{j}$ individually. If there are no bounds on any ${x}_{j}$. If all the bounds are of the form $0\le {x}_{j}$. If ${l}_{1}={l}_{2}=\cdots ={l}_{n}$ and ${u}_{1}={u}_{2}=\cdots ={u}_{n}$. Constraint: $0\le {\mathbf{IBOUND}}\le 3$. 3: FUNCT2 – SUBROUTINE, supplied by the user.External Procedure You must supply this routine to calculate the values of the function and its first derivatives $\frac{\partial F}{\partial {x}_{j}}$ at any point . It should be tested separately before being used in conjunction with E04LYF (see the E04 Chapter Introduction The specification of SUBROUTINE FUNCT2 ( N, XC, FC, GC, IUSER, RUSER) INTEGER N, IUSER(*) REAL (KIND=nag_wp) XC(N), FC, GC(N), RUSER(*) 1: N – INTEGERInput On entry: the number $n$ of variables. 2: XC(N) – REAL (KIND=nag_wp) arrayInput On entry: the point $x$ at which the function and its derivatives are required. 3: FC – REAL (KIND=nag_wp)Output On exit: the value of the function $F$ at the current point $x$. 4: GC(N) – REAL (KIND=nag_wp) arrayOutput On exit: ${\mathbf{GC}}\left(\mathit{j}\right)$ must be set to the value of the first derivative $\frac{\partial F}{\partial {x}_{\mathit{j}}}$ at the point $x$, for $\mathit{j}=1,2,\dots ,n$ 5: IUSER($*$) – INTEGER arrayUser Workspace 6: RUSER($*$) – REAL (KIND=nag_wp) arrayUser Workspace is called with the parameters as supplied to E04LYF. You are free to use the arrays to supply information to as an alternative to using COMMON global variables. must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which E04LYF is called. Parameters denoted as be changed by this procedure. 4: HESS2 – SUBROUTINE, supplied by the user.External Procedure You must supply this routine to evaluate the elements ${H}_{ij}=\frac{{\partial }^{2}F}{\partial {x}_{i}\partial {x}_{j}}$ of the matrix of second derivatives of at any point . It should be tested separately before being used in conjunction with E04LYF (see the E04 Chapter Introduction The specification of SUBROUTINE HESS2 ( N, XC, HESLC, LH, HESDC, IUSER, RUSER) INTEGER N, LH, IUSER(*) REAL (KIND=nag_wp) XC(N), HESLC(LH), HESDC(N), RUSER(*) 1: N – INTEGERInput On entry: the number $n$ of variables. 2: XC(N) – REAL (KIND=nag_wp) arrayInput On entry: the point $x$ at which the derivatives are required. 3: HESLC(LH) – REAL (KIND=nag_wp) arrayOutput On exit must place the strict lower triangle of the second derivative matrix , stored by rows, i.e., set ${\mathbf{HESLC}}\left(\left(\mathit{i}-1\right)\left(\mathit{i}-2\right)/2+\mathit{j}\right)=\frac{{\partial }^{2}F}{\partial {x}_{\mathit{i}}\partial {x}_{\mathit{j}}}$ , for $\mathit{i}=2,3,\dots ,n$ $\mathit{j}=1,2,\dots ,\mathit{i}-1$ . (The upper triangle is not required because the matrix is symmetric.) 4: LH – INTEGERInput On entry : the length of the array 5: HESDC(N) – REAL (KIND=nag_wp) arrayOutput On exit: must contain the diagonal elements of the second derivative matrix, i.e., set ${\mathbf{HESDC}}\left(\mathit{j}\right)=\frac{{\partial }^{2}F}{\partial {x}_{\mathit{j}}^{2}}$, for $\ mathit{j}=1,2,\dots ,n$. 6: IUSER($*$) – INTEGER arrayUser Workspace 7: RUSER($*$) – REAL (KIND=nag_wp) arrayUser Workspace is called with the parameters as supplied to E04LYF. You are free to use the arrays to supply information to as an alternative to using COMMON global variables. must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which E04LYF is called. Parameters denoted as be changed by this procedure. 5: BL(N) – REAL (KIND=nag_wp) arrayInput/Output On entry : the lower bounds is set to must be set to , for $\mathit{j}=1,2,\dots ,n$ . (If a lower bound is not specified for any , the corresponding should be set to is set to , you must set ; E04LYF will then set the remaining elements of equal to On exit: the lower bounds actually used by E04LYF. 6: BU(N) – REAL (KIND=nag_wp) arrayInput/Output On entry : the upper bounds is set to must be set to , for $\mathit{j}=1,2,\dots ,n$ . (If an upper bound is not specified for any the corresponding should be set to is set to , you must set ; E04LYF will then set the remaining elements of equal to On exit: the upper bounds actually used by E04LYF. 7: X(N) – REAL (KIND=nag_wp) arrayInput/Output On entry: ${\mathbf{X}}\left(\mathit{j}\right)$ must be set to a guess at the $\mathit{j}$th component of the position of the minimum, for $\mathit{j}=1,2,\dots ,n$. The routine checks the gradient and the Hessian matrix at the starting point, and is more likely to detect any error in your programming if the initial ${\mathbf{X}}\left(j\right)$ are nonzero and mutually distinct. On exit: the lowest point found during the calculations. Thus, if ${\mathbf{IFAIL}}={\mathbf{0}}$ on exit, ${\mathbf{X}}\left(j\right)$ is the $j$th component of the position of the minimum. 8: F – REAL (KIND=nag_wp)Output On exit : the value of corresponding to the final point stored in 9: G(N) – REAL (KIND=nag_wp) arrayOutput On exit : the value of $\frac{\partial F}{\partial {x}_{\mathit{j}}}$ corresponding to the final point stored in , for $\mathit{j}=1,2,\dots ,n$ ; the value of for variables not on a bound should normally be close to zero. 10: IW(LIW) – INTEGER arrayWorkspace 11: LIW – INTEGERInput On entry : the dimension of the array as declared in the (sub)program from which E04LYF is called. Constraint: ${\mathbf{LIW}}\ge {\mathbf{N}}+2$. 12: W(LW) – REAL (KIND=nag_wp) arrayWorkspace 13: LW – INTEGERInput On entry : the dimension of the array as declared in the (sub)program from which E04LYF is called. Constraint: ${\mathbf{LW}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({\mathbf{N}}×\left({\mathbf{N}}+7\right),10\right)$. 14: IUSER($*$) – INTEGER arrayUser Workspace 15: RUSER($*$) – REAL (KIND=nag_wp) arrayUser Workspace are not used by E04LYF, but are passed directly to and may be used to pass information to these routines as an alternative to using COMMON global variables. 16: IFAIL – INTEGERInput/Output On entry must be set to $-1\text{ or }1$ . If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details. For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1\text{ or }1$ is recommended. If the output of error messages is undesirable, then the value is recommended. Otherwise, because for this routine the values of the output parameters may be useful even if ${\mathbf{IFAIL}}e {\mathbf{0}}$ on exit, the recommended value is When the value $-\mathbf{1}\text{ or }1$ is used it is essential to test the value of IFAIL on exit. On exit unless the routine detects an error or a warning has been flagged (see Section 6 6 Error Indicators and Warnings If on entry , explanatory error messages are output on the current error message unit (as defined by Note: E04LYF may return useful information for one or more of the following detected errors or warnings. Errors or warnings detected by the routine: On entry, ${\mathbf{N}}<1$, or ${\mathbf{IBOUND}}<0$, or ${\mathbf{IBOUND}}>3$, or ${\mathbf{IBOUND}}=0$ and ${\mathbf{BL}}\left(j\right)>{\mathbf{BU}}\left(j\right)$ for some $j$, or ${\mathbf{IBOUND}}=3$ and ${\mathbf{BL}}\left(1\right)>{\mathbf{BU}}\left(1\right)$, or ${\mathbf{LIW}}<{\mathbf{N}}+2$, or ${\mathbf{LW}}<\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(10,{\mathbf{N}}×\left({\mathbf{N}}+7\right)\right)$. There have been function evaluations, yet the algorithm does not seem to be converging. The calculations can be restarted from the final point held in . The error may also indicate that has no minimum. The conditions for a minimum have not all been met but a lower point could not be found and the algorithm has failed. Not used. (This value of the parameter is included so as to make the significance of ${\mathbf{IFAIL}}={\mathbf{5}}$ etc. consistent in the easy-to-use routines.) There is some doubt about whether the point found by E04LYF is a minimum. The degree of confidence in the result decreases as increases. Thus, when it is probable that the final gives a good estimate of the position of a minimum, but when it is very unlikely that the routine has found a minimum. In the search for a minimum, the modulus of one of the variables has become very large $\left(\sim {10}^{6}\right)$ . This indicates that there is a mistake in user-supplied subroutines , that your problem has no finite solution, or that the problem needs rescaling (see Section 8 It is very likely that you have made an error in forming the gradient. It is very likely that you have made an error in forming the second derivatives. If you are dissatisfied with the result (e.g., because ${\mathbf{IFAIL}}={\mathbf{5}}$, ${\mathbf{6}}$, ${\mathbf{7}}$ or ${\mathbf{8}}$), it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure. 7 Accuracy When a successful exit is made then, for a computer with a mantissa of $t$ decimals, one would expect to get about $t/2-1$ decimals accuracy in $x$, and about $t-1$ decimals accuracy in $F$, provided the problem is reasonably well scaled. The number of iterations required depends on the number of variables, the behaviour of and the distance of the starting point from the solution. The number of operations performed in an iteration of E04LYF is roughly proportional to . In addition, each iteration makes one call of and at least one call of . So, unless , the gradient vector and the matrix of second derivatives can be evaluated very quickly, the run time will be dominated by the time spent in user-supplied subroutines Ideally the problem should be scaled so that at the solution the value of $F\left(x\right)$ and the corresponding values of ${x}_{1},{x}_{2},\dots {x}_{n}$ are each in the range $\left(-1,+1\right)$, and so that at points a unit distance away from the solution, $F$ is approximately a unit value greater than at the minimum. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that E04LYF will take less computer time. 9 Example A program to minimize $F= x1+10x2 2+5⁢ x3-x4 2+ x2-2x3 4+10⁢ x1-x4 4$ subject to $1 ≤ x1 ≤ 3 -2 ≤ x2 ≤ 0 1 ≤ x4 ≤ 3.$ starting from the initial guess . (In practice, it is worth trying to make user-supplied subroutines as efficient as possible. This has not been done in the example program for reasons of clarity.) 9.1 Program Text 9.2 Program Data 9.3 Program Results
{"url":"http://www.nag.com/numeric/FL/nagdoc_fl24/html/E04/e04lyf.html","timestamp":"2014-04-19T22:58:46Z","content_type":null,"content_length":"46110","record_id":"<urn:uuid:1629d8a3-2d91-4145-a02c-fb699cdf4301>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00442-ip-10-147-4-33.ec2.internal.warc.gz"}
Squares and circles. Problem was just posed by ganesh. ganesh wrote: Find the ratio of the areas of the incircle and circumcircle of a square. I know there is a lot of ways to do this but supposing you did not have any idea how to solve this. Geogebra to the rescue! 1) Make a 4 sided regular polygon. ( a square ) 2) Use the 3 point circle option to draw the outer circle using 3 of the vertices of the square. 3) Draw the diagonal line segments to get the center of the square. 4) Put a point F on the square. Make that line segment from the center to F parallel with the x axis. 5) use the point and radius circle option to make a point from the center of the square to f. 6) Get areas of both circles. 7) Take the ratio: 8) use one of the free vertices to expand the inner circle. Find the new areas. What do you deduce? Looks like the ratio of the areas is 1 / 2. Not rigorous but definitely enough to go to war with!
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=16247","timestamp":"2014-04-21T15:00:37Z","content_type":null,"content_length":"12651","record_id":"<urn:uuid:5255b649-e76e-493a-946d-af32f32e2172>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
Winning Solution Jennifer Burrows, Amanda Taplett Greenwich Academy, Greenwich, CT When you connect the consecutive midpoints of a quadrilateral, another quadrilateral is formed inside of it. We will call the original figure the mother figure and this new figure the daughter figure. To determine what the daughter is, you must examine the diagonals of the mother. It is possible to use the diagonals of the mother because the diagonals are parallel to the sides of the The type of quadrilateral that is formed can either be a rhombus, a rectangle, or a square, but it will always be a parallelogram. This is because when the midpoints are connected to form the sides of the daughter figure, each side of the mother figure is bisected. Each newly formed side will be parallel to a diagonal of the mother. Two of the newly formed sides are parallel to the same diagonal and therefore are parallel to each other. Along with the other two sides of the daughter that are parallel to the other diagonal of the mother, a parallelogram is formed. Whether the daughter quadrilateral is a regular parallelogram or a rhombus, rectangle, or square is dependent on the diagonals of the mother quadrilateral. If the diagonals of the mother are congruent, then the daughter will be a rhombus. If the diagonals are perpendicular, the product will be a rectangle. Finally, if the diagonals are both equal and perpendicular, the daughter will be a square. If the mother's diagonals are neither perpendicular nor equal, the daughter is an ordinary parallelogram. The midpoints, when joined, create four triangles. These triangles determine the diagonals of the figure. The diagonals are always parallel to the sides of the daughter figure. This is because the diagonals make similar triangles to those of the ones created by the midpoints. For example, if the triangles created by the midpoints are isosceles, the triangles formed by the diagonals will be as well. This is why, if we know about the diagonals of the mother figure, we are able to discover the kind of quadrilateral the daughter will be. If the diagonal forms the altitude of the triangle, then the daughter is a square. If it forms a median, then it is a rhombus. If the diagonal forms both the median and the altitude in the same place, then the daughter will be a Therefore, an isosceles trapezoid or a rectangle would have a rhombus for a daughter. A rhombus or kite's daughter would be a rectangle. A square's daughter would always be a square. By the diagonals, the daughters of a quadrilateral are able to be calculated. In conclusion, we discovered that the daughter of the mother figure formed similar triangles which enabled us to determine the characteristics of the daughter. With this information, we are able to decide what kind of quadrilateral the daughter will be and save ourselves a great deal of work. COMMENTS: Jennifer and Amanda stated early on that the resulting quadrilateral was always a parallelogram, and explained why. They could have talked about the theorem that says the same thing, but it's certainly not wrong the way it is. Where they went beyond the second submission, however, is that they explored the special cases, and studied what it would be if it were a square, parallelogram, rhombus, etc., and talked about why. Very thorough! Honorable Mention Hello, our names are Susie Taylor and Colleen Cunningham. We are sophomores, in Mr. Detzel's Honors Geometry class, at Shaler High School. Here is our solution to the Problem of the Month. In a quadrilateral, (convex or nonconvex), when you connect the midpoints (consecutively), you get a shape in the middle. Using the Midline Theorem, the segment between the midpoints of two sides of a triangle is parallel to the third side and half as long. To make this theorem work, you must draw in one diagonal of the quadrilateral. This would form two triangles. Thus, since both segments are parallel to the diagonal, the segments are parallel to each other by Transitivity. The shape that you would end up with will always be a parallelogram. We found this to be true with specific types of quadrilaterals: squares, Rectangles, Trapezoids, Parallelograms, Rhombuses, Kites, Diamonds, Arrows, and other different quadrilaterals. Thanks for your time, and we would really appreciate any suggestions you may have.
{"url":"http://mathforum.org/pom/pomsol2.html","timestamp":"2014-04-17T01:38:53Z","content_type":null,"content_length":"6975","record_id":"<urn:uuid:00d9e965-da2c-42fe-bac9-4bcc80e873a5>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
Capacitors in Series and in Parallel In this article, we will go over how capacitors add in series and how they add in parallel. We will go over the mathematical formulas for calculating series and parallel capacitance so that we can compute the total capacitance values of actual circuits. Capacitors in Series Capacitors in series are capacitors that are placed back-to-back with the negative electrode of one capacitor connecting to the positive electrode of the other. Below is a circuit where 3 capacitors are placed in series. You can see the capacitors are in series because they are back-to-back against each other, and each negative electrode is connected to the successive capacitor's positive electrode. The best way to think of a series circuit is that if current flows through the circuit, the current can only take one path. Formula for Adding Capacitors in Series The formula to calculate the total series capacitance is: So to calculate the total capacitance of the circuit above, the total capacitance, CT would be: So using the above formula, the total capacitance is 0.75µF. Note- When capacitors are in series, the total capacitance value is always less than the smallest capacitance of the circuit. In other words, when capacitors are in series, the total capicitance decreases. It's always less than any of the values of the capacitors in the circuit. The capacitance doesn't increase in series; it decreases. Capacitors in Parallel Capacitors in parallel are capacitors that are connected with the two electrodes in a common plane, meaning that the positive electrodes of the capacitors are all connected together and the negative electrodes of the capacitors are connected together. Below is a circuit where 3 capacitors are in parallel: You can see that the capacitors are in parallel because all the positive electrodes are connected (common) together and all the negative electrodes are connected (common) together. The best way to think about parallel circuits is by thinking of the path that current can take. When current is travelling through a parallel circuit, the current can take various paths through the circuits, such as go through any of the branches of the capacitors. In series, this is not the case. Current can only take one path. Formula for Adding Capacitors in Parallel The formula to calculate the total parallel capacitance is: So to calculate the total capacitance of the circuit above, the total capacitance, CT would be: So using the above formula, the total capacitance is 13µF. In parallel, capacitors simply add together. So adding up the total capacitance in parallel is much simpler than adding them in series. In fact, since capacitors simply add in parallel, in many circuits, capacitors are placed in parallel to increase the capacitance. For example, if a circuit designer wants 0.44µF in a certain part of the circuit, he may not have a 0.44µF capacitor or one may not exist. So what he can do and what is done many times in professional circuits is that 2 0.22µF capacitors would be placed in parallel to give the equivalent 0.44µF capacitance. This is done often in circuits. Capacitor Circuit in Series and In Parallel We'll now do a capacitor circuit in which capacitors are both in series and in parallel in the same circuit. Below is a circuit which has capacitors in both series and parallel: So how do we add them to find the total capacitance value? First, we can start by finding the series capacitance of the capacitors in series. In the first branch, containing the 4µF and 2µF capacitors, the series capacitance is 1.33µF. And in the second branch, containing the 3µF and 1µF capaictors, the series capacitance is 0.75µF. Now in total, the circuit has 3 capacitances in parallel, 1.33µF, 0.75µF, and 6µF. Now, these 3 values just simply add together for a total capacitance of 8.08µF. If you want to test the above series and parallel connections out practically, get 2 1µF or whatever capacitors you have, but let them be of the same value. In this example, I'll stick with 1µF capacitors. Now take the capacitors and place them in series. Now take a multimeter and place in the capacitance meter setting and place the probes over the positive electrode of the first capacitor and the negative electrode of the second capacitor. You should read just about 0.5µF, which is half the value. This proves that capacitance is lower when capacitors are connected in series. Now place the capacitors in parallel. Take the multimeter probes and place one end on the positive side and one end on the negative. You should now read 2µF, or double the value, because capacitors in parallel add together. This is a practical, real-life test you can do to show how capacitors work. Note-The above formulas for measuring the total capacitance works for all types of capacitors, including polar and nonpolar capacitors. However, for polar capacitors, such as electrolytic and tantalum, the capacitors must be oriented in the circuit in the correct way. Polar capacitors, in series, must be placed so that the negative electrode of the first capacitor connects to the positive electrode of the second capacitor, and so forth for all capacitors in series. In parallel, the capacitor electrodes must all be common, all positive electrodes connect together on a common plane and all negative electrodes connect together on a common plane, which is normally ground. For nonpolar capacitors, including ceramic capacitors, orientation does not matter, since the capacitor isn't Related Resources Series and Parallel Capacitor Calculator Capacitor Equations Capacitance Calculator Capacitor Charge Calculator Capacitor Voltage Calculator Capacitor Impedance Calculator Capacitor Charging Calculator Capacitor Discharge Calculator Capacitor Energy Calculator Capacitor Voltage Divider Calculator How to Calculate the Current Through a Capacitor How to Calculate the Voltage Across a Capacitor
{"url":"http://www.learningaboutelectronics.com/Articles/Capacitors-in-series-and-in-parallel.php","timestamp":"2014-04-16T21:52:30Z","content_type":null,"content_length":"14566","record_id":"<urn:uuid:654a8e04-974d-4ccc-a76e-01985181d1a2>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
Histogram in Excel First, the Data Analysis "toolpak" must be installed. To do this, pull down the Tools menu, and choose Add-Ins. You need to have a column of numbers in the spreadsheet that you wish to create the histogram from, AND you need to have a column of intervals or "Bin" to be the upper boundary category labels on the X-axis of the histogram. See example of spreadsheet below: Pull Down the Tools Menu and Choose Data Analysis, and then choose Histogram and click OK. Enter the Input Range of the data you want (In the example above it would be C5:C29) and enter the Bin Range (E5:E14 in example above). Choose whether you want the output in a new worksheet ply, or in a defined output range on the same spreadsheet. If you choose in the example above to have the output range H5, and you clicked OK, the spreadsheet would look like this: The next step is to make a bar chart of the Frequency column (I6:I15 in the example above). Block the frequency range, click on the graph Wizard, and choose Column Graph, click on Finish. Delete the Series legend, right click on the edge of the graph and choose Source Data , and enter the Bin frequencies (H6:H15) for the X-Axis Category labels. Notice the labels have been manually altered to represnet a range (54-56) instead of just the upper boundary (56). Right click on any of the bars and choose Format Data Series . Choose the Options tab, reduce the Gap Width to zero and click OK. Dress up the graph by right clicking on the edge of the graph and choosing Chart Options. Enter a complete descriptive title with data source, perhaps data labels, and axes labels. You may also right click and format the color of the bars and background. The completed Histogram should look something like this:
{"url":"http://www.uvm.edu/~agri99/spring2004/Histogram_in_Excel.html","timestamp":"2014-04-19T10:33:49Z","content_type":null,"content_length":"2776","record_id":"<urn:uuid:cad844fd-8b4c-43b5-9afb-a6942a84a768>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
East Camden, NJ Prealgebra Tutor Find an East Camden, NJ Prealgebra Tutor ...So I try to break down each concept, mechanism and problem down to its bare parts and build an understanding so that each concept, mechanism and problem can be solved logically on their own without memorization. I am a graduate student getting my Ph.D. in organic chemistry. I have taught both organic chemistry 1 and 2 recitation and lab. 6 Subjects: including prealgebra, chemistry, algebra 1, algebra 2 ...The Detailed Version of the Details: You might be asking yourself, "Why would a mechanical engineer be a good tutor for me or my child?" While I was working as a writing tutor, I was being trained to think critically and solve complex problems using calculus and differential equations. I bring... 37 Subjects: including prealgebra, reading, English, physics My name is Sarah and I've been teaching 7th-12th grade English for five years. For nine years I have also been a tutor for all grade levels in math, reading, writing and study skills. I enjoy tutoring elementary students just as much as middle and high school students. 24 Subjects: including prealgebra, reading, grammar, geometry Hello, my name is Amanze I graduated from Temple University with a B.S in Biology, and I offer tutoring in biology, chemistry and math. I currently work as a development scientist developmenting analytical test for cancer detection. I have experiencing tutoring college level math and chemistry and... 6 Subjects: including prealgebra, chemistry, geometry, algebra 1 ...Whether it is bow arm technique or proper posture or learning to read different clefs, the foundation of musical training starts at a very early age. I enjoy working with children of all ages and have spent a great amount of time working with children. Over the past three summers, I have been a... 3 Subjects: including prealgebra, algebra 1, violin Related East Camden, NJ Tutors East Camden, NJ Accounting Tutors East Camden, NJ ACT Tutors East Camden, NJ Algebra Tutors East Camden, NJ Algebra 2 Tutors East Camden, NJ Calculus Tutors East Camden, NJ Geometry Tutors East Camden, NJ Math Tutors East Camden, NJ Prealgebra Tutors East Camden, NJ Precalculus Tutors East Camden, NJ SAT Tutors East Camden, NJ SAT Math Tutors East Camden, NJ Science Tutors East Camden, NJ Statistics Tutors East Camden, NJ Trigonometry Tutors Nearby Cities With prealgebra Tutor Ashland, NJ prealgebra Tutors Briarcliff, PA prealgebra Tutors Camden, NJ prealgebra Tutors Center City, PA prealgebra Tutors East Haddonfield, NJ prealgebra Tutors Eastwick, PA prealgebra Tutors Edgewater Park, NJ prealgebra Tutors Ellisburg, NJ prealgebra Tutors Erlton, NJ prealgebra Tutors Middle City East, PA prealgebra Tutors Middle City West, PA prealgebra Tutors South Camden, NJ prealgebra Tutors West Collingswood Heights, NJ prealgebra Tutors West Collingswood, NJ prealgebra Tutors Westmont, NJ prealgebra Tutors
{"url":"http://www.purplemath.com/east_camden_nj_prealgebra_tutors.php","timestamp":"2014-04-20T07:02:02Z","content_type":null,"content_length":"24557","record_id":"<urn:uuid:910d6ec4-4710-46ec-bb8e-26c78fe85c08>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
Andy Nguyen tanonev at stanford dot edu personal page Research interests I'm interested in the general application of optimization techniques to computational geometry problems. Recent publications J. Solomon, A. Nguyen, A. Butscher, M. Ben-Chen, and L. Guibas, Soft maps between surfaces, Proc. Symposium on Geometry Processing (2012). A. Nguyen, M. Ben-Chen, K. Welnicka, Y. Ye, and L. Guibas, An Optimization Approach to Improving Collections of Shape Maps, Computer Graphics Forum (Proceedings of SGP 2011), 2011, 30(5): D. Chen, A. Driemel, L. Guibas, A. Nguyen, and C. Wenk, Approximate Map Matching with respect to the Fréchet Distance, In Proceedings of the 13th Workshop on Algorithm Engineering and Experiments (ALENEX '11).
{"url":"http://graphics.stanford.edu/projects/lgl/person.php?id=tanonev","timestamp":"2014-04-20T00:59:54Z","content_type":null,"content_length":"5271","record_id":"<urn:uuid:8b73ad30-f793-4ebb-a72c-9174909b0d8d>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
Bodega Bay Math Tutor ...I finished an MS in chemistry in June of 2010, and have been working as a Teacher's assistant and tutor since then. During this program I taught chem lab to groups of 15 to 20 students, and led review sessions to over 70 students at a time. I now specialize in tutoring inorganic, organic, and biochemistry at the university level, as well as high school and middle school level chemistry. 50 Subjects: including algebra 2, linear algebra, algebra 1, differential equations ...I started tutoring my friends more formally about 4 years ago and have experience in tutoring math from first year algebra and physics, all the way to advanced calculus and college level physics classes. As someone who has very recently been through the California public school system, I know wh... 11 Subjects: including algebra 1, algebra 2, calculus, geometry ...I am California Multiple Subjects Credentialed and have also provided instruction as an Independent Study Home School Teacher for over 10 years.I have taught many students Algebra in small groups and individually. I am confident in my abilities to help students with a variety of learning styles ... 17 Subjects: including algebra 2, English, geometry, algebra 1 ...I then received my Masters in Applied Mathematics from Colorado State University. My GPA has always been high as I am a dedicated student and loved the material! After graduation I went into telecommunications but, after 15 years doing that, I knew I wanted to talk to people and not computers which is when I went into education. 26 Subjects: including differential equations, discrete math, chess, elementary math ...This can be a fun subject when the student has confidence about the basics. Understanding Precalculus and Trigonometry concepts need not be daunting. They can be explained in simple terms and there are ways to remember key concepts. 18 Subjects: including statistics, algebra 1, algebra 2, American history
{"url":"http://www.purplemath.com/Bodega_Bay_Math_tutors.php","timestamp":"2014-04-16T16:48:00Z","content_type":null,"content_length":"23829","record_id":"<urn:uuid:d283d540-5e0d-41c1-b112-731b72ba652d>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
Locally integrable function From Encyclopedia of Mathematics at a point A function that is integrable in some sense or other in a neighbourhood of [2]) there is a real-valued function [1] S. Saks, "Theory of the integral" , Hafner (1952) (Translated from French) [2] G.P. Tolstov, "On the curvilinear and iterated integral" Trudy Mat. Inst. Steklov. , 35 (1950) pp. 1–101 (In Russian) How to Cite This Entry: Locally integrable function. I.A. Vinogradova (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Locally_integrable_function&oldid=16313 This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
{"url":"http://www.encyclopediaofmath.org/index.php/Locally_integrable_function","timestamp":"2014-04-18T05:45:10Z","content_type":null,"content_length":"17001","record_id":"<urn:uuid:bb9d19d4-b609-41a8-ac77-b4c4954b454f>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
anipulating Programs Analyzing Heap Manipulating Programs Verifying properties of programs manipulating the heap is an undecidable problem in general. In this talk, we will look at a few automata-theoretic approaches to analyzing properties of programs manipulating commonly used dynamically alloted data structures. Specifically, we will discuss regular model checking as a technique for reasoning about parameterized list-like data structures. We will also look at counter automata based techniques for reasoning about lists and tree-like data structures. Finally, we will look at automata based techniques for verifying that methods operating on certain classes of data structures maintain some invariant of the data structure.
{"url":"http://www.imsc.res.in/~kamal/satlects/supratik.html","timestamp":"2014-04-16T20:10:10Z","content_type":null,"content_length":"1646","record_id":"<urn:uuid:e3c500ba-6e16-489b-8c3f-1541427ce44c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
TI Program May 10th 2008, 03:56 AM TI Program This is driving me mad... Can anyone tell me how to do the following equation on a calculator? 1 / [1 + e-(1*0.5+1*0.4-1*0.8)] May 10th 2008, 04:00 AM May 10th 2008, 04:03 AM May 10th 2008, 04:13 AM Oh, I see e has a value approaching 2.7. Actually it's e^(...). It's the exponential function : Exponential function - Wikipedia, the free encyclopedia So if you want to enter the function in your calculator, it should be : May 10th 2008, 04:17 AM Oh, I see e has a value approaching 2.7. Actually it's e^(...). It's the exponential function : Exponential function - Wikipedia, the free encyclopedia So if you want to enter the function in your calculator, it should be : I know the result should be 0.5250 according to the textbook I am using. But doing the above in my calculator returns 0.5 for some reason. Another similar example that is given is: This should return 0.8808 but also returns 0.5 on my calculator. If it helps the calculator is a Sharp EL-530L. May 10th 2008, 04:26 AM I know the result should be 0.5250 according to the textbook I am using. But doing the above in my calculator returns 0.5 for some reason. Another similar example that is given is: This should return 0.8808 but also returns 0.5 on my calculator. If it helps the calculator is a Sharp EL-530L. I don't know this calculator, so I don't know what e represents for them... Try typing : (because e ~ 2,71828183) May 10th 2008, 04:40 AM Thanks for the help. I have tried a diffferent calculator and it returns the correct results. Not sure what is the matter with mine but I will get a replacement for the exam :)
{"url":"http://mathhelpforum.com/calculators/37846-ti-program-print.html","timestamp":"2014-04-20T11:17:58Z","content_type":null,"content_length":"9801","record_id":"<urn:uuid:d3c00dfb-550b-447a-9a91-18e36817e04a>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Consistency strength order pax0@seznam.cz pax0 at seznam.cz Wed Apr 30 13:22:48 EDT 2008 Can someone of Dear FOMers make the notion of "Consistency strength order" sufficiently clear to me? For first order theories S, T (in the same language) we define that T has higher consistency strength then S: S \le_{Cons} T iff P_S \subseteq P_T, where P_S and P_T are Pi^0_1 consequences of S or T, respectively. Second definition could be S \le_{Cons} T iff U |-- Con(T) --> Con(S), where U is some weak base theory. What is the relation between these two definitions, and how should the right definition read? Thank you, J.P. More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2008-April/012843.html","timestamp":"2014-04-18T10:36:36Z","content_type":null,"content_length":"2777","record_id":"<urn:uuid:bd36fa6c-72e4-4432-8866-3377e65f8fa1>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
Normally distributed random variables September 11th 2011, 03:05 AM #1 Junior Member Apr 2010 Normally distributed random variables Not sure about this question, i think the use of letters instead of numbers confuses me. If anyone could provide me with some help, it would be much appreciated X is a normally distributed random variable. If a<µ<b and Pr(X>a)=p and Pr(X<b)=q, find: a) Pr(a<X<b) b) Pr(X<a/X<b) Re: Normally distributed random variables For part a), $\mathcal{P}(X\le a)=1-\mathcal{P}(X>a)$. Do you know how to rewrite $\mathcal{P}(a<X<b)~?$ For part b, is that notation suppose to be $\mathcal{P}(X<a|X<B)~?$ OR you you mean $\mathcal{P}(X\color{red}>a|X<B)~?$ Re: Normally distributed random variables [QUOTE=Plato;679274]For part a), $\mathcal{P}(X\le a)=1-\mathcal{P}(X>a)$. Do you know how to rewrite $\mathcal{P}(a<X<b)~?$ no I can't think of it, I might have learned it, but I just can't remember it at the moment For part b, is that notation suppose to be $\mathcal{P}(X<a|X<B)~?$ OR you you mean $\mathcal{P}(X\color{red}>a|X<B)~?$[/QUOTE sorry about the confusion, i meant the first one Re: Normally distributed random variables For part a), $\mathcal{P}(X\le a)=1-\mathcal{P}(X>a)$. Do you know how to rewrite $\mathcal{P}(a<X<b)~?$ no I can't think of it, I might have learned it, but I just can't remember it at the moment For part b, is that notation suppose to be $\mathcal{P}(X<a|X<B)~?$ OR you you mean $\mathcal{P}(X\color{red}>a|X<B)~?$ sorry about the confusion, i meant the first one In that case note that $(-\infty,a]\cap(-\infty,b]=(-\infty,a]$ September 11th 2011, 03:22 AM #2 September 11th 2011, 04:13 AM #3 Junior Member Apr 2010 September 11th 2011, 04:19 AM #4
{"url":"http://mathhelpforum.com/statistics/187757-normally-distributed-random-variables.html","timestamp":"2014-04-18T21:10:38Z","content_type":null,"content_length":"44460","record_id":"<urn:uuid:e4df0fa2-bb45-4579-8b3b-36eb42b1ca4f>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
s = std(X) s = std(X,flag) s = std(X,flag,dim) There are two common textbook definitions for the standard deviation s of a data vector X. and n is the number of elements in the sample. The two forms of the equation differ only in n – 1 versus n in the divisor. s = std(X), where X is a vector, returns the standard deviation using (1) above. The result s is the square root of an unbiased estimator of the variance of the population from which X is drawn, as long as X consists of independent, identically distributed samples. If X is a matrix, std(X) returns a row vector containing the standard deviation of the elements of each column of X. If X is a multidimensional array, std(X) is the standard deviation of the elements along the first nonsingleton dimension of X. s = std(X,flag) for flag = 0, is the same as std(X). For flag = 1, std(X,1) returns the standard deviation using (2) above, producing the second moment of the set of values about their mean. s = std(X,flag,dim) computes the standard deviations along the dimension of X specified by scalar dim. Set flag to 0 to normalize Y by n-1; set flag to 1 to normalize by n. The input array, X, must be of type double or single for all syntaxes. For matrix X X = s = std(X,0,1) s = 4.2426 7.0711 9.1924 s = std(X,0,2) s = See Also corrcoef | cov | mean | median | var
{"url":"http://www.mathworks.nl/help/matlab/ref/std.html?nocookie=true","timestamp":"2014-04-24T05:25:37Z","content_type":null,"content_length":"37924","record_id":"<urn:uuid:3c553294-47d2-4d2b-9961-47ed4f812cfa>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
Arc of a Circle Date: 08/16/2003 at 21:37:31 From: Frank Anderson Subject: Arc of a circle I am planning a model railroad, using Lionel section track. Lionel's wide radius curve track forms a 6' diameter circle when the 16 individual pieces are assembled. I would like a formula to calculate the distance between parallel lines if I use one right-hand curve section to turn away from a straight line followed by one left-hand curve section to return to a straight line, running parallel to the original line. I understand there are 360 degrees in a circle, and each of the 16 pieces of curve track forming the 6' diameter circle will represent a 22.5 degree turn. My goal is to calculate the distance between straight lines after I turn 22.5 degrees to the right of the first line followed by a 22.5 degree turn back to the left. Date: 08/16/2003 at 22:40:37 From: Doctor Peterson Subject: Re: Arc of a circle Hi, Frank. You can solve this and related problems using formulas found in our Segments of circles But that takes a little ingenuity. I'll show you how to attack it directly with a basic knowledge of trigonometry. Here's a picture: ooooo ooooo ooo ooo oo oo oo oo o o o o o o o o o o o +---------------------+ o |\ o o | \ o o | \ o o x| \ o o | \ r o oo |22.5\ oo oo | deg \ oo | ooo | \ ooo v ooooo +-------+ooo --------- Consider the 22.5 degree arc at the bottom. The vertical rise indicated will be r-x; and x=rcos(22.5). When you put two opposite curves together, the total vertical rise will be 2(r-x). So the distance between the two parallel tracks will be d = 2(r-x) = 2r(1 - cos(22.5)) = 0.1522r = 0.1522 * 3 ft = 0.4567 ft = 5.48 in If you have any further questions, feel free to write back. - Doctor Peterson, The Math Forum Date: 08/18/2003 at 23:15:54 From: Frank Anderson Subject: Arc of a circle I appreciate your fast reply. I lack the basic knowledge of trigonometry this requires, but I am determined to understand this material, and would like to dissect your reply for a better It appears you are using a right triangle as a basis for your calculations. Is this accurate? If so, 'r' looks like the radius of the circle, as well as the hypotenuse of the right triangle. In the right triangle, how are you identifying the two non-90-degree angles of the right triangle, as well as the length of the undefined side and side x? Also, in the text of your response, you supplied r-x as the formula for the vertical rise of the first curve track section, with x being equal to rcos(22.5). In r-x, does 'r' represent the radius of the circle? Also, in layman's terms, does 'rcos(22.5)' represent 'r' multiplied by the result of 'cos' multiplied by 22.5? I greatly appreciate your assistance, Date: 08/18/2003 at 23:36:28 From: Doctor Peterson Subject: Re: Arc of a circle Hi, Frank. I'll repeat the picture with more labels: ooooo ooooo ooo ooo oo oo oo oo o o o o o o o o o o o O---------------------+ o |\ o o | \ o o | \ o o x| \ o o | \ r o oo |22.5\ oo oo | deg \ oo | ooo | \ ooo v ooooo C-------Booo --------- You are interpreting things right. In the big right triangle COB in the picture, the hypotenuse OB is r, the radius of the circle; and I have called the vertical leg OC x. The angle AOB at the top of the right triangle is your 22.5 degree angle, and the arc AB at the bottom is your track section, which represents 1/16 of a circle. The key to trigonometry is the definition of certain "trigonometric functions", one of which is the cosine. When we write we mean the cosine of the angle AOB, which is defined as the ratio of the "adjacent side" OC to the hypotenuse OB. So cos(AOB) = x/r Solving for x, we multiply both sides by r: r cos(AOB) = x That's the formula I gave: to find x, you multiply r times the cosine of 22.5 degrees. And the value of that cosine (which you can find using a scientific calculator, such as the one supplied with Windows) is 0.9239, which you can then use to find the value of r-x. The basics of trig are really not too hard to understand, but it's certainly mystifying if you've never seen it. We have a brief introduction here: Trigonometry in a Nutshell - Doctor Peterson, The Math Forum Date: 08/21/2003 at 12:16:08 From: Frank Anderson Subject: Arc of a circle Your new explanation and labels were enough to get me going, and I have been able to identify the 'Vertical rise' of all of the diameter track circles I need. Thank you very much for breaking this down into an understandable format. Now that I can calculate the distances between parallel lines offset by arcs, I would like to expand my capability to include straight lines in the formula. In plain words, from a straight line, I will use a 22.5 degree right hand turn, followed by a 6", 10", or 14" piece of straight track, followed by the offsetting 22.5 degree turn back to the left. I assume you are going to introduce another right triangle to calculate the new distance, with the 90-degree angle at the bottom right, the 22.5 degree angle at the bottom left, the 67.5 degree angle at the top, and the straight track being the hypogenous. This will leave the bottom side of the right triangle as my new extension to the distance between the track. Am I on the right track? Again, I appreciate your assistance with my elementary problem. I am sure you had more interesting issues, unless you are also into trains. Date: 08/21/2003 at 12:31:46 From: Doctor Peterson Subject: Re: Arc of a circle Hi, Frank. Yes, you are on the right track, and I'll pardon the pun. You start with a 22.5 degree curve, and then continue in that direction for a distance d before turning again: ... |a | \ | \ d h| \ | \ | 22.5\ b| ... We've calculated distances a and b (which are equal); now we use even simpler trig to find h: h/d = sin(22.5) h = d * sin(22.5) So id d=10 inches, h is h = 10 * sin(22.5) = 10 * 0.38268 = 3.8268 in Then the total offset is the sum of a, h, and b. - Doctor Peterson, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/63885.html","timestamp":"2014-04-20T01:34:05Z","content_type":null,"content_length":"13466","record_id":"<urn:uuid:822c9771-9677-4052-a2f2-644515b3860f>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
Some Thoughts on the Common Core State Standards I’ve been waist deep in K-5 Common Core State Standards for a year now. Building a parent involvement site to be aligned with CCSS will do that to you. So I’m pretty well qualified to share some thoughts on the Common Core. What are the Common Core State Standards? The CCSS in math are a set of standards that outline what prepared students know, do or understand in math. In 2009 by the National Governors Association hired Student Achievement Partners to write the CCSS. It appears they’re research based and the writers “collaborated with teachers, researchers, and leading experts” to create the standards. (ref) What do I think about the CCSS? As a whole, the Common Core State Standards promote flexible, novel and critical thinking. But the way they’re presented lead educators to want to play “check the boxes” with them. The language and sentence structure used to express the standards tend to be confusing and verbose – just the opposite of the (assumed) intention of the writers. When reading through the CCSS, I find myself looking terms up, thinking about possible meanings and struggling through what they really mean. Why are the CCSS good? The Standards for Mathematical Practices are pretty awesome. (But those are just the first and smallest part of the math CCSS.) They say that prepared students think, imagine, create and reason. They say students should be flexible in their thinking, try various ways to solve problems and question the right answers. I couldn’t have written a better treatise on what math students should look like. What’s wrong with the Common Core State Standards? The rest of the standards look like a checkbox outline. If you understand the CCSS to be guidelines, this isn’t a problem. But most people don’t. And the wording of the content standards can be a little… out of hand. For example: Find whole-number quotients and remainders with up to four-digit dividends and one-digit divisors, using strategies based on place value, the properties of operations, and/or the relationship between multiplication and division. Illustrate and explain the calculation by using equations, rectangular arrays, and/or area models. CCSS 4.NBT.B.6 That’s one standard. One. But again, if you go with the “guidelines” and not checkboxes viewpoint, this might not be bad. But some of these standards gets really verbose and confusing. What do you think? I love the idea of thinking instead of memorizing. But I can see how the list of standards can be overwhelming. What do you think? Do you love the Common Core? Hate it? Are you frustrated, excited or ambivalent? And how does it impact your teaching? Share your thoughts in the comments and tweet it out! This post may contain affiliate links. When you use them, you support us so we can continue to provide free content! 4 Responses to Some Thoughts on the Common Core State Standards 1. I am old. 64 years old. I have seen an awful lot of education or the lack of it, in my time. my own, my son’s, my daughter-in-law’s, my grand daughter who is homeschooled… one thing has remained the same through out and that is people talking over other people’s heads. if what is truly wanted here in this “common core change for the better”, is a real education for ALL children and to me after all these years that means teaching children to learn how to learn, then might it not make sense to put any new rules into language that is simple enough for all of us to understand? now I love a big high fallutin’ word as well as the next person. they can be fun and once in a while even useful. but when we are talking about educating OUR children.. ALL of OUR children, then it is about time the educators in this country came down off their high horses and addressed these issues and printed the rules that are supposed to be for the good of ALL OUR children in a language that the general public has a better chance of understanding. after all the parents of today’s students were students themselves not long ago. they should all have had enough education to read and write at a basic level. being educators, the people who are producing the common core rules ought to know what that basic level is and speak to it. that way pretty much anyone can see for themselves what their child should be learning. they should not have to spend hour upon hour referring to a dictionary. then if they are really lucky and finally guess correctly they might get the right meaning correct . right now the “educatoreze” that is being used is not appropriate. if I have still not made myself clear, it means that people who are working with the general public and their children for the good of all really should know better than to talk/write over the heads of the very people these “common core rules” are supposed to be helping. it is a darned shame that this is supposed to be a major change in how education works. yet it is still couched in the mystique of a language most people (parents) who need to know how the changes concern them cannot understand without a translator….just a thought I have had running around my head for the last 15 years….nothing new here. same old same old… □ Thanks for your thoughts, Nancy. You make a great point! They don’t seem to be connecting with the audience for whom they are writing – and it’s because of the complexity of their writing. 2. My wife and I homeschool our two boys ( Grade 2 & 4). I generally teach the math and science portions of their curriculum. I’ve been using various math curriculum products over the years. Recently, I stumbled across TenMarks. Since “word problems” are generally where my boys stuble, I thought TenMarks would be a good way to challenge them. We started with TenMarks a few days ago. Overall I like it quite a bit. However, it conforms to the Common Core Math standards. So … It uses lots of big words. In the first assignment I gave my second grader he wathced a helper video for the assignment. It babbled on and on about minuends, addends, etc. I’ve taught my son all of those terms. However, the video threw so many out it was like operand salad. My son turned to me and said, “this video is useless to me.” So it was. Funnily enough, after a barrage of mathematical terms, the questions say, “Write an addition sentence…”. Addition sentence? Really? After throwing all those terms in a 7 year old’s face they can’t use the term “equation”? I will stick with TenMarks for now. I think it is very useful as a teaching aid. The CCSS has some good aspects to it. However, as with any standard created by a government body or for a government, it is too verbose and speaks over the heads its intended audience. □ Thanks for your thoughts, Justin. That is kinda weird. I had to look up summand, addend and minuend too! Nobody I’ve ever known uses those. Quotient, product and sum, yes. But the bits to get you there? Nope. I’m glad you’re able to use it as a teaching tool in some way, though. Very clever of you! Leave a reply
{"url":"http://mathfour.com/commentary/some-thoughts-on-the-common-core-state-standards","timestamp":"2014-04-16T04:11:59Z","content_type":null,"content_length":"35429","record_id":"<urn:uuid:a8877b5e-6666-4b57-84c9-a0315f84c556>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
A categorical model for the geometry of interaction. Theoretical Computer Science - Order "... This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial inject ..." Cited by 9 (9 self) Add to MetaCart This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial injections, Hilbert spaces (also modulo phase), and Boolean algebras, and (2) have interesting categorical/logical/order-theoretic properties, in terms of kernel fibrations, such as existence of pullbacks, factorisation, orthomodularity, atomicity and completeness. For instance, the Sasaki hook and and-then connectives are obtained, as adjoints, via the existential-pullback adjunction between fibres. 1 , 2005 "... We introduce a typed version of Girard’s Geometry of Interaction, called Multiobject GoI (MGoI) semantics. We give an MGoI interpretation for multiplicative linear logic (MLL) without units which applies to new kinds of models, including finite dimensional vector spaces. For MGoI (i) we develop a v ..." Cited by 4 (1 self) Add to MetaCart We introduce a typed version of Girard’s Geometry of Interaction, called Multiobject GoI (MGoI) semantics. We give an MGoI interpretation for multiplicative linear logic (MLL) without units which applies to new kinds of models, including finite dimensional vector spaces. For MGoI (i) we develop a version of partial traces and trace ideals (related to previous work of Abramsky, Blute, and Panangaden); (ii) we do not require the existence of a reflexive object for our interpretation (the original GoI 1 and 2 were untyped and hence involved a bureaucracy of domain equation isomorphisms); (iii) we introduce an abstract notion of orthogonality (related to work of Hyland and Schalk) and use this to develop a version of Girard’s theory of types, datum and algorithms in our setting, (iv) we prove appropriate Soundness and Completeness Theorems for our interpretations in partially traced categories with orthogonality; (v) we end with an application to completeness of (the original) untyped GoI in a unique decomposition category. "... Replace this file with prentcsmacro.sty for your meeting, or with entcsmacro.sty for your meeting. Both can be ..." "... This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial inject ..." Add to MetaCart This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial injections, Hilbert spaces (also modulo phase), and Boolean algebras, and (2) have interesting categorical/logical properties, in terms of kernel fibrations, such as existence of pullbacks, factorisation, and orthomodularity. For instance, the Sasaki hook and and-then connectives are obtained, as adjoints, via the existential-pullback adjunction between fibres. 1 , 902 "... This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial inject ..." Add to MetaCart This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial injections, Hilbert spaces (also modulo phase), and Boolean algebras, and (2) have interesting categorical/logical properties, in terms of kernel fibrations, such as existence of pullbacks, factorisation, and orthomodularity. For instance, the Sasaki hook and and-then connectives are obtained, as adjoints, via the existential-pullback adjunction between fibres. 1 "... This paper deals with questions relating to Haghverdi and Scott’s notion of partially traced categories. The main result is a representationtheorem for such categories: we provethat everypartiallytraced categorycan be faithfully embedded in a totally traced category. Also conversely, every symmetric ..." Add to MetaCart This paper deals with questions relating to Haghverdi and Scott’s notion of partially traced categories. The main result is a representationtheorem for such categories: we provethat everypartiallytraced categorycan be faithfully embedded in a totally traced category. Also conversely, every symmetric monoidal subcategory ofatotallytracedcategoryispartiallytraced, sothis characterizesthe partiallytracedcategoriescompletely. The main technique we use is based on Freyd’s paracategories, along with a partial version of Joyal, Street, and Verity’s Int-construction.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=4635397","timestamp":"2014-04-20T22:22:09Z","content_type":null,"content_length":"23923","record_id":"<urn:uuid:c4a74a39-3903-4494-a3fc-7ed2e75b0f89>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
Regularized hypergeometric function 2F1: Series representations (subsection 06/01) Generalized power series Expansions at generic point z==z[0] For the function itself Expansions on branch cuts For the function itself Expansions at z==0 For the function itself General case Special cases Generic formulas for main term Expansions at z==1 For the function itself General case Logarithmic cases Generic formulas for main term Expansions at z==infinity For the function itself The general formulas Case of simple poles Case of double poles Case of canceled double poles Generic formulas for main term Expansions at z==infinity for polynomial cases
{"url":"http://functions.wolfram.com/HypergeometricFunctions/Hypergeometric2F1Regularized/06/01/ShowAll.html","timestamp":"2014-04-18T23:33:44Z","content_type":null,"content_length":"80218","record_id":"<urn:uuid:e2d42224-eada-4462-a0c9-272ec9b8095f>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
Next: Conclusion Up: No Title Previous: Fuzzy Composition Once the rules have been composed the solution, as has been seen, is a fuzzy set. However, for most applications (in particular in control) there is a need for a single action or `crisp' solution to emanate from the inferencing process. This will involve the `defuzzification' of the solution set. There are various techniques available. Lee [20] describes the three main approaches as the max criterion, mean of maximum and the centre of area. The max criterion method finds the point at which the membership function is a maximum. The mean of maximum takes the mean of those points where the membership function is at a maximum. The most common method is the centre of area method which finds the centre of gravity of the solution fuzzy sets. For a discrete fuzzy set this is where 20] states that ``Unfortunately, there is no systematic procedure for choosing a defuzzification strategy.''. Although the process of reducing the final fuzzy set to a crisp value does seem appropriate for control problems much information is lost by doing this and further work needs to be done on how to use the information available in the solution fuzzy set. Bob John Fri Oct 25 14:41:29 BST 1996
{"url":"http://www.cse.dmu.ac.uk/~rij/newrep/node16.html","timestamp":"2014-04-17T09:33:47Z","content_type":null,"content_length":"3220","record_id":"<urn:uuid:528337d1-a33c-4280-8600-8ee02c60bb0a>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
A System of Logic, Ratiocinative and Inductive A System of Logic, Ratiocinative and Inductive: Being a Connected View of the Principles of Evidence and the Methods of Scientific Investigation, Volume 1 (Google eBook) John Stuart Mill We haven't found any reviews in the usual places. II 17 III 19 IV 27 V 59 VI 103 VII 116 VIII 145 IX 159 XVII 328 XVIII 343 XIX 345 XX 352 XXI 370 XXII 381 XXIII 392 XXIV 425 X 182 XI 213 XII 215 XIII 226 XIV 244 XV 275 XVI 296 XXV 437 XXVI 450 XXVII 480 XXVIII 506 XXIX 534 XXX 548 XXXI 562 Popular passages The cause, then, philosophically speaking, is the sum total of the conditions, positive and negative, taken together; the whole of the contingencies of every description, which being realized, the consequent invariably follows. If an instance in which the phenomenon under investigation occurs, and an instance in which it does not occur, have every circumstance in common save one, that one occurring only in the former; the circumstance in which alone the two instances differ is the effect, or the cause, or an indispensable part of the cause, of the phenomenon. Why is a single instance, in some cases, sufficient for a complete induction ; while in others, myriads of concurring instances, without a single exception known or presumed, go such a very little way towards establishing a universal proposition ? Whoever can answer this question, knows more of the philosophy of logic than the wisest of the ancients, and has solved the problem of induction. Whatever be the most proper mode of expressing it, the proposition that the course of nature is uniform is the fundamental principle, or general axiom, of Induction. I conceive, be found, if we advert to one of the characteristic properties of geometrical forms — their capacity of being painted in the imagination with a distinctness equal to reality : in other words, the exact resemblance of our ideas of form to the sensations which suggest them. All men are mortal, Socrates is a man, therefore Socrates is mortal; it is unanswerably urged by tlie adversaries of the syllogistic theory, that the proposition, Socrates is mortal, is presupposed in the more general assumption, All men are mortal... Subduct from any phenomenon such part as is known by previous inductions to be the effect of certain antecedents, and the residue of the phenomenon is the effect of the remaining antecedents. The only notion of a cause, which the theory of induction requires, is such a notion as can be gained from experience. The Law of Causation, the recognition of which is the main pillar of inductive science, is but the familiar truth, that invariability of succession is found by observation to obtain between every fact in nature and some other fact which has preceded it... ... first law of motion, viz., that all bodies in motion continue to move in a straight line with uniform velocity until acted upon by some new force. This assertion is in open opposition to first appearances; all terrestrial objects, when in motion, gradually abate their velocity and at last stop, which, accordingly, the ancients, with their inductio per enumerationem simplicem, imagined to be the law. A name is a word taken at pleasure to serve for a mark which may raise in our mind a thought like to some thought we had before, and which being pronounced to others may be to them a sign of what thought the speaker had before in his mind. References from web pages JSTOR: A System of Logic Ratiocinative and Inductive. Collected ... A System of Logic Ratiocinative and Inductive. Collected Works of John Stuart Mill. Edited by jm ROBSON. Routledge and Kegan Paul for University of Toronto ... links.jstor.org/ sici?sici=0013-0427(197611)2%3A43%3A172%3C446%3AASOLRA%3E2.0.CO%3B2-B A System of Logic: Ratiocinative and Inductive; Being a Connected ... Read A System of Logic: Ratiocinative and Inductive; Being a Connected View of the Principles of Evidence and the Methods of Scientific Investigation Vol. www.questia.com/ PM.qst?a=o& d=5774540 Online Library of Liberty - CHAPTER VI: Fallacies of a ... CHAPTER VI: Fallacies of a Ratiocination - The Collected Works of John Stuart Mill, Volume VIII - A System of Logic Ratiocinative and Inductive Part II ... oll.libertyfund.org/ ?option=com_staticxt& staticfile=show.php%3Ftitle=247& chapter=40039& layout=html& Itemid=27 A System of Logic - Wikipedia, the free encyclopedia A System of Logic, Ratiocinative and Inductive is an 1843 book by English philosopher John Stuart Mill. In this work, he formulated the five principles of ... en.wikipedia.org/ wiki/ A_System_of_Logic Causality - Mill In his monumental A System of Logic Ratiocinative and Inductive (1843), John Stuart Mill (1806–1873) defended the Regularity View of Causality, ... science.jrank.org/ pages/ 8541/ Causality-Mill.html A Science of Human Nature by John stuartmill Explain your answer. Notes. [1]. John Stuart Mill. A System of Logic: Ratiocinative and Inductive. New York: Longmans, Green, and Co., 1893, Bk. VI, Ch. IV. ... philosophy.lander.edu/ intro/ introbook2.1/ c7737.html Ethology: Definition with Ethology Pictures and Photos A System of Logic, Ratiocinative and Inductive: Being a Connected View of by John Stuart Mill (1906) "CHAPTER V. OK ethology, OR THE SCIENCE OF THE ... www.lexic.us/ definition-of/ ethology ABC of Referencing - ABC of Citation Mill, js 1843 A System of Logic Ratiocinative and Inductive, Longmans Green, London. Mill, js 1869 The Subjection of Women, Dent/Everyman edition 1985, ... www.mdx.ac.uk/ WWW/ STUDY/ Refer.htm politivi's Shelf of logic Books - Shelfari {"blisttype":0,"books":[{"editionid":2275961,"bookid":1869077,"rating":0,"title":"A system of logic, ratiocinative and inductive;: Being... www.shelfari.com/ politivi/ tags/ logic 19th Century Logic Between Philosophy And Mathematics Although Mill called his logic A System of Logic Ratiocinative and Inductive, the deductive parts played only a minor rôle, used only to show that all ... meta-religion.com/ Mathematics/ Philosophy_of_mathematics/ 19_century_logic.htm Bibliographic information
{"url":"http://books.google.com/books?id=y4MEAAAAQAAJ","timestamp":"2014-04-20T03:30:07Z","content_type":null,"content_length":"165060","record_id":"<urn:uuid:7c65493e-5020-4257-b64b-d7d3087ddc47>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: "definability theory" Stephen G Simpson simpson at math.psu.edu Sun Aug 30 19:17:07 EDT 1998 Wayne Richter writes: > As Joe has mentioned, some form of definability is at the heart of > recursion theory, and quite similar notions of definability seem to > be at the heart of complexity theory as well. This reminds me of an episode in the 1970's. I'll relate it here for what it's worth. What happened was that a group of people including Jon Barwise and Yiannis Moschovakis proposed "definability theory" as an umbrella term for a collection of topics which included inductive definitions and at least parts of generalized recursion theory. For several years Barwise's plan as editor of the Handbook of Mathematical Logic was for "definability theory" to be one of the principal divisions of that book. In the end, this plan didn't fly. "Definability theory" was abandoned, and the Handbook stayed with the traditional 4-fold scheme whereby mathematical logic is partitioned into model theory, proof theory, recursion theory, and set theory. I never learned exactly what went wrong, but I suspect at least part of the problem was that people couldn't agree on the boundaries of "definability theory". Does it include descriptive set theory? degrees of unsolvability? r.e. sets? complexity theory? Beth's definability theorem? To be sure, definability is a theme in many parts of mathematical logic, and there are many analogies. But maybe it's not so easy to turn these analogies into a subject. -- Steve More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/1998-August/002030.html","timestamp":"2014-04-16T19:27:26Z","content_type":null,"content_length":"3843","record_id":"<urn:uuid:f42bb39f-cbfa-43cc-81c1-a7b378f55e38>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
The Natural Numbers Next: Natural Numbers: Basic Definitions Up: The LEGO library Previous: Inductive Sigma The directory lib_nat is the biggest section of the library so far. It contains a number of files, each concerning a particular function or set of functions, so users can load just the theorems relating to the functions they want. Some files contain only a few theorems in their own right but load many theorems from other files on which they depend. An example of this is the file lib_nat_rels which loads all the files about relations on the natural numbers and also most of the files with theorems about the algebraic structure of nat. Fri May 24 19:01:27 BST 1996
{"url":"http://www.dcs.ed.ac.uk/home/lego/html/release-1.2/library/node24.html","timestamp":"2014-04-16T18:56:48Z","content_type":null,"content_length":"3960","record_id":"<urn:uuid:4422afe7-b45d-469f-b1f7-68c6124e4af2>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
Source-Level Proof Reconstruction for Interactive Theorem Proving Results 1 - 10 of 15 "... Sledgehammer, a component of the interactive theorem prover Isabelle, finds proofs in higher-order logic by calling the automated provers for first-order logic E, SPASS and Vampire. This paper is the largest and most detailed empirical evaluation of such a link to date. Our test data consists of 12 ..." Cited by 23 (3 self) Add to MetaCart Sledgehammer, a component of the interactive theorem prover Isabelle, finds proofs in higher-order logic by calling the automated provers for first-order logic E, SPASS and Vampire. This paper is the largest and most detailed empirical evaluation of such a link to date. Our test data consists of 1240 proof goals arising in 7 diverse Isabelle theories, thus representing typical Isabelle proof obligations. We measure the effectiveness of Sledgehammer and many other parameters such as run time and complexity of proofs. A facility for minimizing the number of facts needed to prove a goal is presented and analyzed. "... Abstract. Sledgehammer is a component of Isabelle/HOL that employs firstorder automatic theorem provers (ATPs) to discharge goals arising in interactive proofs. It heuristically selects relevant facts and, if an ATP is successful, produces a snippet that replays the proof in Isabelle. We extended Sl ..." Cited by 15 (6 self) Add to MetaCart Abstract. Sledgehammer is a component of Isabelle/HOL that employs firstorder automatic theorem provers (ATPs) to discharge goals arising in interactive proofs. It heuristically selects relevant facts and, if an ATP is successful, produces a snippet that replays the proof in Isabelle. We extended Sledgehammer to invoke satisfiability modulo theories (SMT) solvers as well, exploiting its relevance filter and parallel architecture. Isabelle users are now pleasantly surprised by SMT proofs for problems beyond the ATPs ’ reach. Remarkably, the best SMT solver performs better than the best ATP on most of our benchmarks. 1 "... Abstract. Most automatic theorem provers are restricted to untyped or monomorphic logics, and existing translations from polymorphic logics are bulky or unsound. Recent research shows how to exploit monotonicity to encode ground types efficiently: monotonic types can be safely erased, while nonmonot ..." Cited by 11 (8 self) Add to MetaCart Abstract. Most automatic theorem provers are restricted to untyped or monomorphic logics, and existing translations from polymorphic logics are bulky or unsound. Recent research shows how to exploit monotonicity to encode ground types efficiently: monotonic types can be safely erased, while nonmonotonic types must generally be encoded. We extend this work to rank-1 polymorphism and show how to eliminate even more clutter. We also present alternative schemes that lighten the translation of polymorphic symbols, based on the novel notion of “cover”. The new encodings are implemented, and partly proved correct, in Isabelle/HOL. Our evaluation finds them vastly superior to previous schemes. 1 "... Abstract. We present a new methodology for exchanging unsatisfiability proofs between an untrusted SMT solver and a sceptical proof assistant with computation capabilities like Coq. We advocate modular SMT proofs that separate boolean reasoning and theory reasoning; and structure the communication b ..." Cited by 8 (2 self) Add to MetaCart Abstract. We present a new methodology for exchanging unsatisfiability proofs between an untrusted SMT solver and a sceptical proof assistant with computation capabilities like Coq. We advocate modular SMT proofs that separate boolean reasoning and theory reasoning; and structure the communication between theories using Nelson-Oppen combination scheme. We present the design and implementation of a Coq reflexive verifier that is modular and allows for fine-tuned theory-specific verifiers. The current verifier is able to verify proofs for quantifier-free formulae mixing linear arithmetic and uninterpreted functions. Our proof generation scheme benefits from the efficiency of state-of-the-art SMT solvers while being independent from a specific SMT solver proof format. Our only requirement for the SMT solver is the ability to extract unsat cores and generate boolean models. In practice, unsat cores are relatively small and their proof is obtained with a modest overhead by our proof-producing prover. We present experiments assessing the feasibility of the approach for benchmarks obtained from the SMT competition. 1 - J AUTOM REASONING "... Extended Static Checking (ESC) is a fully automated formal verification technique. Verification in ESC is achieved by translating programs and their specifications into verification conditions (VCs). Proof of a VC establishes the correctness of the program. The implementations of many seemingly simp ..." Cited by 3 (0 self) Add to MetaCart Extended Static Checking (ESC) is a fully automated formal verification technique. Verification in ESC is achieved by translating programs and their specifications into verification conditions (VCs). Proof of a VC establishes the correctness of the program. The implementations of many seemingly simple algorithms are beyond the ability of traditional Extended Static Checking (ESC) tools to verify. Not being able to verify toy examples is often enough to turn users off of the idea of using formal methods. ESC4, the ESC component of the JML4 project, is able to verify many more kinds of methods in part because of its use of novel techniques which apply multiple theorem provers. In particular, we present Offline User-Assisted ESC (OUA-ESC), a new form of verification that lies between ESC and Full Static Program Verification (FSPV). ESC is generally quite efficient, as far as verification tools go, but it is still orders of magnitude slower than simple compilation. As can be imagined, proving VCs is computationally expensive: While small classes can be verified in seconds, verifying larger programs of 50 KLOC can take hours. To help address the added cost of using multiple provers and this lack of scalability, we present the multi-threaded version of ESC4 and its distributed prover back-end. , 2011 "... Isabelle/HOL is a popular interactive theorem prover based on higherorder logic. It owes its success to its ease of use and powerful automation. Much of the automation is performed by external tools: The metaprover Sledgehammer relies on resolution provers and SMT solvers for its proof search, the c ..." Cited by 3 (1 self) Add to MetaCart Isabelle/HOL is a popular interactive theorem prover based on higherorder logic. It owes its success to its ease of use and powerful automation. Much of the automation is performed by external tools: The metaprover Sledgehammer relies on resolution provers and SMT solvers for its proof search, the counterexample generator Quickcheck uses the ML compiler as a fast evaluator for ground formulas, and its rival Nitpick is based on the model finder Kodkod, which performs a reduction to SAT. Together with the Isar structured proof format and a new asynchronous user interface, these tools have radically transformed the Isabelle user experience. This paper provides an overview of the main automatic proof and disproof tools. "... This paper presents an algorithm that redirects proofs by contradiction. The input is a refutation graph, as produced by an automatic theorem prover (e.g., E, SPASS, Vampire, Z3); the output is a direct proof expressed in natural deduction extended with case analyses and nested subproofs. The algori ..." Cited by 2 (2 self) Add to MetaCart This paper presents an algorithm that redirects proofs by contradiction. The input is a refutation graph, as produced by an automatic theorem prover (e.g., E, SPASS, Vampire, Z3); the output is a direct proof expressed in natural deduction extended with case analyses and nested subproofs. The algorithm is implemented in Isabelle’s Sledgehammer, where it enhances the legibility of machine-generated proofs. 1 "... Abstract. Sledgehammer for Isabelle/HOL integrates automatic theorem provers to discharge interactive proof obligations. This paper considers a tighter integration of the superposition prover SPASS to increase Sledgehammer’s success rate. The main enhancements are native support for hard sorts (simp ..." Cited by 2 (2 self) Add to MetaCart Abstract. Sledgehammer for Isabelle/HOL integrates automatic theorem provers to discharge interactive proof obligations. This paper considers a tighter integration of the superposition prover SPASS to increase Sledgehammer’s success rate. The main enhancements are native support for hard sorts (simple types) in SPASS, simplification that honors the orientation of Isabelle simp rules, and a pair of clause-selection strategies targeted at large lemma libraries. The usefulness of this integration is confirmed by an evaluation on a vast benchmark suite and by a case study featuring a formalization of language-based security. 1 "... interpretation techniques and other over-approximations of the set of reachable states or traces. The protocol models that these tools employ are shaped by the needs of automated verification and require subtle assumptions. Also, a complex verification tool may suffer from implementation bugs so tha ..." Cited by 1 (0 self) Add to MetaCart interpretation techniques and other over-approximations of the set of reachable states or traces. The protocol models that these tools employ are shaped by the needs of automated verification and require subtle assumptions. Also, a complex verification tool may suffer from implementation bugs so that in the worst case the tool could accept some incorrect protocols as being correct. These risks of errors are also present, but considerably smaller, when using an LCF-style theorem prover like Isabelle. The interactive security proof, however, requires a lot of expertise and time. We combine the advantages of both worlds by using the representation of the over-approximated search space computed by the automated tools as a “proof idea ” in Isabelle. Thus, we devise proof tactics for Isabelle that generate the correctness proof of the protocol from the output of the automated tools. In the worst case, these tactics fail to construct a proof, namely when the representation of the search space is for some reason incorrect. However, when they succeed, the correctness only relies on the basic model and the Isabelle core. 1 "... Sledgehammer integrates external automatic theorem provers (ATPs) in the Isabelle/HOL proof assistant. To guard against bugs, ATP proofs must be reconstructed in Isabelle. Reconstructing complex proofs involves translating them to detailed Isabelle proof texts, using suitable proof methods to justif ..." Cited by 1 (1 self) Add to MetaCart Sledgehammer integrates external automatic theorem provers (ATPs) in the Isabelle/HOL proof assistant. To guard against bugs, ATP proofs must be reconstructed in Isabelle. Reconstructing complex proofs involves translating them to detailed Isabelle proof texts, using suitable proof methods to justify the inferences. This has been attempted before with little success, but we have addressed the main issues: Sledgehammer now transforms the proofs by contradiction into direct proofs (as described in a companion paper [3]); it reconstructs skolemization inferences correctly; it provides the right amount of type annotations to ensure formulas are parsed correctly without marring them with types; and it iteratively tests and compresses the output, resulting in simpler and faster working proofs.
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.145.8845","timestamp":"2014-04-18T20:15:40Z","content_type":null,"content_length":"37491","record_id":"<urn:uuid:0705d6c3-9814-4dd5-931c-36171bc1a186>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: general statistical reasoning question in biomedical statist [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: general statistical reasoning question in biomedical statistics(no Stata content) From cryan@binghamton.edu To statalist@hsphsun2.harvard.edu Subject Re: st: general statistical reasoning question in biomedical statistics(no Stata content) Date Thu, 11 Dec 2003 14:47:25 -0500 (EST) A very good point. Thank you for reminding me. > Let me be picky with just one point you make because I see it > repeated too often, and that is the issue that, "And for every 20 > baseline variables compared, you'd *expect* about 1 of those baseline > variables to have a P of < 0.05" > This is a misquote of a mathematical tautology that says that 5% of > all tests (1 in 20) will fall into the 5% region. The proper quote > is that this refers to *independent* tests. This oversight is > especially important here because if ever one should question the > independence assumption it is in this situation. When we are looking > at a number of baseline characteristrics on the patients, it is > probably more than likely that there is some dependence amongst them. > For example, if the two arms are not balanced with respect to height > with one arm getting the shorter patients, then more than likely > that arm will have the lighter patients too. > So when we do a number of related tests, we may get more or less > than 5% significant due to chance. It all depends on the > structure of the dependency, a point that should be made to students. > m.p. * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2003-12/msg00365.html","timestamp":"2014-04-18T10:56:09Z","content_type":null,"content_length":"6997","record_id":"<urn:uuid:aca1a00e-c044-4f8e-9829-cd7dc48b74a7>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
An entropy-based learning algorithm of Bayesian conditional trees Results 1 - 10 of 25 , 1997 "... Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restr ..." Cited by 587 (22 self) Add to MetaCart Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness that characterize naive Bayes. We experimentally tested these approaches, using problems from the University of California at Irvine repository, and compared them to C4.5, naive Bayes, and wrapper methods for feature selection. , 1996 "... This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks. Connections are drawn between the statistical, neural network, and uncertainty communities, and between the ..." Cited by 172 (0 self) Add to MetaCart This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks. Connections are drawn between the statistical, neural network, and uncertainty communities, and between the different methodological communities, such as Bayesian, description length, and classical statistics. Basic concepts for learning and Bayesian networks are introduced and methods are then reviewed. Methods are discussed for learning parameters of a probabilistic network, for learning the structure, and for learning hidden variables. The presentation avoids formal definitions and theorems, as these are plentiful in the literature, and instead illustrates key concepts with simplified examples. Keywords--- Bayesian networks, graphical models, hidden variables, learning, learning structure, probabilistic networks, knowledge discovery. I. Introduction Probabilistic networks or probabilistic gra... - Journal of Machine Learning Research , 2000 "... This paper describes the mixtures-of-trees model, a probabilistic model for discrete multidimensional domains. Mixtures-of-trees generalize the probabilistic trees of Chow and Liu [6] in a different and complementary direction to that of Bayesian networks. We present efficient algorithms for learnin ..." Cited by 109 (2 self) Add to MetaCart This paper describes the mixtures-of-trees model, a probabilistic model for discrete multidimensional domains. Mixtures-of-trees generalize the probabilistic trees of Chow and Liu [6] in a different and complementary direction to that of Bayesian networks. We present efficient algorithms for learning mixtures-of-trees models in maximum likelihood and Bayesian frameworks. We also discuss additional efficiencies that can be obtained when data are “sparse, ” and we present data structures and algorithms that exploit such sparseness. Experimental results demonstrate the performance of the model for both density estimation and classification. We also discuss the sense in which tree-based classifiers perform an implicit form of feature selection, and demonstrate a resulting insensitivity to irrelevant attributes. - In KDD-96: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining , 1996 "... We present a framework for characterizing Bayesian classification methods. This framework can be thought of as a spectrum of allowable dependence in a given probabilistic model with the Naive Bayes algorithm at the most restrictive end and the learning of full Bayesian networks at the most general e ..." Cited by 108 (5 self) Add to MetaCart We present a framework for characterizing Bayesian classification methods. This framework can be thought of as a spectrum of allowable dependence in a given probabilistic model with the Naive Bayes algorithm at the most restrictive end and the learning of full Bayesian networks at the most general extreme. While much work has been carried out along the two ends of this spectrum, there has been surprising little done along the middle. We analyze the assumptions made as one moves along this spectrum and show the tradeoffs between model accuracy and learning speed which become critical to consider in a variety of data mining domains. We then present a general induction algorithm that allows for traversal of this spectrum depending on the available computational power for carrying out induction and show its application in a number of domains with different properties. Introduction Recently, work in Bayesian methods for classification has grown enormously (Cooper & Herskovits 1992) - In Proceedings of the thirteenth national conference on artificial intelligence , 1996 "... Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state of the art classifiers such as C4.5. This fact raises the question of whether a classifier with less restr ..." Cited by 78 (2 self) Add to MetaCart Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state of the art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we examine and evaluate approaches for inducing classifiers from data, based on recent results in the theory of learning Bayesian networks. Bayesian networks are factored representations of probability distributions that generalize the naive Bayes classifier and explicitly represent statements about independence. Among these approacheswe single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness which are characteristic of naive Bayes. We experimentally tested these approaches using benchmark problems from... , 1999 "... Estimation of Distribution Algorithms (EDA) constitute an example of stochastics heuristics based on populations of individuals every of which encode the possible solutions to the optimization problem. These populations of individuals evolve in succesive generations as the search progresses -- organ ..." Cited by 43 (6 self) Add to MetaCart Estimation of Distribution Algorithms (EDA) constitute an example of stochastics heuristics based on populations of individuals every of which encode the possible solutions to the optimization problem. These populations of individuals evolve in succesive generations as the search progresses -- organized in the same way as most evolutionary computation heuristics. In opposition to most evolutionary computation paradigms which consider the crossing and mutation operators as essential tools to generate new populations, EDA replaces those operators by the estimation and simulation of the joint probability distribution of the selected individuals. In this work, after making a review of the different approaches based on EDA for problems of combinatorial optimization as well as for problems of optimization in continuous domains, we propose new approaches based on the theory of probabilistic graphical models to solve problems in both domains. More precisely, we propose to adapt - Machine Learning , 2000 "... The naive Bayesian classifier provides a simple and e#ective approach to classifier learning, but its attribute independence assumption is often violated in the real world. A number of approaches have sought to alleviate this problem. A Bayesian tree learning algorithm builds a decision tree, and ge ..." Cited by 39 (8 self) Add to MetaCart The naive Bayesian classifier provides a simple and e#ective approach to classifier learning, but its attribute independence assumption is often violated in the real world. A number of approaches have sought to alleviate this problem. A Bayesian tree learning algorithm builds a decision tree, and generates a local naive Bayesian classifier at each leaf. The tests leading to a leaf can alleviate attribute inter-dependencies for the local naive Bayesian classifier. However, Bayesian tree learning still su#ers from the small disjunct problem of tree learning. While inferred Bayesian trees demonstrate low average prediction error rates, there is reason to believe that error rates will be higher for those leaves with few training examples. This paper proposes the application of lazy learning techniques to Bayesian tree induction and presents the resulting lazy Bayesian rule learning algorithm, called Lbr. This algorithm can be justified by a variant of Bayes theorem which supports a weaker conditional attribute independence assumption than is required by naive Bayes. For each test example, it builds a most appropriate rule with a local naive Bayesian classifier as its consequent. It is demonstrated that the computational requirements of Lbr are reasonable in a wide cross-section of natural domains. Experiments with these domains show that, on average, this new algorithm obtains lower error rates significantly more often than the reverse in comparison to a naive Bayesian classifier, C4.5, a Bayesian tree learning algorithm, a constructive Bayesian classifier that eliminates attributes and constructs new attributes using Cartesian products of existing nominal attributes, and a lazy decision tree learning algorithm. It also outperforms, although the result is not statisticall... - In Uncertainty in Artificial Intelligence , 1995 "... Chain graphs combine directed and undirected graphs and their underlying mathematics combines properties of the two. This paper gives a simplified definition of chain graphs based on a hierarchical combination of Bayesian (directed) and Markov (undirected) networks. Examples of a chain graph are mul ..." Cited by 27 (1 self) Add to MetaCart Chain graphs combine directed and undirected graphs and their underlying mathematics combines properties of the two. This paper gives a simplified definition of chain graphs based on a hierarchical combination of Bayesian (directed) and Markov (undirected) networks. Examples of a chain graph are multivariate feed-forward networks, clustering with conditional interaction between variables, and forms of Bayes classifiers. Chain graphs are then extended using the notation of plates so that samples and data analysis problems can be represented in a graphical model as well. Implications for learning are discussed in the conclusion. 1 Introduction Probabilistic networks are a notational device that allow one to abstract forms of probabilistic reasoning without getting lost in the mathematical detail of the underlying equations. They offer a framework whereby many forms of probabilistic reasoning can be combined and performed on probabilistic models without careful hand programming. Efforts ... - In NIPS , 1998 "... This publication can be retrieved by anonymous ftp to publications.ai.mit.edu. This paper introduces a probability model, the mixture of trees that can account for sparse, dynamically changing dependence relationships. We present a family of efficient algorithms based on the EM and the Minimum Spann ..." Cited by 27 (6 self) Add to MetaCart This publication can be retrieved by anonymous ftp to publications.ai.mit.edu. This paper introduces a probability model, the mixture of trees that can account for sparse, dynamically changing dependence relationships. We present a family of efficient algorithms based on the EM and the Minimum Spanning Tree algorithms that learn mixtures of trees in the ML framework. The method can be extended to take into account priors and, for a wide class of priors that includes the Dirichlet and the MDL priors, it preserves its computational efficiency. Experimental results demonstrate the excellent performance of the new model both in density estimation and in classification. Finally, we show that a single tree classifier acts like an implicit feature selector, thus making the classification performance insensitive to irrelevant attributes. , 1998 "... Graphical structures such as Bayesian networks or M arkov networks are very useful tools for representing irrelevance or independency relationships, and they may be used to efficiently perform reasoning tasks. Singly connected networks are important specific cases where there is no more than one un ..." Cited by 18 (10 self) Add to MetaCart Graphical structures such as Bayesian networks or M arkov networks are very useful tools for representing irrelevance or independency relationships, and they may be used to efficiently perform reasoning tasks. Singly connected networks are important specific cases where there is no more than one undirected path connecting each pair of variables. The aim of this paper is to investigate the kind of properties that a dependency model must verify in order to be equivalent to a singly connected graph structure, as a way of driving automated discovery and construction of singly connected networks in data. The main results are the characterizations of those dependency models which are isomorphic to singly connected graphs (either via the d-separation criterion for directed acyclic graphs or via the separation criterion for undirected graphs), as well as the development of efficient algorithms for learning singly connected graph representations of dependency models.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=212695","timestamp":"2014-04-18T22:54:56Z","content_type":null,"content_length":"40790","record_id":"<urn:uuid:50351447-69be-44e5-8cfc-531701a2ad44>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
Simplifying a summation March 5th 2007, 07:42 AM #1 Junior Member Nov 2006 Simplifying a summation Hi, how would you simplify this summation. I have the steps, but I don't see it. Can someone explain it? =] Sum from n = 0 to infinity: [(lambda*e^t)^n]/n! The next step is: e^(lambda*e^t) okay, so this step is just working backwards. you have to know the formula for the taylor expansion of e. it goes like this: so you notice in your question, what is in place of x is lambda*e^t, so you just replace x with that in the above formula, so you see on the left it is equal to e^(lambda*e^t) wow! Thanks! xD March 5th 2007, 07:59 AM #2 March 5th 2007, 08:22 AM #3 Junior Member Nov 2006
{"url":"http://mathhelpforum.com/calculus/12190-simplifying-summation.html","timestamp":"2014-04-19T16:14:19Z","content_type":null,"content_length":"34829","record_id":"<urn:uuid:e2f7d7a0-4ac2-4f36-a447-1fa1b8b413de>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: ultrafinitism Robert Black Robert.Black at nottingham.ac.uk Wed Nov 1 19:58:52 EST 2000 Professors Sazanov and Kanovei both seem to be defending some form of ultrafinitism. That's fine, it's a good old Russian tradition, and perhaps they're right (though I don't myself think so). What is odd, though, is that they both seem to think that this viewpoint is just obviously true, whereas it's (1) *radically* revisionary of what we ordinarily think and (2) to my knowledge at least has never achieved any adequate formal expression (e.g. in the way Heyting managed to give formal expression to intuitionist ideas). There are all sorts of problems with this sort of ultrafinitism (which is not to say that the problems are insoluble). Just for starters, if mathematical proof has to do with formal provability in, say, ZFC, and if the latter is to be understood not in terms of the existence of abstract structures but in terms of concrete, feasible proofs made of dried ink, is there any reason at all to think that formalizations in ZFC of currently accepted proofs would fit into the universe? (There can be spectacular blow-ups here: I seem to remember a FOM posting of a couple of months ago giving a quite grotesque length for Bourbaki's definition of '2' reduced to primitive notation.) Robert Black Dept of Philosophy University of Nottingham Nottingham NG7 2RD tel. 0115-951 5845 More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2000-November/004510.html","timestamp":"2014-04-21T08:31:29Z","content_type":null,"content_length":"3571","record_id":"<urn:uuid:b344ec61-20ad-4e1d-9d7e-8e463e197c5e>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
For what values of x is the curve y=e^(-x^2) ...? Problem: For what values of $x$ is the curve $y\,=\,e^{-x^2}$ concave downward? OK, so this calls for a 2nd derivative test. So this is what I did: $\frac{dy}{dx}\,=\,-2xe^{-x^2}$. That was easy. But now we have a product. $\frac{d^2y}{dx^2}\,=\,4e^{-x^2}(x^2\,+1)$. Now, if I'm looking for concave downward, I want the value of the $y''$ expression to be $<\,0$, $4e^{-x^2}(x^2\,+1)\,<\,0$. We know $x^2\,+\1\,>\,0$; then we must be really looking for $e^{-x^2}\,<\,0$ That's how far I got with it. The answer is supposed to be the interval $[-\frac{\sqrt{2}}{2},\,\frac{\sqrt{2}}{2}]$. Did I botch something somewhere? Re: For what values of x is the curve y=e^(-x^2) ...? jaybird0827 wrote:Problem: For what values of $x$ is the curve $y\,=\,e^{-x^2}$ concave downward? OK, so this calls for a 2nd derivative test. So this is what I did: $\frac{dy}{dx}\,=\,-2xe^{-x^2}$. That was easy. But now we have a product. $\frac{d^2y}{dx^2}\,=\,4e^{-x^2}(x^2\,+1)$. Now, if I'm looking for concave downward, I want the value of the $y''$ expression to be $<\,0$, $4e^{-x^2}(x^2\,+1)\,<\,0$. We know $x^2\,+\1\,>\,0$; then we must be really looking for $e^{-x^2}\,<\,0$ That's how far I got with it. The answer is supposed to be the interval $[-\frac{\sqrt{2}}{2},\,\frac{\sqrt{2}}{2}]$. Did I botch something somewhere? Re: For what values of x is the curve y=e^(-x^2) ...? Martingale wrote: I did wonder if I'd messed up on the 2nd derivative. Thanks, Martingale!
{"url":"http://www.purplemath.com/learning/viewtopic.php?f=14&t=665&p=2074","timestamp":"2014-04-19T17:20:57Z","content_type":null,"content_length":"23436","record_id":"<urn:uuid:76b1dbc0-f94a-4d56-8b8c-4285bdd21090>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: "xtmixed" and bootstrapping standard errors of a ratio [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: "xtmixed" and bootstrapping standard errors of a ratio From ymarchenko@stata.com (Yulia Marchenko, StataCorp) To statalist@hsphsun2.harvard.edu Subject Re: st: "xtmixed" and bootstrapping standard errors of a ratio Date Wed, 05 Apr 2006 11:04:47 -0500 Lena Lindahl <lena.Lindahl@sofi.su.se> asks how to bootstrap the standard errors for the intraclass correlation after -xtmixed-: > I want to estimate the correlation in income among pupils who went in the same > school. In order to achieve the variance components needed to calculate the > correlation I have used xtmixed in the following way: > xtmixed income || school:, cov(unstructured) variance > Then I get estimates of within and between school variance and their standard > errors. The correlation is calculated as the proportion of the total variance > that occurs between schools: ((between variance/(between+within variance)). > How do I bootstrap standard errors of the correlation? Lena can use -nlcom- after -xtmixed- to obtain the standard error for the intraclass correlation. Here is an example using the auto dataset: /***** begin example *****/ sysuse auto, clear xtmixed mpg || rep78: matrix list e(b) local var_u exp([lns1_1_1]_b[_cons])^2 local var_e exp([lnsig_e]_b[_cons])^2 nlcom `var_u'/(`var_u'+`var_e') mat V = r(V) di as txt "SE = " as res sqrt(V[1,1]) /****** end example ******/ We can look up the corresponding names of the coefficients to be used in -nlcom- from the column names of the e(b) matrix (. matrix list e(b)). Also, since the coefficients in e(b) are stored in the alternative metric we need to transform them first to get the variance components. Note also that since the dimension of the random effect is one in the model above there is no need to specify -cov(unstructured)-. To bootstrap standard error of the intraclass correlation Lena needs to write a separate program that returns the values of this standard error in r() and use that program in -bootstrap-: /***** begin example *****/ cap program drop bs_rho_se program bs_rho_se, rclass xtmixed mpg || rep78: local var_u exp([lns1_1_1]_b[_cons])^2 local var_e exp([lnsig_e]_b[_cons])^2 nlcom `var_u'/(`var_u'+`var_e') ret list tempname V mat `V' = r(V) return scalar rho_se = sqrt(`V'[1,1]) sysuse auto, clear bootstrap rho_se = r(rho_se): bs_rho_se /****** end example ******/ For more detailed examples see [R] bootstrap. -- Yulia * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2006-04/msg00142.html","timestamp":"2014-04-19T00:14:51Z","content_type":null,"content_length":"7763","record_id":"<urn:uuid:01a101c5-6eac-4375-b6e1-ff0659c58de4>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
rectilinear polygon intersection up vote 8 down vote favorite I am looking for / trying to develop an optimal algorithm for rectilinear polygon intersection with rectangles. The polygons I am testing do not have holes. Answers like those given here and here are for very general polygons, and the solutions are understandably quite complex. Hoping that the S.O. community can help me document algorithms for the special cases with just rectilinear polygons. I am looking for the polygon filled in green in the image below: algorithm math geometry are the edges of the rectilinear polygon and the rectangles parallel to the axes ? – Andre Holzner May 21 '11 at 19:45 @Andre -- yes -- all lines are parallel – jedierikb May 21 '11 at 19:47 As a first thought, storing the rectilinear polygon in a segment tree (for two dimensions) comes to my mind. Assuming that the rectilinear polygons are the ones which do not change and the rectangles vary. – Andre Holzner May 21 '11 at 19:53 Yes, your assumption is correct about what is mutable and what is not. Thanks for the suggestion. – jedierikb May 21 '11 at 20:12 add comment 2 Answers active oldest votes The book Computational Geometry: an Introduction by Preparata and Shamos has a chapter on rectilinear polygons. up vote 2 down vote Thank you. I will look at chapters 2 and 8. I see the term I want is isothetic polygons. – jedierikb May 21 '11 at 19:32 add comment Use a sweep line algorithm, making use of the fact that a rectilinear polygon is defined by its vertices. Represent the vertices along with the rectangle that they belong to, i.e. something like (x, y, #rect). To this set of points, add those points that result from the intersections of all edges. These new points are of the form (x, y, final), since we already know that they belong to the resulting set of points. • sort all points by their x-value • use a sweep line, starting at the first x-coordinate; for each new point: □ if it's a "start point", add it to a temporary set T. Mark it "final" if it's a point from rectangle A and between y-coordinates from points from rectangle B in T (or vice versa). up vote 1 □ if it's an "end point", remove it and its corresponding start point from T. down vote After that, all points that are marked "final" denote the vertices of the resulting polygon. Let N be the total number of points. Further assuming that testing whether we should mark a point as being "final" takes time O(log(n)) by looking up T, this whole algorithm is in O(N*log Note that the task of finding all intersections can be incorporated into the above algorithm, since finding all intersections efficiently is itself a sweep line algorithm usually. Also note that the resulting set of points may contain more than one polygon, which makes it slightly harder to reconstruct the solution polygons out of the "final" vertices. add comment Not the answer you're looking for? Browse other questions tagged algorithm math geometry or ask your own question.
{"url":"http://stackoverflow.com/questions/6083264/rectilinear-polygon-intersection","timestamp":"2014-04-18T01:41:15Z","content_type":null,"content_length":"73318","record_id":"<urn:uuid:62a1dabd-291c-4701-99f4-ae04bad7d28f>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
Biostatistics for Oral Healthcare ISBN: 978-0-8138-2818-3 344 pages January 2008, Wiley-Blackwell Biostatistics for Oral Healthcare offers students, practitioners and instructors alike a comprehensive guide to mastering biostatistics and their application to oral healthcare. Drawing on situations and methods from dentistry and oral healthcare, this book provides a thorough treatment of statistical concepts in order to promote in-depth and correct comprehension, supported throughout by technical discussion and a multitude of practical examples. See More Chapter 1. Introduction 1. What Is Biostatistics?. 2. Why Do I Need Statistics?. 3. How Much Mathematics Do I Need?. 4. How to Study Statistics?. 5. Reference. Chapter 2. Summarizing Data. 1. Raw Data and Basic Terminology. 2. The Levels of Measurements. 3. Frequency Distributions. Frequency Tables. Relative Frequency. 4. Graphs. Bar Graphs. Pie Charts. Line Graphs. Stem and Leaf Plots. 5. Clinical Trials. 6. Confounding Variables. 7. Exercises. 8. References. Chapter 3. Measures of Central Tendency, Dispersion, and Skewness. 1. Introduction. 2. Mean. 3. Weighted Mean. 4. Median. 5. Mode. 6. Geometric Mean. 7. Harmonic Mean. 8. Mean and Median of Grouped Data. 9. Mean of Two or More Means. 10. Range. 11. Percentiles and Interquartile Range. 12. Box-whisker Plot. 13. Variance and Standard Deviation. 14. Coefficient of Variation. 15. Variance of the Grouped Data. 16. Skewness. 17. Exercises. 18. References. Chapter 4. Probability. 1. Introduction. 2. Sample Space and Events. 3. Basic Properties of Probability. 4. Independence and Mutually Exclusive Events. 5. Conditional Probability. 6. Bayes Theorem. 7. Rates and Proportions. Prevalence and Incidence. Sensitivity and Specificity. Relative Risk and Odds Ratio. 8. Exercises. 9. References. Chapter 5. Probability Distributions. 1. Introduction. 2. Binomial Distribution. 3. Poisson Distribution. 4. Poisson Approximation to Binomial Distribution. 5. Normal Distribution. Properties of Normal Distributions. Standard Normal Distribution. Using Normal Probability Table. Further Applications of Normal Probability. Normal Approximation to the Binomial Distribution. 6. Exercises. 7. References. Chapter 6. Sampling Distributions. 1. Introduction. 2. Sampling Distribution of the Mean. Standard Error of the Sample Mean. Central Limit Theorem. 3. Student's t Distribution. 4. Exercises. 5. References. Chapter 7. Confidence Intervals and Sample Size. 1. Introduction. 2. Confidence Intervals for the Mean and Sample Size n when Is Known. 3. Confidence Intervals for the Mean when is Not Known. 4. Confidence Intervals for the Binomial Parameter p. 5. Confidence Intervals for the Variances and Standard Deviations. 6. Exercises. 7. References. Chapter 8. Hypothesis Testing: One Sample Case. 1. Introduction. 2. Concept of Hypothesis Testing. 3. One-tailed Z Test of the Mean of a Normal Distribution When Is Known. 4. Two-tailed Z Test of the Mean of a Normal Distribution When Is Known. 5. t Test of the Mean of a Normal Distribution. 6. The Power of a Test and Sample Size. 7. One-Sample Test for a Binomial Proportion. 8. One-Sample Test for the Variance of a Normal Distribution. 9. Exercises. 10. References. Chapter 9. Hypothesis Testing: Two-Sample Case. 1. Introduction. 2. Two Sample Z Test for Comparing Two Means. 3. Two Sample t Test for Comparing Two Means with Equal Variances. 4. Two Sample t Test for Comparing Two Means with Unequal Variances. 5. The Paired t Test. 6. Z Test for Comparing Two Binomial Proportions. 7. The Sample Size and Power of a Two Sample Test. Estimation of a Sample Size. The Power of a Two Sample Test. 8. The F Test for the Equality of Two Variances. 9. Exercises. 10. References. Chapter 10. Categorical Data Analysis. 1. Introduction. 2. 2 x 2 Contingency Table. 3. r x c Contingency Table. 4. The Cochran-Mantel-Haenszel Test. 5. The McNemar Test. 6. The Kappa Statistic. 7. Goodness of Fit Test. 8. Exercises. 9. References. Chapter 11. Regression Analysis and Correlation. 1. Introduction. 2. Simple Linear Regression. Description of Regression Model. Estimation of Regression Function. Aptness of a Model. 3. Correlation Coefficient. Significance of Correlation Coefficient. 4. Coefficient of Determination. 5. Multiple Regression. 6. Logistic Regression. The Logistic Regression Model. Fitting the Logistic Regression Model. 7. Multiple Logistic Regression Model. 8. Exercises. 9. References. Chapter 12. One-Way Analysis of Variance. 1. Introduction. 2. Factors and Factor Levels. 3. Statement of the Problem and Model Assumptions. 4. Basic Concepts in ANOVA. 5. F-test for Comparison of k Population Means. 6. Multiple Comparisons Procedures. Least Significant Difference Method. Bonferroni Approach. Scheffe's Method. Tukey's Procedure. 7. One-way ANOVA Random Effects Model. 8. Test for Equality of k Variances. Bartlett's Test. Hartley's Test. 9. Exercises. 10. References. Chapter 13. Two-Way Analysis of Variance. 1. Introduction. 2. General Model. 3. Sum of Squares and Degrees of Freedom. 4. F Test. 5. Exercises. 6. References. Chapter 14. Non-Parametric Statistics. 1. Introduction. 2. The Sign Test. 3. The Wilcoxon Rank Sum Test. 4. The Wilcoxon Signed Rank Test. 5. The Median Test. 6. The Kruskal-Wallis Test. 7. The Friedman Test. 8. The Permutation Test. 9. The Cochran Test. 10. The Squared Rank Test For Variances. 11. Spearman's Rank Correlation Coefficient. 12. Exercises. 13. References. Chapter 15. Survival Analysis. 1. Introduction. 2. Person-Time Method and Mortality Rate. 3. Life Table Analysis. 4. Hazard Function. 5. Kaplan-Meier Product Limit Estimator. 6. Comparing Survival Functions. Gehan's Generalized Wilcoxon Test. The Logrank Test. The Mantel and Haenszel Test. 7. Piecewise Exponential Estimator (PEXE). Small Sample Illustration. General Description of PEXE. An Example. Properties of PEXE and Comparisons with Kaplan-Meier Estimator. 8. References. Solutions to Selected Exercises. Table A. Table of Random Numbers. Table B. Table of Binomial Probabilities. Table C. Table of Poisson Probabilities. Table D. Standard Normal Probabilities. Table E. Percentiles of the t Distribution. Table F. Percentiles of the Distribution. Table G. Percentiles of the F Distribution See More Jay S. Kim, Ph.D., is Professor of Biostatistics at Loma Linda University, CA. A specialist in this area, he has been teaching biostatistics since 1977 to students in public health, medical school, and dental school. Currently his primary responsibility is teaching biostatistics courses to dental hygiene students, pre-doctoral dental students and students in advanced dental education. He also collaborates with the faculty and students on a variety of research projects. Ronald J. Dailey, Ph.D., M.A., is the Associate Dean for Academic Affairs at Loma Linda University School of Dentistry. He has taught dental, dental hygiene and college students for the past 32 See More ● Comprehensive guide to biostatistics ● Draws on examples from dentistry and oral healthcare research ● Encourages intuitive understanding of statistical concepts ● Includes glossary of definitions and notation See More "This is a book of massive erudition, of great value to the career statistician but not much help to those of us who rank statistics lower in our priorities." (Primary Dental Care and Team in Practice, 1 April 2011) See More
{"url":"http://www.wiley.com/WileyCDA/WileyTitle/productCd-081382818X.html","timestamp":"2014-04-19T05:35:40Z","content_type":null,"content_length":"51065","record_id":"<urn:uuid:6330db4d-aa19-424d-88e1-98e9a2ffd449>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about algebraic curves on Quomodocumque Very nice paper just posted by Vivek Shende and Jacob Tsimerman. Take a sequence {C_i} of hyperelliptic curves of larger and larger genus. Then for each i, you can look at the pushforward of a random line bundle drawn uniformly from Pic(C) / [pullbacks from P^1] to P^1, which is a rank-2 vector bundle. This gives you a measure $\mu_i$ on Bun_2(P^1), the space of rank-2 vector bundles, and Shende and Tsimerman prove, just as you might hope, that this sequence of measures converges to the natural measure. I think (but I didn’t think this through carefully) that this corresponds to saying that if you look at a sequence of quadratic imaginary fields with increasing discriminant, and for each field you write down all the ideal classes, thought of as unimodular lattices in R^2 up to homothety, then the corresponding sequence of (finitely supported) measures on the space of lattices converges to the natural one. Equidistribution comes down to counting, and the method here is to express the relevant counting problem as a problem of counting points on a variety (in this case a Brill-Noether locus inside Pic (C_i)), which by Grothendieck-Lefschetz you can do if you can control the cohomology (with its Frobenius action.) The high-degree part of the cohomology they can describe explicitly, and fortunately they are able to exert enough control over the low-degree Betti numbers to show that the contribution of this stuff is negligible. In my experience, it’s often the case that showing that the contribution of the low-degree stuff, which “should be small” but which you don’t actually have a handle on, is often the bottleneck! And indeed, for the second problem they discuss (where you have a sequence of hyperelliptic curves and a single line bundle on each one) it is exactly this point that stops them, for the moment, from having the theorem they want. Error terms are annoying. (At least when you can’t prove they’re smaller than the main term.) Y. Zhao and the Roberts conjecture over function fields Before the developments of the last few years the only thing that was known about the Cohen-Lenstra conjecture was what had already been known before the Cohen-Lenstra conjecture; namely, that the number of cubic fields of discriminant between -X and X could be expressed as $\frac{1}{3\zeta(3)} X + o(X)$. It isn’t hard to go back and forth between the count of cubic fields and the average size of the 3-torsion part of the class group of quadratic fields, which gives the connection with Cohen-Lenstra in its usual form. Anyway, Datskovsky and Wright showed that the asymptotic above holds (for suitable values of 12) over any global field of characteristic at least 5. That is: for such a field K, you let N_K(X) be the number of cubic extensions of K whose discriminant has norm at most X; then $N_K(X) = c_K \zeta_K(3)^{-1} X + o(X)$ for some explicit rational constant $c_K$. One interesting feature of this theorem is that, if it weren’t a theorem, you might doubt it was true! Because the agreement with data is pretty poor. That’s because the convergence to the Davenport-Heilbronn limit is extremely slow; even if you let your discriminant range up to ten million or so, you still see substantially fewer cubic fields than you’re supposed to. In 2000, David Roberts massively clarified the situation, formulating a conjectural refinement of the Davenport-Heilbronn theorem motivated by the Shintani zeta functions: $N_{\mathbf{Q}}(X) = (1/3)\zeta(3)^{-1} X + c X^{5/6} + o(X^{5/6})$ with c an explicit (negative) constant. The secondary term with an exponent very close to 1 explains the slow convergence to the Davenport-Heilbronn estimate. The Datskovsky-Wright argument works over an arbitrary global field but, like most arguments that work over both number fields and function fields, it is not very geometric. I asked my Ph.D. student Yongqiang Zhao, who’s finishing this year, to revisit the question of counting cubic extensions of a function field F_q(t) from a more geometric point of view to see if he could get results towards the Roberts conjecture. And he did! Which is what I want to tell you about. But while Zhao was writing his thesis, there was a big development — the Roberts conjecture was proved. Not only that — it was proved twice! Once by Bhargava, Shankar, and Tsimerman, and once by Thorne and Taniguchi, independently, simultaneously, and using very different methods. It is certainly plausible that these methods can give the Roberts conjecture over function fields, but at the moment, they don’t. Neither does Zhao, yet — but he’s almost there, getting $N_K(T) = \zeta_K(3)^{-1} X + O(X^{5/6 + \epsilon})$ for all rational function fields K = F_q(t) of characteristic at least 5. And his approach illuminates the geometry of the situation in a very beautiful way, which I think sheds light on how things work in the number field case. Geometrically speaking, to count cubic extensions of F_q(t) is to count trigonal curves over F_q. And the moduli space of trigonal curves has a classical unirational parametrization, which I learned from Mike Roth many years ago: given a trigonal curve Y, you push forward the structure sheaf along the degree-3 map to P^1, yielding a rank-3 vector bundle on P^1; you mod out by the natural copy of the structure sheaf; and you end up with a rank-2 vector bundle W on P^1, whose projectivization is a rational surface in which Y embeds. This rational surface is a Hirzebruch surface F_k, where k is an integer determined by the isomorphism class of the vector bundle W. (This story is the geometric version of the Delone-Fadeev parametrization of cubic rings by binary cubic forms.) This point of view replaces a problem of counting isomorphism classes of curves (hard!) with a problem of counting divisors in surfaces (not easy, but easier.) It’s not hard to figure out what linear system on F_k contains Y. Counting divisors in a linear system is nothing but a dimension count, but you have to be careful — in this problem, you only want to count smooth members. That’s a substantially more delicate problem. Counting all the divisors is more or less the problem of counting all cubic rings; that problem, as the number theorists have long known, is much easier than the problem of counting just the maximal orders in cubic fields. Already, the geometric meaning of the negative secondary term becomes quite clear; it turns out that when k is big enough (i.e. if the Hirzebruch surface is twisty enough) then the corresponding linear system has no smooth, or even irreducible, members! So what “ought” to be a sum over all k is rudely truncated; and it turns out that the sum over larger k that “should have been there” is on order X^{5/6}. So how do you count the smooth members of a linear system? When the linear system is highly ample, this is precisely the subject of Poonen’s well-known “Bertini theorem over finite fields.” But the trigonal linear systems aren’t like that; they’re only “semi-ample,” because their intersection with the fiber of projection F_k -> P^1 is fixed at 3. Zhao shows that, just as in Poonen’s case, the probability that a member of such a system is smooth converges to a limit as the linear system gets more complicated; only this limit is computed, not as a product over points P of the probability D is smooth at P, but rather a product over fibers F of the probability that D is smooth along F. (This same insight, arrived at independently, is central to the paper of Erman and Wood I mentioned last week.) This alone is enough for Zhao to get a version of Davenport-Heilbronn over F_q(t) with error term O(X^{7/8}), better than anything that was known for number fields prior to last year. How he gets even closer to Roberts is too involved to go into on the blog, but it’s the best part, and it’s where the algebraic geometry really starts; the main idea is a very careful analysis of what happens when you take a singular curve on a Hirzebruch surface and start carrying out elementary transforms at the singular points, making your curve more smooth but also changing which Hirzebruch surface it’s on! To what extent is Zhao’s method analogous to the existing proofs of the Roberts conjecture over Q? I’m not sure; though Zhao, together with the five authors of the two papers I mentioned, spent a week huddling at AIM thinking about this, and they can comment if they want. I’ll just keep saying what I always say: if a problem in arithmetic statistics over Q is interesting, there is almost certainly interesting algebraic geometry in the analogous problem over F_q(t), and the algebraic geometry is liable in turn to offer some insights into the original question. Tagged algebraic curves, algebraic geometry, arithmetic statistics, bhargava, david roberts, function fields, i got yer postdoc right here, number theory, trigonal, yongqiang zhao Hwang and To on injectivity radius and gonality, and “Typical curves are not typical.” Interesting new paper in the American Journal of Mathematics, not on arXiv unfortunately. An old theorem of Li and Yau shows how to lower-bound the gonality of a Riemann surface in terms of the spectral gap on its Laplacian; this (together with new theorems by many people on superstrong approximation for thin groups) is what Chris Hall, Emmanuel Kowalski, and I used to give lower bounds on gonalities in various families of covers of a fixed base. The new paper gives a lower bound for the gonality of a compact Riemann surface in terms of the injectivity radius, which is half the length of the shortest closed geodesic loop. You could think of it like this — they show that the low-gonality loci in M_g stay very close to the boundary. “The middle” of M_g is a mysterious place. A “typical” curve of genus g has a big spectral gap, gonality on order g/2, a big injectivity radius… but most curves you can write down are just the Typical curves are not typical. When g is large, M_g is general type, and so the generic curve doesn’t move in a rational family. Are all the rational families near the boundary? Gaby Farkas explained to me on Math Overflow how to construct a rationally parametrized family of genus-g curves whose gonality is generic, as a pencil of curves on a K3 surface. I wonder how “typical” these curves are? Do some have large injectivity radius? Or a large spectral gap? Tagged algebraic curves, algebraic geometry, moduli of curves Random Dieudonne modules, random p-divisible groups, and random curves over finite fields Bryden Cais, David Zureick-Brown and I have just posted a new paper, “Random Dieudonne modules, random p-divisible groups, and random curves over finite fields.” What’s the main idea? It actually arose from a question [DEL:David:DEL] Bryden asked during Derek Garton‘s speciality exam. We know by now that there is some insight to be gained about studying p-parts of class groups of number fields (the Cohen-Lenstra problem) by thinking about the analogous problem of studying class groups of function fields over F_l, where F_l has characteristic prime to p. The question David asked was: well, what about the p-part of the class group of a function field whose characteristic is equal to p? That’s a different matter altogether. The p-divisible group attached to the Jacobian of a curve C in characteristic l doesn’t contain very much information; more or less it’s just a generalized symplectic matrix of rank 2g(C), defined up to conjugacy, and the Cohen-Lenstra heuristics ask this matrix to behave like a random matrix with respect to various natural statistics. But p-divisible groups in characteristic p are where the fun is! For instance, you can ask: What is the probability that a random curve (resp. random hyperelliptic curve, resp. random plane curve, resp. random abelian variety) over F_q is ordinary? In my view it’s sort of weird that nobody has asked this before! But as far as I’ve been able to tell, this is the first time the question has been considered. We generate lots of data, some of which is very illustrative and some of which is (to us) mysterious. But data alone is not that useful — much better to have a heuristic model with which we can compare the data. Setting up such a model is the main task of the paper. Just as a p-divisible group in characteristic l is decribed by a matrix, a p-divisible group in characteristic p is described by its Dieudonné module; this is just another linear-algebraic gadget, albeit a little more complicated than a matrix. But it turns out there is a natural “uniform distribution” on isomorphism classes of Dieudonné modules; we define this, work out its properties, and see what it would say about curves if indeed their Dieudonné modules were “random” in the sense of being drawn from this distribution. To some extent, the resulting heuristics agree with data. But in other cases, they don’t. For instance: the probability that a hyperelliptic curve of large genus over F_3 is ordinary appears in practice to be very close to 2/3. But the probability that a smooth plane curve of large genus over F_3 is ordinary seems to be converging to the probability that a random Dieudonné module over F_3 is ordinary, which is (1-1/3)(1-1/3^3)(1-1/3^5)….. = 0.639…. Why? What makes hyperelliptic curves over F_3 more often ordinary than their plane curve counterparts? (Note that the probability of ordinarity, which makes good sense for those who already know Dieudonné modules well, is just the probability that two random maximal isotropic subspaces of a symplectic space over F_q are disjoint. So some of the computations here are in some sense the “symplectic case” of what Poonen and Rains computed in the orthogonal case. We compute lots more stuff (distribution of a-numbers, distribution of p-coranks, etc.) and decline to compute a lot more (distribution of Newton polygon, final type…) Many interesting questions Tagged algebraic curves, bryden cais, cohen-lenstra, david zureick-brown, Dieudonne module, Newton polygon, positive characteristic Gonality, the Bogomolov property, and Habegger’s theorem on Q(E^tors) I promised to say a little more about why I think the result of Habegger’s recent paper, ” Small Height and Infinite Non-Abelian Extensions,” is so cool. First of all: we say an algebraic extension K of Q has the Bogomolov property if there is no infinite sequence of non-torsion elements x in K^* whose absolute logarithmic height tends to 0. Equivalently, 0 is isolated in the set of absolute heights in K^*. Finite extensions of Q evidently have the Bogomolov property (henceforth: (B)) but for infinite extensions the question is much subtler. Certainly $\bar{\mathbf{Q}}$ itself doesn’t have (B): consider the sequence $2^{1/2}, 2^{1/3}, 2^{1/4}, \ldots$ On the other hand, the maximal abelian extension of Q is known to have (B) (Amoroso-Dvornicich) , as is any extension which is totally split at some fixed place p (Schinzel for the real prime, Bombieri-Zannier for the other primes.) Habegger has proved that, when E is an elliptic curve over Q, the field Q(E^tors) obtained by adjoining all torsion points of E has the Bogomolov property. What does this have to do with gonality, and with my paper with Chris Hall and Emmanuel Kowalski from last year? Suppose we ask about the Bogomolov property for extensions of a more general field F? Well, F had better admit a notion of absolute Weil height. This is certainly OK when F is a global field, like the function field of a curve over a finite field k; but in fact it’s fine for the function field of a complex curve as well. So let’s take that view; in fact, for simplicity, let’s take F to be C What does it mean for an algebraic extension F’ of F to have the Bogomolov property? It means that there is a constant c such that, for every finite subextension L of F and every non-constant function x in L^*, the absolute logarithmic height of x is at least c. Now L is the function field of some complex algebraic curve C, a finite cover of P^1. And a non-constant function x in L^* can be thought of as a nonzero principal divisor. The logarithmic height, in this context, is just the number of zeroes of x — or, if you like, the number of poles of x — or, if you like, the degree of x, thought of as a morphism from C to the projective line. (Not necessarily the projective line of which C is a cover — a new projective line!) In the number field context, it was pretty easy to see that the log height of non-torsion elements of L^* was bounded away from 0. That’s true here, too, even more easily — a non-constant map from C to P^1 has degree at least 1! There’s one convenient difference between the geometric case and the number field case. The lowest log height of a non-torsion element of L^* — that is, the least degree of a non-constant map from C to P^1 — already has a name. It’s called the gonality of C. For the Bogomolov property, the relevant number isn’t the log height, but the absolute log height, which is to say the gonality divided by [L:F]. So the Bogomolov property for F’ — what we might call the geometric Bogomolov property — says the following. We think of F’ as a family of finite covers C / P^1. Then (GB) There is a constant c such that the gonality of C is at least c deg(C/P^1), for every cover C in the family. What kinds of families of covers are geometrically Bogomolov? As in the number field case, you can certainly find some families that fail the test — for instance, gonality is bounded above in terms of genus, so any family of curves C with growing degree over P^1 but bounded genus will do the trick. On the other hand, the family of modular curves over X(1) is geometrically Bogomolov; this was proved (independently) by Abramovich and Zograf. This is a gigantic and elegant generalization of Ogg’s old theorem that only finitely many modular curves are hyperelliptic (i.e. only finitely many have gonality 2.) At this point we have actually more or less proved the geometric version of Habegger’s theorem! Here’s the idea. Take F = C(t) and let E/F be an elliptic curve; then to prove that F(E(torsion)) has (GB), we need to give a lower bound for the curve C_N obtained by adjoining an N-torsion point to F. (I am slightly punting on the issue of being careful about other fields contained in F(E (torsion)), but I don’t think this matters.) But C_N admits a dominant map to X_1(N); gonality goes down in dominant maps, so the Abramovich-Zograf bound on the gonality of X_1(N) provides a lower bound for the gonality of C_N, and it turns out that this gives exactly the bound required. What Chris, Emmanuel and I proved is that (GB) is true in much greater generality — in fact (using recent results of Golsefidy and Varju that slightly postdate our paper) it holds for any extension of C(t) whose Galois group is a perfect Lie group with Z_p or Zhat coefficients and which is ramified at finitely many places; not just the extension obtained by adjoining torsion of an elliptic curve, for instance, but the one you get from the torsion of an abelian variety of arbitrary dimension, or for that matter any other motive with sufficiently interesting Mumford-Tate group. Question: Is Habegger’s theorem true in this generality? For instance, if A/Q is an abelian variety, does Q(A(tors)) have the Bogomolov property? Question: Is there any invariant of a number field which plays the role in the arithmetic setting that “spectral gap of the Laplacian” plays for a complex algebraic curve? A word about Habegger’s proof. We know that number fields are a lot more like F_q(t) than they are like C(t). And the analogue of the Abramovich-Zograf bound for modular curves over F_q is known as well, by a theorem of Poonen. The argument is not at all like that of Abramovich and Zograf, which rests on analysis in the end. Rather, Poonen observes that modular curves in characteristic p have lots of supersingular points, because the square of Frobenius acts as a scalar on the l-torsion in the supersingular case. But having a lot of points gives you a lower bound on gonality! A curve with a degree d map to P^1 has at most d(q+1) points, just because the preimage of each of the q+1 points of P^1(q) has size at most d. (You just never get too old or too sophisticated to whip out the Pigeonhole Principle at an opportune moment….) Now I haven’t studied Habegger’s argument in detail yet, but look what you find right in the introduction: The non-Archimedean estimate is done at places above an auxiliary prime number p where E has good supersingular reduction and where some other technical conditions are met…. In this case we will obtain an explicit height lower bound swiftly using the product formula, cf. Lemma 5.1. The crucial point is that supersingularity forces the square of the Frobenius to act as a scalar on the reduction of E modulo p. Yup! There’s no mention of Poonen in the paper, so I think Habegger came to this idea independently. Very satisfying! The hard case — for Habegger as for Poonen — has to do with the fields obtained by adjoining p-torsion, where p is the characteristic of the supersingular elliptic curve driving the argument. It would be very interesting to hear from Poonen and/or Habegger whether the arguments are similar in that case too! Tagged abramovich, algebraic curves, algebraic geometry, arithmetic geometry, bogomolov, elliptic curves, expanders, gonality, habegger, heights, number theory, poonen, rational points, zograf There’s no 4-branched Belyi’s theorem — right? Much discussion on Math Overflow has not resolved the following should-be-easy question: Give an example of a curve in ${\mathcal{M}}_g$ defined over $\bar{Q}$ which is not a family of 4-branched covers of P^1. Surely there is one! But then again, you’d probably say “surely there’s a curve over $\bar{Q}$ which isn’t a 3-branched cover of P^1.” But there isn’t — that’s Belyi’s theorem. Tagged algebraic curves, algebraic geometry, belyi, moduli of curves, PlanetMO Hain-Matsumoto, “Galois actions on fundamental groups of curves…” I recently had occasion to spend some time with Richard Hain and Makoto Matsumoto’s 2005 paper “Galois actions on fundamental groups and the cycle C – C^-,” which I’d always meant to delve into. It’s really beautiful! I cannot say I’ve really delved — maybe something more like scratched — but I wanted to share some very interesting things I learned. Serre proved long ago that the image of the l-adic Galois representation on an elliptic curve E/Q is open in GL_2(Z_l), so long as E doesn’t have CM. This is a geometric condition on E, which is to say it only depends on the basechange of E to an algebraic closure of Q, or even to C. What’s the analogue for higher genus curves X? You might start by asking about the image of the Galois representation G_Q -> GSp_2g(Z_l) attached to the Tate module of the Jacobian of X. This image lands in GSp_{2g}(Z_l). Just as with elliptic curves, any extra endomorphisms of Jac(X) may force the image to be much smaller than GSp_{2g}(Z_l). But the question of whether the image of rho must be open in GSp_2g(Z_l) whenever no “obvious” geometric obstruction forbids it is difficult, and still not completely understood. (I believe it’s still unknown when g is a multiple of 4…?) One thing we do know in general, though, is that when X is the generic curve of genus g (that is, the universal curve over the function field Q(M_g) of M_g) the resulting representation $\rho^{univ}: G_{Q(M_g)} \rightarrow GSp_{2g}(\mathbf{Z}_\ell)$ is surjective. Hain and Matsumoto generalize in a different direction. When X is a curve of genus greater than 1 over a field K, the Galois group of K acts on more than just the Tate modules (or l-adic H_1) of X; it acts on the whole pro-l geometric fundamental group of X, which we denote pi. So we get a morphism $\rho_{X/K}: G_K \rightarrow Aut(\pi)$ What does it mean to ask this representation to have “big image”? Tagged algebraic curves, algebraic geometry, arithmetic geometry, elliptic curves, fundamental groups, Galois representations, gross-schoen, hain, matsumoto, number theory, papers, zhang Do all curves over finite fields have covers with a sqrt(q) eigenvalue? On my recent visit to Illinois, my colleage Nathan Dunfield (now blogging!) explained to me the following interesting open question, whose answer is supposed to be “yes”: Q1: Let f be a pseudo-Anosov mapping class on a Riemann surface Sigma of genus at least 2, and M_f the mapping cylinder obtained by gluing the two ends of Sigma x interval together by means of f. Then M_f is a hyperbolic 3-manifold with first Betti number 1. Is there a finite cover M of M_f with b_1(M) > 1? You might think of this as (a special case of) a sort of “relative virtual positive Betti number conjecture.” The usual vpBnC says that a 3-manifold has a finite cover with positive Betti number; this says that when your manifold starts life with Betti number 1, you can get “extra” first homology by passing to a cover. Of course, when I see “3-manifold fibered over the circle” I whip out a time-worn analogy and think “algebraic curve over a finite field.” So here’s the number theorist’s version of the above Q2: Let X/F_q be an algebraic curve of genus at least 2 over a finite field. Does X have a finite etale cover Y/F_{q^d} such that the action of Frobenius on H^1(Y,Z_ell) has an eigenvalue equal to q^{d/2}? Tagged 3-manifolds, algebraic curves, algebraic geometry, arithmetic geometry, finite fields, finite groups, frobenius, heuristics, napkin scratching, nathan dunfield, number theory, things i don't know, topology
{"url":"http://quomodocumque.wordpress.com/tag/algebraic-curves/","timestamp":"2014-04-17T15:33:22Z","content_type":null,"content_length":"107737","record_id":"<urn:uuid:70d9645c-cb04-4593-895b-a3d3fc7b1339>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
ADINA Theory 1 Page 1 For the theory used in ADINA, for structural analysis, CFD, and FSI, and also for the philosophy used in the program development, please refer to the publications given here: Books by K.J. Bathe and co-authors Finite Element Procedures Finite Element Procedures in Engineering Analysis Numerical Methods in Finite Element Analysis The Mechanics of Solids and Structures — Hierarchical ... The Finite Element Analysis of Shells — Fundamentals Inelastic Analysis of Solids and Structures To Enrich Life (Sample pages here) Proceedings edited by K.J. Bathe Computational Fluid and Solid Mechanics 2001-2011 Nonlinear Finite Element Analysis and ADINA: 1977-1999 (6 volumes) (12 volumes) Theory and Modeling Guides distributed on the ADINA Installation CD. These manuals describe in short form the theory used in ADINA Structures, Thermal, CFD and EM, and give hints for modeling problems correctly. For ADINA users: manuals Papers on the Development of Finite Element Methods, with some of these Papers Related to ADINA For publications that reference the use of ADINA, please see here. Solution Methods for Eigenvalue Problems in Structural Mechanics Bathe, Klaus-Jürgen; Wilson, Edward L. Source: International Journal for Numerical Methods in Engineering, v 6, 213-266, 1973. ISSN: 0029-5981 CODEN: IJNMBH Publisher: John Wiley & Sons, Ltd. Abstract: A survey of probably the most efficient solution methods currently in use for the problems Kφ = ω^2M and Kψ = λK[G]ψ is presented. In the eigenvalue problems the stiffness matrices K and K [G] and the mass matrix M can be full or banded; the mass matrix can be diagonal with zero diagonal elements. The choice is between the well-known QR method, a generalized Jacobi iteration, a new determinant search technique and an automated sub-space iteration. The system size, the bandwidth and the number of required eigenvalues and eigenvectors determine which method should be used on a particular problem. The numerical advantages of each solution technique, operation counts and storage requirements are given to establish guidelines for the selection of the appropriate algorithm. A large number of typical solution times are presented. Keywords: structural mechanics, eigenvalue problem, QR method, subspace iteration Stability and Accuracy Analysis of Direct Integration Methods Bathe, Klaus-Jürgen; Wilson, Edward L. Source: International Journal of Earthquake Engineering and Structural Dynamics, v 1, 283-291, 1973. ISSN: 0098-8847 (print); 1096-9845 (online) Publisher: John Wiley & Sons, Ltd. Abstract: A systematic procedure is presented for the stability and accuracy analysis of direct integration methods in structural dynamics. Amplitude decay and period elongation are used as the basic parameters in order to compare various integration methods. The specific methods studied are the Newmark generalized acceleration scheme, the Houbolt method and the Wilson θ-method. The advantages of each of these methods are discussed. In addition, it is shown how the direct integration of the equations of motion is related to the mode superposition analysis. Keywords: Newmark generalized acceleration scheme, Houbolt method, Wilson θ-method, mode superposition analysis Eigensolution of Large Structural Systems with Small Bandwith Bathe, Klaus-Jürgen; Wilson, Edward L. Source: J. Eng. Mech. Div., v 99, 467-479, June 1973. ISSN: 0044-7951 Publisher: American Society of Civil Engineers Abstract: The basic technique for the accurate calculation of the smallest (largest) eigenvalues and corresponding eigenvectors in large generalized eigenvalue problems arising in dynamic and buckling analysis are considered. This leads to the design of a very efficient practical algorithm when the system has small bandwidth. The solution technique combines an accelerated secant iteration in which the Sturm sequence of the leading principal minors is used with vector inverse iteration. Example analyses are presented to show typical convergence characteristics and solution times. Keywords: buckling, computation, computers, dynamics, eigenvalues Some chapters in books on the theory used in ADINA Towards a Model for Large Strain Anisotropic Elasto-Plasticity Montáns, Francisco Javier; Bathe, Klaus-Jürgen. Source: Chapter in Computational Plasticity, E. Onate and R. Owen, eds., 13-36, 2007. ISBN: 1402065760; 9781402065767 Publisher: Springer-Verlag Finite Element Formulation, Modeling and Solution of Nonlinear Dynamic Problems Bathe, Klaus-Jürgen. Source: Chapter in Numerical Methods for Partial Differential Equations, (S. W. Parter, ed.), 1979. ISBN: 978-0120567614 Publisher: Academic Press Convergence of Subspace Iteration Bathe, Klaus-Jürgen. Source: Formulations and Computational Algorithms in Finite Element Analysis, (K.J. Bathe, J.T. Oden and W. Wunderlich, eds.), 1977. ISBN: 978-0-262-02127-2 Publisher: M.I.T. Press Page 1
{"url":"http://www.adina.com/theory/theory1.shtml","timestamp":"2014-04-20T08:14:36Z","content_type":null,"content_length":"24785","record_id":"<urn:uuid:470da81e-9ff9-405e-bb69-76e6d87890d2>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Russell, IL Prealgebra Tutor Find a Russell, IL Prealgebra Tutor ...My experience with special needs students has led me to be a skillful diagnostician of learning problems in all children and to customize learning plans that work. I use a broad range of methods and strategies to ensure student success, each of which is adapted to the learning style of the stude... 34 Subjects: including prealgebra, English, reading, writing ...As a part of my coursework, I have taken the equivalent of seven semesters of Latin, and I have taken six semesters of Classical Greek. Additionally, I gained a broad array of Latin teaching skills by taking a Latin teaching practicum class. Through this course, I acquired techniques for teachi... 20 Subjects: including prealgebra, English, reading, algebra 1 ...During my internship I was able to gain experience teaching and aiding in multiple classroom settings in which students had varying degrees of skill level. I was also in charge of an after school Algebra Assistance Program in which I, along with the other mathematics department intern, would pro... 4 Subjects: including prealgebra, geometry, algebra 1, algebra 2 ...I have a Masters degree in applied mathematics and most coursework for a doctorate. This includes linear algebra, modern algebra, mathematical physics, topology, complex and real variable analysis and functional analysis in addition to calculus and differential equations. I have a Masters degree in applied mathematics and most coursework for a doctorate. 18 Subjects: including prealgebra, physics, GRE, calculus ...I have been teaching high school math for 7 years. Since I started teaching, I have been working with Juniors who are preparing to take the ACT. I have a plethora of ACT style questions as well as helpful hints for doing well on the test that I use with my students every year. 8 Subjects: including prealgebra, geometry, algebra 1, algebra 2 Related Russell, IL Tutors Russell, IL Accounting Tutors Russell, IL ACT Tutors Russell, IL Algebra Tutors Russell, IL Algebra 2 Tutors Russell, IL Calculus Tutors Russell, IL Geometry Tutors Russell, IL Math Tutors Russell, IL Prealgebra Tutors Russell, IL Precalculus Tutors Russell, IL SAT Tutors Russell, IL SAT Math Tutors Russell, IL Science Tutors Russell, IL Statistics Tutors Russell, IL Trigonometry Tutors Nearby Cities With prealgebra Tutor Benet Lake prealgebra Tutors Franksville prealgebra Tutors Indian Creek, IL prealgebra Tutors Ingleside, IL prealgebra Tutors Kansasville prealgebra Tutors Lindenhurst, IL prealgebra Tutors Paddock Lake, WI prealgebra Tutors Round Lake Heights, IL prealgebra Tutors Somers, WI prealgebra Tutors Sturtevant prealgebra Tutors Third Lake, IL prealgebra Tutors Tower Lakes, IL prealgebra Tutors Trevor prealgebra Tutors Union Grove, WI prealgebra Tutors Woodworth, WI prealgebra Tutors
{"url":"http://www.purplemath.com/Russell_IL_Prealgebra_tutors.php","timestamp":"2014-04-16T16:44:18Z","content_type":null,"content_length":"24298","record_id":"<urn:uuid:c05387fe-3869-45ce-89a1-d19e97c0941e>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: find the derivative of y with respect to x, t or theta. (a) e^7 - 10x Answer: -10e^7-10x (b) 8xe^x - 8e^x answer: 8xe^x (c) y= (x^2 -2x+4)e^x answer: (x^2+2)e^x (d) y= sin e^theta^4 answer: (-4theta^ 3 e^-theta^4) cos e^-theta^4 please show the steps. thank you • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/51181db7e4b0e554778c12d6","timestamp":"2014-04-25T08:22:02Z","content_type":null,"content_length":"70754","record_id":"<urn:uuid:0d0a31b5-897b-445d-b2b0-6c86e3956c40>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
Robert W. Brooks b. Washington, D.C., 16-Sep-1952 d. Montreal, 5-Sep-2002 Robert W. Brooks, 49, a Washington-born mathematics professor known for his work in spectral geometry and fractals, died of a heart attack Sept. 5 at a hospital in Montreal, where he was on sabbatical at McGill University. Dr. Brooks, who was raised in Bethesda, taught at the University of Maryland from 1979 to 1984. Since 1995, he had been on the faculty of Technion, the Israel Institute of Technology in Haifa. He was writing a book on spectral geometry, a field that examines the relationship between the shape of an object and the frequencies at which it vibrates radically. Colleagues said he was one of the world's leading experts in a field that has drawn increasing interest in recent years, yielding applications in physics and computer science. Dr. Brooks also studied "circle packings" -- searching for the fundamental principles of mathematics expressed in the fitting of circular tiles of various sizes into prescribed areas. A McGill colleague said that Dr. Brooks's circle-packing theorem was used recently by William P. Thurston of Princeton University to prove a deep theorem in modern geometry. Dr. Brooks was a 1970 graduate of Walt Whitman High School and a 1974 graduate of Harvard University, where he also received master's and doctoral degrees in mathematics. While undertaking postdoctoral studies at the State University of New York at Stony Brook, Dr. Brooks worked with J. Peter Matelski in creating pictures of fractals -- patterns that reveal greater complexity when enlarged. Fractals can be subdivided into parts that are images of the whole. The researchers' work was later reflected in the groundbreaking "Mandelbrot set" pictures of mathematician Benoit Mandelbrot, who first defined fractals. Dr. Brooks went on to become a professor of mathematics at the University of Southern California, after having done research at the Courant Institute of Mathematical Sciences of New York University. He was also a Fulbright senior scholar at Hebrew University in Jerusalem. Dr. Brooks spoke often at scientific conferences and published 80 mathematical papers in journals that included Topology, American Journal of Mathematics, Duke Mathematical Journal and St. Petersburg He was also visiting lecturer at a number of universities internationally. His honors included an Alfred P. Sloan fellowship, a Guastella fellowship and Technion's Taub Prize for Excellence in The obituary was published in the Washington Post
{"url":"http://www.math.technion.ac.il/Site/people/inmemoriam/Robert-Brooks/","timestamp":"2014-04-17T01:55:36Z","content_type":null,"content_length":"3452","record_id":"<urn:uuid:9d6a1d5c-4d41-44e3-860f-1a62b3e1deab>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
1 migrant needed to prevent genetic divergence - Gene Expression | DiscoverMagazine.com In the survey below I asked if you knew about how many migrants per generation were needed to prevent divergence between populations. About ~80 percent of you stated you did not know the answer. That was not totally surprising to me. The reason I asked is that the result is moderately obscure, but also rather surprisingly simple and fruitful. The rule of thumb is that 1 migrant per generation is needed to prevent divergence.* It doesn’t tell you much in and of itself of course. But if you think about it you can inject that fact into all sorts of other population genetic phenomena. For example, to have selection across two populations which is not reducible to selection within those populations (i.e., inter-demic selection) you need group-level genetic differences. These differences can be measured by the Fst statistic. In short the value of Fst tells you the proportion of variation which can be attributed to between-group differences (e.g., Fst across human races is ~0.15). For natural selection to have any adaptive effect you also need heritable variation. If you have lots of heritable variation selection can be weaker, while if you have little heritable variation selection has to be very strong (see response to selection). Fst is a rough gauge of heritable variation when you are evaluating group level differences. An Fst of 1.0 would imply that the groups are nearly perfectly distinct at the loci of interest, while an Fst of 0.0 would imply that the groups are not genetically distinct at all. With no distinction selection would have no efficacy in terms of driving adaptation. All this is a long way to saying that the 1 migrant rule is one reason that evolutionary biologists take a skeptical position in relation to group selection. It tends to quickly erase the variation which group selection depends upon. To make it concrete here is the equation which you use to generate the equilibrium F statistic: In this formula N = the population size, and m = the proportion of migrants within the population within a given generation. Nm then works out to be the number of migrants in any given generation. So 1 migrant per generation would mean for 1,000 individuals m = 0.001. For 100, the m = 0.01. To see the power of a given number of migrants per generation on long term Fst, the measure of between population difference, I’ve plotted some computed results below (Fst y-axis, Nm on the x-axis). This should make intuitive sense. If there is no migration (gene flow) between populations then over the long term they become perfectly distinct. As you increase migration naturally that is going to homogenize differences between populations. But I suspect the question you may still have is how is it that only a few individuals are necessary in even large populations to prevent differentiation? Here the intuition is simple. In a neutral scenario between-population differences emerge as gene frequencies change over time. The generation to generation change is inversely proportional to population. This is simply the sample variance or transmission noise. The expected deviation is going to be proportional to 1/N, where N is the population (2N for diploid). As N gets rather large you converge upon zero. So as the population gets very large there is less and less divergence which may occur in one given generation. In contrast you have a lot of generation to generation variation, and rapid change in frequency, in a small population. So why only 1 migrant? In a large population 1 migrant does not effect much change, but much change is not necessary. In a small population it has much more impact, but the generation to generation change is also much bigger. These two dynamics work at cross purposes so that the number of migrants needed remains relatively insensitive to population size. * This is the result derived from population genetics, some ecological geneticists have made the case that you may actually need 10 migrants, 1 being the lower boundary. Image credit: Wikipedia • http://www.scribd.com/doc/74944514/ Does the less recombination-prone DNA (X, Y, mtDNA) still do this? Thanks for explaining this. OK, this makes intuitive sense for a neutral scenario, where two populations are under the same selective pressures, and the only force working to separate them is genetic drift. But the question doesn’t specify a neutral scenario, it simply asks how many migrants per generation are required to prevent the divergence of two unspecified populations, so it sounds like this should include, let’s say, a species with a large population living in a rain forest and another large population on an arid savanna. It’s not at all intuitively clear to me that one migrant per generation would prevent divergence in this scenario! One small point, N is Ne here, the effective population size, which will be considerably smaller than the census population size. The proportion of migrants per generation needs to be proportionally larger by the ratio of the real to the effective population size. This also means that – to answer the first post – that the Y chromosome and mtDNA will diverge by more than the autosomes (if we ignore sex biased migration) as only one Y chromosome migrates for every four autosomes, so Fst for the Y can be much larger. • http://blogs.discovermagazine.com/gnxp a species with a large population living in a rain forest and another large population on an arid savanna. It’s not at all intuitively clear to me that one migrant per generation would prevent divergence in this scenario! selection will keep the adaptive locus/loci different. Yes, and that was my point. The question and answer as stated seem misleading to me, because they make it sound like one migrant per year is enough to stop any two interfertile populations from diverging, when, in fact, if selection pressures are different, populations can diverge drastically even with substantial migration. Clearly for any situation there will always be some level of migration that will prevent divergence, but it will often be much higher than one migrant per generation. The question should make it clear that we are only talking about the neutral scenario, not the general case of any two populations. • http://blogs.discovermagazine.com/gnxp The question should make it clear that we are only talking about the neutral scenario, not the general case of any two populations. yes, but the vast majority of cases are probably neutral. so its a good null. this is usually brought up in conservation genetics fwiw. Surely this has to be one migrant per generation who’s _successful in reproducing_, no? But in many populations the chance of an individual’s successfully reproducing is very low. So degree of contact and number of “migrants” in the strict sense might need to be much higher than one per generation, no? Or is this already the technical sense of the word “migrant” in population genetics? In which case pardon my ignorant comment. • http://blogs.discovermagazine.com/gnxp #8, i didn’t want to get into the issue of ‘effective population,’ but this is the effective migration rate. i might clarify effective population in a separate post. i wanted to keep this simple, though these questions and concerns are all useful. I find this amazing! I guess the lesson here is that genetic drift is very weak for large populations. • http://washparkprophet.blogspot.com In the random, phenotypically neutral case this rule of thumb is right. But, of course, migration between groups, groups of humans anyway, is not random. It is a quintessentially selective event driven by a careful, semiconscious weighting of phenotypic traits (and often evaluation of unexpressed genotypic traits that are inferred to be likely to be present from people related to the So, for traits that either have a selectively meaningful phenotype (or inferrable genotype) or are statistically associated with a selectively meaningful phenotype (or inferrable genotype), intergroup migration will often be assortive and enhance rather than reduce divergence between the two populations. For example, suppose that there is a genotype associated with the Big Five personality trait conscientiousness which has a fairly high level of heritability, and that two populations – herders for whom conscientiousness has little value as a phenotype, and farmers for whom it is an extremely valueable phenotype live side by side and sometimes exchange brides. A farmer with an unconscientious daughter is likely to try to marry her off to a herder in a community where she would be more highly valued than in her own. The herder with a conscientious daughter may be likely to try to marry her off to a farmer who may value her more than the men in his community. Of course, it doesn’t have to be bride exchange immediately. People who have the Big Five trait of openness to experience may preferrentially move to Vermont, while people who do not may preferrentially move to New Hampshire. Brown eyed brunettes born in Sweden where they are considered “un-Swedish looking” and perhaps unattractive, may preferrentially migrate to Germany where they are better appreciated. People born in the mountains who lack adaptations necessary for them to have healthy pregnancies and deliveries at high altitudes may migrate to lowlands, while people in lowlands who are adapted to high altitude may be more likely to stay in a community away from home after successfully having a child there. Afro-Caribbeans have historically intentionally tried to pedigree their descendants to a maximum degree of European admixture that undoes the gene pool sharing that took place at their birth. Taken to an extreme, this can speed up the process of sympatric speciation. • http://blogs.discovermagazine.com/gnxp Afro-Caribbeans have historically intentionally tried to pedigree their descendants to a maximum degree of European admixture that undoes the gene pool sharing that took place at their birth. this illustrates the problem with your argument. humans don’t have a automatic genotyping system. they use coarse predictors. as i noted above, unless the trait is a prefect reflection of the whole genome selection is going to drive differentiation on a subset of loci. The easy theoretical counter to the fact that migration quickly diminishes the variation on which group selection is the possibility that group selection pressure may be extremely high, so that it would act to find, advantage and quickly fix relatively small differences in group genetic profile. And indeed, that seems to correspond fairly well to at least some situations in recent human history and to putative situations in human prehistory, in which weaker groups would be wiped out, despite having some very adaptive individuals. I’m not necessarily advocating this position. Just putting forth as a possible case in which your point about the difficulty of group selection in a setting of limited divergence can be overcome. • http://blogs.discovermagazine.com/gnxp And indeed, that seems to correspond fairly well to at least some situations in recent human history and to putative situations in human prehistory, in which weaker groups would be wiped out, despite having some very adaptive individuals. be specific, don’t be general. what situations are you thinking of? I don’t think it’s absurd here to mention the holocaust. (Godwin’s Law shouldn’t apply here, since I’m not calling anyone a Nazi.) Or to use a similar example, Katyn – thousands of presumably quite adaptively fit individuals wiped out because of the group they were a part of. (I use “weaker groups” strictly in a sense of what actually happened – and not any higher “ideal” sense of group weakness or strength.) In general, I’d argue that war is and has been like this typically for centuries – that men lose because their flank is turned, through little fault of their own as individuals, and so fitness in the sense of successful reproduction has in great part depended on both the luck of what group you happened to belong to and to group dynamics that may well be partially genetically determined. One’s reproductive success may have depended in great degree on producing brave brothers and cousins and on belonging to a group predisposed to a high degree of loyalty along “fictive kinship” Obviously there are many other aspects of fitness, and in addition to competing against other groups, humans compete against other individuals in their own group, fellow villagers, cousins, sibling rivalry, etc. It’s hard for me to understand why geneticists are so resistant to SOME degree of group fitness, since species individuals are aggregations of cooperative cells and genes that clearly survive and reproduce in large part dependent on whether the other genes and cells around them are adaptive and successful. • http://blogs.discovermagazine.com/gnxp I don’t think it’s absurd here to mention the holocaust. (Godwin’s Law shouldn’t apply here, since I’m not calling anyone a Nazi.) Or to use a similar example, Katyn – thousands of presumably quite adaptively fit individuals wiped out because of the group they were a part of. (I use “weaker groups” strictly in a sense of what actually happened – and not any higher “ideal” sense of group weakness or strength.) these are very weak to worthless examples from what i can tell. but be explicit about your genetic parameters. group selection is not just one group killing another. It’s hard for me to understand why geneticists are so resistant to SOME degree of group fitness do you know the math? if you don’t, then it may be hard. the math isn’t hard btw in a technical sense. but if you don’t know the basics you shouldn’t be offering up your opinion on the issues (or at least just say you’re lay person’s perspective blah, blah). Fair enough. I feel I have a good intuitive sense of the math – following your argument and understanding which direction the arrows point given certain gene flows. I don’t have time to work it out fully for myself just now. You’ve given a fuller account of your view in your more recent post on monogamy (monogyny?) which is really interesting. I do recognize that you’ve got a much more solid understanding of the math. So I’m writing back here only by way of disengaging respectfully given time constraints. I love the blog and only venture to comment every now and then. • http://blogs.discovermagazine.com/gnxp
{"url":"http://blogs.discovermagazine.com/gnxp/2012/01/1-migrant-needed-to-prevent-genetic-divergence/","timestamp":"2014-04-21T12:16:11Z","content_type":null,"content_length":"351419","record_id":"<urn:uuid:14e848ab-3df1-4cd5-993b-90ff447d05a5>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: May someone please help me? Pretty please? It's attached below. • one year ago • one year ago Best Response You've already chosen the best response. Best Response You've already chosen the best response. hmm,whats that ? Best Response You've already chosen the best response. I'm not downloading it. Best Response You've already chosen the best response. It's microsoft word 2010. Best Response You've already chosen the best response. "Windows cannot open this program" Copy and paste it here. Best Response You've already chosen the best response. Ok, just give me a moment. Best Response You've already chosen the best response. Kelly wrote the following steps to prove the equation: Given: 2(x – 3) = 5x + 3 Prove: x = –3 Step Mathematical Statement Justification 0 2(x - 3) = 5x + 3 Given 1 2x - 6 = 5x + 3 Distributive Property 2 – 6 = 3x + 3 Subtraction Property of Equality 3 3x + 3 = –6 Transitive Property of Equality 4 3x = –9 Subtraction Property of Equality 5 x = –3 Division Property of Equality Which step of justification is incorrect, and what should the justification for that step be to solve the equation? Step 1; Commutative Property of Multiplication Step 2; Associative Property of Addition Step 3; Symmetric Property of Equality Step 4; Combine Like Terms Best Response You've already chosen the best response. There is some stuff missing. Best Response You've already chosen the best response. That's what they gave me. Best Response You've already chosen the best response. "Distributive property." No need for that. H should have added the like terms. Best Response You've already chosen the best response. Actually, the property of equality or adding like terms; It's a matter of how you go at it. Best Response You've already chosen the best response. So its the second option. Best Response You've already chosen the best response. Thank you!!!!!! Best Response You've already chosen the best response. Watch, I'm going to add like terms. 2x - 6 = 5x + 3 2x - 5x = +6 + 3 3x = 9 x = 3 Best Response You've already chosen the best response. Thanks!!! :) I now understand fully! Best Response You've already chosen the best response. All though both operations are correct; they're only going to ask for one, right? Best Response You've already chosen the best response. Best Response You've already chosen the best response. Is it multiple option or something you write to report? Best Response You've already chosen the best response. multiple choice Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50394ebce4b05acbc95fb46b","timestamp":"2014-04-18T08:23:29Z","content_type":null,"content_length":"72210","record_id":"<urn:uuid:9befc179-7e72-40f1-8e0a-cc8db0261ab8>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 8 Wanstead Engineering Case 9 The annual bonus for senior managers at Wanstead Engineering was first agreed in 1998. Since then the company has seen no need to make any changes to the way in which the bonus is calculated. Both shareholders and managers agree the bonus is importa... Newton's law 7. A uniform flexible chain of length l, with weight per unit length lamba, passes over a small, frictionless, massless pulley. It is released from a rest position with a length of chain x hanging from one side and a length l-x from the other side. Find the acceleration a as a... two dimensional kinematics A freight train is moving at a constant speed of Vt. A man standing on a flatcar throws a ball into the air and catches it as it falls. Relative to the flatcar, the initial vellocity of the ball is Vib straight up. a. What are the magnitude and direction of the initial velocit... two dimensional kinematics A hammer slides down a roof of angle theta (with respect to the ground). It slides along the roof distance D. As it leaves the roof, a height H above the ground, it has velocity in both directions. Find how far from the base of the building the hammer lands. two dimensional kinematics If you can throw a stone straight up to a height of 16m, how far could you throw it horizontally over level ground? Assume the same throwing speed and opitimum launch angle. two dimensional kinematics A jetliner with an airspeed of 1000 kmlhr sets out on a 1500 km flight due south. To maintain a southward direction, however, the plane must be pointed fifteen degrees west of south. Ifthe flight takes 1 00 min, what is the wind velocity? A bus with a vertical windshiled moves along in a rainstorm at speed V relative to the ground. The raindrops fall vertically with a terminal speed of V' relative to the ground. At what angle do the raindrops strike the windshield? A rifle with a muzzle velocity of Vo shoots a bullet at a target R ft away. How high above the target must the rifle be aimed so that the bullet will hit the target?
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=ringo","timestamp":"2014-04-18T09:44:26Z","content_type":null,"content_length":"8276","record_id":"<urn:uuid:f0db5902-f4e4-45fb-a3fa-2e6f6f463a88>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
Online Dictionary of Crystallography From Online Dictionary of Crystallography Superespace, Hyperespace (Fr.) Superspace is a Euclidean vector space with a preferred (real) subspace V[E] that has the dimension of the physical space, usually three, but two for surfaces, and one for line structures. Superspace is used to describe quasi-periodic structures (See aperiodic crystal). We denote the dimension of the physical space by m. Then a function ρ(r}) is quasi-periodic if its Fourier transform is given by $\rho( r)~=~\sum_{ k \in M^*} \hat{\rho}( k \exp (2\pi i k. r ),$ where the Fourier module M^* is the set of vectors (Z-module) of the form $k~=~\sum_{i=1}^n h_i a_i^* , ~~~(h_i~{\rm integers}),$ for a basis a[i]^* with the property that $\sum_i h_i a_i^*=0$ implies h[i]=0 (all i) if the indices h[i] are integers. The number n of basis vectors is the rank of the Fourier module. The vectors r[s] have two components: r and r[I]. The relation is written as $r_s~=~( r,~ r_I)$ In the superspace there is a reciprocal basis a[si]^* such that a[i]^* is the projection of a[si]^* on the subspace V[E]. The reciprocal lattice Σ in superspace then is projected on the Fourier module M<su>*</sup>. The function ρ(r) then can be embedded as $\rho_s(r_s)~=~\sum_{k_s\in\Sigma^*} \hat{\rho}(\pi k_s= k) \exp (2\pi i k_s.r_s).$ The function ρ(r) is the restriction of ρ[s](r[s]) to the subspace $\rho( r)~=~\rho_s( r,0).$ Because the Fourier components in superspace belong to reciprocal vectors of Σ^*, the function ρ[s](r[s]) is lattice periodic. Its direct lattice Σ is the dual of Σ^* and its basis vectors are a[is] satisfying a[si].a[sj]^* = δ[ij]. Therefore, the symmetry group of ρ[s](r[s]) is a space-group in the n-dimensional superspace, where the dimension n is equal to the rank of the Fourier module. In the superspace the usual crystallographic notions and techniques may be applied. For point atoms, the function ρ[s](r[s]) is concentrated on surfaces of co-dimension equal to the dimension of the physical space (See Figure). Figure Caption: An example of the embedding of a one-dimensional modulated chain with two modulation wave vectors. It consists of an array of two-dimensional surfaces, with three-dimensional lattice periodicity. The intersection of the surfaces with the physical space (a line) gives the positions of the modulated structure.
{"url":"http://reference.iucr.org/dictionary/Superspace","timestamp":"2014-04-18T15:39:43Z","content_type":null,"content_length":"15101","record_id":"<urn:uuid:58fb479b-6ae0-4991-bd99-af3eca7a7a0c>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
[Haskell-cafe] About ($) robert dockins robdockins at fastmail.fm Thu Jun 2 15:08:13 EDT 2005 > Hello there, > name's Frank-Andre Riess. Nice to meet you m(_ _)m > So, well, my first question on this list is admittedly somewhat simple, but I > managed to delay it long enough and now I think I should ask about it: Does > ($) have any relevance at all except for being a somewhat handier version of > parentheses? That's mostly how it is used (although some will say that it is a terrible idea), but one can also do some pretty neat tricks with it as a higher-order function. Eg, zipWith ($) Is a function which takes a list of functions and a list of arguments and applies the functions pairwise with the arguments. In addition, because of the way the zip* functions work, you can create an infinite cycle of functions to apply to some arguments as in: zipWith ($) (cycle [sin,cos]) [1..5] which is equivalent to: [sin 1,cos 2,sin 3,cos 4,sin 5] I'm sure there are other more esoteric things, but this is about as complex as I try to go to avoid severe headaches :) Robert Dockins More information about the Haskell-Cafe mailing list
{"url":"http://www.haskell.org/pipermail/haskell-cafe/2005-June/010277.html","timestamp":"2014-04-19T16:06:45Z","content_type":null,"content_length":"3507","record_id":"<urn:uuid:0ca9b012-441b-4397-9b7f-35aa39405bb7>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Feynman Diagram 1. The problem statement, all variables and given/known data Draw the Feynman diagram for: e[tex]^{-}[/tex] + p [tex]\rightarrow[/tex] n + [tex]\upsilon[/tex][tex]_{e}[/tex] Sorry, I dunno how to make everything even but you should get the gist. 2. Relevant equations 3. The attempt at a solution Well, what I have is the electron and proton annihilating(and this is what confuses me) into a Z[tex]^{o}[/tex] which in turn becomes the neutron and electron neutrino. Is this correct?
{"url":"http://www.physicsforums.com/showpost.php?p=1981938&postcount=1","timestamp":"2014-04-17T18:33:38Z","content_type":null,"content_length":"8885","record_id":"<urn:uuid:070c9594-7307-49ac-965e-b478caa04177>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimize Class E Power Amplifiers Amplifier efficiency is essential not only for mobile devices, but increasingly to conserve power consumption in wireless communications base stations and cell sites. The Class E amplifier in this article produced efficiency of 60 percent from 1.9 to 2.2 GHz using a standard packaged transistor. 1 The techniques used to design and build this amplifier can be employed to design Class E amplifiers at any frequency of interest. The Class E amplifier has been extensively studied and is relatively easy to implement.^2-5 By allowing the drain shunt capacitance to be discharged when the instantaneous RF voltage crosses zero switching losses can potentially be eliminated. This makes 100-percent efficiency theoretically possible, although in practice component tolerances and finite on-resistance of the switching transistor limit efficiency. To explore the limits of the Class E topology, an amplifier using lumped-element matching components was designed based on a 10-W GaAs pHEMT model MRFG35010 packaged transistor from Freescale Semiconductor. It achieves its high performance in part by presenting the correct harmonic terminations^2 transformed to the device package plane. The amplifier's drain efficiency was greater than 74 percent when fabricated on a standard printed-circuit board (PCB) at 2.55 GHzone of the highest efficiencies reported using a packaged transistor at this frequency. The approach was then extended to achieve greater bandwidth and produced average efficiency of 56.8 percent from 1.8 to 2.7 GHz and 60 percent from 1.9 to 2.2 GHz. Classical analysis of the Class E switching amplifier mode was performed in the time domain^3,4 using a simplified version of the time-domain analysis^5 to obtain the initial Class E design parameters. The analysis was extended to include the package impedances of the transistor, and the harmonic terminations were then determined at 2.5 GHz and realized using discrete components. The broadband design was realized using distributed circuit matching networks with the matching circuit designed to provide optimal harmonic impedances from 1.8 to 2.7 GHz. The simple circuit for this analysis (Fig. 1) was modeled as an ideal switch in parallel with a capacitance (C[p]) consisting of the transistor's output capacitance (C[ds]) in parallel with capacitance that must be added to obtain the correct switching times for the circuit (that is, to yield zerovoltage switching to reduce switching losses). The output circuit contains a series resonator so that only fundamental current flows in load resistor RL. The optimal load for RL was transformed from 50 Ohms in the design. The analysis assumes that the switch is either "on" (shorting the output capacitance so all current flows in the switch element) or "off" (an open circuit), and the output capacitance is part of the resonator circuit. As a result, the circuit is characterized by two resonant circuits with different loading or quality factors. Timedomain analysis is performed over a range of switching frequencies and switch conduction angles (a) for a given drain bias on the transistor. A conduction angle of 110 to 120 deg. yields optimum efficiency and power output.^5 The analysis also showed that by careful choice of operating frequency, conduction angle, and circuit parameters, all parallel capacitance can be delivered by the output capacitance of the device, eliminating the need for an additional capacitor at its output. The transistor package was modeled as a T-network equivalent circuit between the device output plane and the circuit. The series inductors represent the bond wires and the package tab, and the shunt capacitor is a parasitic capacitance to ground of the connecting components through the plastic package. With these package components in place, only C[ds] is shorted when the switch is closed. This difficulty was overcome in the analysis by including the package impedances in the resonant circuit when the switch is closed. When the switch is open, the package components are accommodated as part of drain shunt capacitance C[p]. The transistor output capacitance can be combined with the package equivalent circuit components to produce a transmission matrix between the switch and the Class E amplifier circuit. This transmission matrix is then factored into an effective capacitance (C[eq]), and a reduced transmission matrix (Fig. 2)^6. Equation 1 is ultimately obtained, which shows that the package parasitics increase the capacitance at the transistor drain: The equivalent capacitance will always be greater than Cds, and by a suitable choice of operating frequency, drain bias, and conduction angle, the required parallel capacitance for Class E operation can be met by the equivalent shunt capacitance C[eq]. The equivalent shunt capacitance presented to the transistor can be tuned with the bond wire.^1 In Class E, the instantaneous drain voltage can be more than three times the DC drain bias (V[dd]). As the recommended drain bias for the MRFG35010 is +12 VDC, it was derated to +8 VDC. With this bias, a conduction angle of 128 deg. yields optimum efficiency and power output and the total parallel capacitance is 7 pF at 2.5 GHz. This operating frequency was chosen so that C[p] = C[eq] and no additional external capacitor is required. Other parameters for this fundamental frequency are: load resistance, RL, of 4 Ohms; load angle = 51 deg; resonator formed of Ls = 1 nH and C[s] = 4 pF; and tuning inductance (dL)=0.3157 nH. The assumption for Class E design is that the current waveform at the drain of the device is created by the applied drive and bias, and the voltage waveform at the drain is created from the interaction of the current components with the frequency-dependent impedance presented at the drain^2. At microwave frequencies the number of available harmonic impedances is limited and the self-resonant frequencies of components have an effect on performance. The limited number of harmonics constrains the obtainable efficiency because the overlap of voltage and current waveforms increases as the number of harmonics decreases. The impedance angle at the harmonics must be 90 deg. to prevent generation of significant power in the harmonics, and this design assumes the presence of fundamental, second, and third harmonics. The impedance to the drain of the device at the fundamental, second, and third harmonics becomes Z[1] = R[1] +jX[1], Z[2] = jX2, and Z3 = jX3. From ref. 2, Z1 = (1.526+j1.1064)R[L]; Z[]2 = -j2.7233R[L]; and Z[3] = -j1.8155R[L]. With these terminations and using only three harmonics, the maximum theoretical efficiency is 82 percent.^2 Using the above values and ideal passive components for the matching network, a simulation was performed with the nonlinear model for the MRFG35010, which yielded drain efficiency of 81.5 percent and poweradded efficiency of 78 percent at an output power of +37.5 dBm (5.6 W). Gain was more than 12.5 dB. The next step was to replace ideal components with real component models that include losses and resonances. The harmonic load angles and simulated efficiency for a +25-dBm input is shown in the table as the load network is converted from ideal to real model components. Load angles of about -90 deg. can be maintained with real components, at least at the second (most significant) harmonic. High drain efficiencies can be achieved from the device without modifying the in-package network. The power-added efficiency with this load network is more than 72.5 percent. Continue to page 2 Page Title The drain voltage and current waveforms of Fig. 3 are obtained at the drain contact of the nonlinear FET model and include the effects of the drain capacitance, which cannot be deembedded. The waveforms show that the voltage and current generally do not overlap, which indicates a high-efficiency mode. Optimum input match was found from source-pull simulation at +25-dBm drive. Only the fundamental input match was considered. The amplifier was built on a 30-mil-thick RF printed circuit board (PCB) with dielectric constant of 3.55. Lumped components were initially chosen for highest self-resonant frequency (SRF) for best However, when tuning the second and third harmonics, careful choice of the component resonant frequency can optimize their harmonic impedances. A comparison between the target impedances and the final tuned impedances for the fundamental, second, and third harmonic frequencies (Fig. 4) shows close agreement with the simulated values. The tuned half-board was then transferred to the complete amplifier, and drive-up measurements were conducted. The measured drain efficiency is shown in Fig. 5 for a range of gate biases and a drain bias of +8 VDC. The highest efficiency is more than 70 percent for all biases and maximum power of +35 dBm. Efficiency of more than 74 percent was achieved at drain bias of +6 VDC when output power was reduced to +32 dBm. A bandwidth of about 100 MHz was observed (2.45 to 2.55 GHz). Extending the amplifier to a broadband design was based on a distributed output load network.^7 If loading up to the third harmonic is considered, the maximum bandwidth is determined when the third harmonic of the low frequency (f[L]) is coincident with the second harmonic of the high frequency (f[H]): Bandwidth = f[H] f[L] 3f[L] = 2f[H] (2) Ideal Class E harmonic load impedances at the internal device drain were calculated for the amplifier's intended range based on the simplified timedomain analysis method. These ideal fundamental and harmonic impedances were then transformed to the external package drain lead through Cds and the package parasitics. Genesys PCB design software was used to synthesize an initial broadband output matching network topology based on ideal lumped elements (Fig. 6). An equivalent matching network was then implemented in microstrip as a distributed circuit and the line lengths and widths were optimized using an EM simulator to achieve the desired impedances. The frequency responses of the synthesized circuit, optimized distributed circuit, and measured PCB are shown in Fig. 7, and are in good agreement. 1. J. Wood, "Overview of Class D, Class E, and Class F power amplifiers based on a finite number of harmonics," presented at the Workshop on Transmitter Design for High Power Efficiency, IEEE Radio & Wireless Symposium, Orlando, FL, 2008. 2. F. H. Raab, "Class E, Class C, and Class F Power Amplifiers Based upon a Finite Number of Harmonics," IEEE Trans. Microwave Theory & Techn., Vol. 49, No. 8, 2001, pp. 1462-1468. 3. N. O. Sokal and A. D. Sokal, "Class E A new class of high efficiency tuned single-ended switching power amplifiers," IEEE J. Solid-State Circuits, Vol. 10, No. 3, 1975, pp. 168-176. 4. F. H. Raab, "Idealized Operation of the Class E Tuned Power Amplifier," IEEE Transactions on Circuits & Systems, Vol. 24, No. 12, 1977, pp. 725-735. 5. S. C. Cripps, RF Power Amplifiers for Wireless Communications, Artech House, Norwood, MA, 1999. 6. G. Collins, J. Wood, M. Bokatius, and M. Miller, "A Practical Hybrid Class E Amplifier Design," IEEE Topical Symposium on Power Amplifiers, Orlando, FL, January 2008. 7. J. K. A. Everard and A. J. King, "Broadband power efficient Class E amplifiers with a non-linear CAD model of the active MOS device," Journal of Institute of Electrical & Radio Engineering, Vol. 57, No. 2, 1987, pp. 52-58.
{"url":"http://mwrf.com/print/components/optimize-class-e-power-amplifiers","timestamp":"2014-04-21T08:32:09Z","content_type":null,"content_length":"26655","record_id":"<urn:uuid:d346c800-389a-46ac-82ab-537dbd5e7c08>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple Calculus November 16th 2009, 12:38 PM Simple Calculus My math class has just started getting into calculus and I have a few questions that I'd like to ask. Our school uses a program called TLM that spits out some random math problems from a huge database, and here are a couple of questions that I'm having trouble with. Given the function $f(x) = 4x^2-1x+4$ evaluate $f(2+\Delta x)$ For this question should I just pop in $2+\Delta x$ for x? The question then asks me what is $\Delta Y$ and $\Delta y / \Delta x$ when x changes from 1.6 to 2.2 For delta y I just subbed in the numbers and got 15 For finding the dy/dx I'm drawing a complete blank. I feel like it should be easy but I can't figure out what to do. November 16th 2009, 05:10 PM Scott H You are correct for the first question: $f(2+\Delta x)=4(2+\Delta x)^2-(2+\Delta x)+4.$ Your answer to the second question also looks correct. To find $\frac{\Delta y}{\Delta x}$, we just divide the answer by $\Delta x=2.2-1.6$. To find $\frac{dy}{dx}$, we evaluate what is called a limit: $\frac{dy}{dx}=\lim_{\small \Delta x\rightarrow 0}\frac{f(x+\Delta x)-f(x)}{\Delta x}.$ We can do this by simplifying the fraction and noting that certain terms containing $\Delta x$ approach $0$ as $\Delta x$ itself approaches zero, leaving us with the derivative of the function $f Hope this helps! November 17th 2009, 03:59 PM Thanks for the answers Scott, but I'm still having a bit of trouble grasping the concept for the second part of my second question. $y=5x^2+6x+21$ when x changes from 1.6 to 2.2. Find $\frac {\Delta y} {\Delta x}$ I don't really understand what "rule" or equation to use to find the answer. It ended up being 25. Again, any help is much appreciated! EDIT: I found out this is the delta process, but I'm still attempting to wrap my head around it.
{"url":"http://mathhelpforum.com/calculus/114970-simple-calculus-print.html","timestamp":"2014-04-16T21:12:33Z","content_type":null,"content_length":"8443","record_id":"<urn:uuid:1a06328c-a13d-4ee6-8133-a902365bf0e7>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Generating random samples without repeats Paul Moore pf_moore@yahoo.co... Fri Sep 19 04:08:20 CDT 2008 Robert Kern <robert.kern <at> gmail.com> writes: > On Thu, Sep 18, 2008 at 16:55, Paul Moore <pf_moore <at> yahoo.co.uk> wrote: > > I want to generate a series of random samples, to do simulations based > > on them. Essentially, I want to be able to produce a SAMPLESIZE * N > > matrix, where each row of N values consists of either > > > > 1. Integers between 1 and M (simulating M rolls of an N-sided die), or > > 2. A sample of N numbers between 1 and M without repeats (simulating > > deals of N cards from an M-card deck). > > > > Example (1) is easy, numpy.random.random_integers(1, M, (SAMPLESIZE, N)) > > > > But I can't find an obvious equivalent for (2). Am I missing something > > glaringly obvious? I'm using numpy - is there maybe something in scipy I > > should be looking at? > numpy.array([(numpy.random.permutation(M) + 1)[:N] > for i in range(SAMPLESIZE)]) And yet, this takes over 70s and peaks at around 400M memory use, whereas the equivalent for (1) takes less than half a second, and negligible working memory (both end up allocating an array of the same size, but your suggestion consumes temporary working memory - I suspect, but can't prove, that the time taken comes from memory allocations rather than computation. As a one-off cost initialising my data, it's not a disaster, but I anticipate using idioms like this later in my calculations as well, where the costs could hurt more. If I'm going to need to write C code, are there any good examples of this? (I guess the source for numpy.random is a good place to start). More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-September/037466.html","timestamp":"2014-04-21T04:32:32Z","content_type":null,"content_length":"4544","record_id":"<urn:uuid:58734c8b-d026-4000-b8a2-85604c63c924>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Optimal Temperature Policy for First-Order Reversible Reactions Consider the first-order reversible reaction involving chemical species and : . The reaction rate for is given by: , where is the fraction of converted, and are the reaction rate constants, and and are the initial concentrations of species and with . When this reaction takes place in a perfectly mixed batch reactor, the mass balance is given by the following equation: where , and are the final reaction time and the time variable, , for and , and are the activation energy and pre-exponential factor, is the ratio of the activation energies, and is the dimensionless This mass balance can be integrated to give: with . The value of the optimum temperature or can be found for given values of , , and from: at . After the optimum value of is found, the actual optimal temperature is given by: ; the reaction is assumed to be exothermic (i.e., ). This assumption is important for the Demonstration to perform correctly because only for exothermic reactions can one find an optimum temperature. Indeed, high values increase the rate constant but lower the equilibrium conversion. Thus, one must reconcile between competing thermodynamic and kinetic considerations by choosing an appropriate optimal temperature. High is better in the beginning where equilibrium limitations are not important and low is better later when one approaches equilibrium The Demonstration plots the optimal dimensionless temperature, , and the corresponding final conversion, , versus parameter for user-set values of the ratio of the activation energies, . C. D. Fournier and F. R. Groves, "Rapid Method for Calculating Reactor Temperature Profiles," Chem. Eng. (3), 1970 p. 121.
{"url":"http://demonstrations.wolfram.com/OptimalTemperaturePolicyForFirstOrderReversibleReactions/","timestamp":"2014-04-16T16:14:54Z","content_type":null,"content_length":"48412","record_id":"<urn:uuid:1c93cb60-b188-4731-aea1-846b4f49c0ca>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] Collision of particles - coefficient of restitution July 4th 2008, 01:35 AM #1 Jun 2007 [SOLVED] Collision of particles - coefficient of restitution If there is a particle X of mass 3m which is moving with velocity Ui on a smooth horizontal table and it collides with a particle Y of mass 5m which is at rest (and we are given that the coefficient of restitution between the particles is e, then how can I find the velocities of X and Y immediately after the collision in terms of e and u? Also is there a way of showing the speed of X immediately after the collision does not exceed 3u/8 for all values of e? If there is a particle X of mass 3m which is moving with velocity Ui on a smooth horizontal table and it collides with a particle Y of mass 5m which is at rest (and we are given that the coefficient of restitution between the particles is e, then how can I find the velocities of X and Y immediately after the collision in terms of e and u? $\text{Coefficient Of Restitution (e)} = \frac{\text{Velocity of Seperation}}{\text{Velocity of Approach}}$ You are told that particle Y is at rest hence $U_y = 0$. $e = \frac{V_x + V_y}{U} \implies V_x+V_y = eU \ \ \ ---(1)$ Also, by Conservation of Linear Momentum; Momentum Before = Momentum After. Therefore: $(3m)(U) + (5m)(0) = (3m)(V_x) + (5m)(V_y) \implies 3U = 3V_x + 5V_y \ \ \ ---(2)$ Solve The simultaneous equation to get Velocity of $x$ and $y$ in terms of $e$ and $U$. Depending on your result for $V_x$, you may get a negative answer which will mean that the particle has changed direction. This implies that you can produce an inequality where $V_x \le 0$ and when solved you should get it to show that it does not exceed $\frac{3u}{8}$ Also, you can see this thread which has a similar question that I posted once and received help on: http://www.mathhelpforum.com/math-he...stitution.html July 4th 2008, 01:51 AM #2 July 4th 2008, 01:57 AM #3
{"url":"http://mathhelpforum.com/advanced-applied-math/43007-solved-collision-particles-coefficient-restitution.html","timestamp":"2014-04-18T13:36:31Z","content_type":null,"content_length":"39839","record_id":"<urn:uuid:bb8cff84-03d9-46d9-bf0a-6a3152e284e9>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
Instead of producing a series of numbers distributed uniformly over an interval, we need data following one of the classical distributions such as the normal distribution (i.e. the numbers should give a "bell curve"). The easy solution is use the library from . Let's try generating numbers with a Gaussian distribution (a normal distribution with mean 0 and standard deviance 1): > (require (planet "random.ss" ("schematics" "random.plt" 1 0))) > (random-gaussian) > (random-gaussian) If we want to mimick a stocastic variable, we can use and a source of random bits. > (define X (random-source-make-gaussians default-random-source)) > (X) > (X) Alternatively, one could simple define > (define X (lambda () (random-gaussian))) Given a distribution, lookup an algorithm in a statistics reference. If you can't find an algorithm, consult a numerical analyst. Let's examine the case of the normal distribution. The two parameters (mean) and (standard deviance) determines a specific normal distribution. ; derived from example in the documentation of SRFI27 (require (lib "27.ss" "srfi")) (define (make-normal-distributed-variable mu sigma) (let ((mu (* 1.0 mu)) (sigma (* 1.0 sigma)) (next #f)) (lambda () (next (let ((result next)) (set! next #f) (+ mu (* sigma result)))) (else (let loop () (let* ((v1 (- (* 2.0 (random-real)) 1.0)) (v2 (- (* 2.0 (random-real)) 1.0)) (s (+ (* v1 v1) (* v2 v2)))) ((>= s 1.0) (loop)) (else (let ((scale (sqrt (/ (* -2.0 (log s)) s)))) (set! next (* scale v2)) (+ mu (* sigma scale v1)))))))))))) An example of usage: > (define X (make-normal-distributed-variable 0 1)) > (X) > (X) > (X) If you are unsatisfied with the fact that you get the exact same numbers as above, then randomize the source of random numbers: > (random-source-randomize! default-random-source) The algorithm used is the polar Box Muller method. The algorithm takes two independent uniformly distributed random numbers between 0 and 1 (represented in the code as ) and generates two numbers with a mean of my and standard deviation sigma. The method produces two numbers at a time, so since we only need one, the second is saved for later in the variable Note that the Perl Cookbook includes an interesting discussion of converting a set of values (and weights) into a distribution. This should also be converted to Scheme and shown here. Mathematically-inclined Schemers should also take a good look at , which contains these and many other statistical methods. It's also worth noting that if a bell-curve type thing is all you're looking for, generating two or more random numbers and taking the average will tend to favor the middle values. For example, consider a pair of dice: there is exactly one combination out of 36 that yields 2 and one that yields 12 (the outlying values), while there are six combinations that yield 7 (the center value). You could also use a weighed average to reduce the effect if averaging two random numbers produces a bell curve which is too steep for your - 14 May 2004 -- - 01 Jun 2004 -- - 12 Dec 2006 [TODO: Move the following remarks to another recipe] If you wish to randomly select from a set of weights and values, convert the weights into a probability distribution, then use the resulting distribution to pick a value. If you have a list of weights and values you want to randomly pick from, follow this two-step process: First, turn the weights into a probability distribution with weight_to_dist below, and then use the distribution to randomly pick a value with weighted_rand: [TODO: Use the random-source-make-discretes from random.ss to solve the above problem]
{"url":"http://schemewiki.org/view/Cookbook/NumberRecipeBiasedRands","timestamp":"2014-04-19T01:49:49Z","content_type":null,"content_length":"17859","record_id":"<urn:uuid:e0191dbe-7e2d-4a39-9119-861166706b08>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
GHS Math 2012 Monday, April 22, 2013 Wednesday, February 13, 2013 Well , all my students have decided on an innovation day project. They will start working on them this Friday. They are doing all sorts of activities. Learning massage, writing songs, doing art work, minecraft education tutorials, cooking, and more! I have great students. I can't wait to see the end products. My Geometry classes have been working on Trigonometry and they have struggled. They had to turn in an infographic with Properties of Triangles. My expectations were pretty high. I was a little disappointed. BUT, we move on! Monday students will start a bridge building design and engineering project. I have done these projects in the past. However, I have never used manilla file folders as the construction material. We will talk about the engineering and measurement of stress on the parts of the bridge. Westpoint Bridge Building Projects are very well organized and worth the time. They also have a great downloadable Bridge Design program available for students to test their designs. This unit will fit nicely with our recently completed unit on Triangles and Trigonometry. In MATH APPS, we have completed two units, one on game playing and probability. Have you ever played Pig Dice? What is your strategy for winning ? Our other unit has been: FERMI problems. I have to give credit to Ethan Bolker and Maura Mast, authors of COMMON SENSE MATHEMATICS: a quantitative reasoning course. They have given me permission to use the material from their College Quantitative Reasoning Course. I love the way they present great REAL LIFE MATH. I have been learning along with the students. Did you know you can pay for a Harvard Degree by collecting cans and bottles. It has been done and we proved it is possible. OR, did you know you can actually get paid to plant trees AND you can make great money doing it? How many trees do you think you could plant in a day? How about 4,000 trees? The range is between 1,000 and 4,000 trees. Can you verify that we throw away enough plastic silverware to circle the globe 300 times? We have looked into drop-out rates, FBI fingerprints, and more. Let me say, "You can't believe everything you read on the internet, but you can do a lot of MATH checking facts! I am having a great time teaching this small class of students. We have been blogging, tweeting, researching, calculating, and THINKING about numbers and math. Our newest unit is about MONEY. We are studying Payday Loans, Rent-to-Own, Mortgages and foreclosures, Interest rates, and cost of living comparisons now vs 50 years ago. Again, I would love to take credit for this unit, but I was able to compile this unit from a book called 25 Real Life Math Investigations that will ASTOUND Teachers and Students . I can't express how fun it is to TEACH MATH IN CONTEXT of real life. I also LOVE TO SEARCH for classroom resources. I remember the drudgery of teaching out of a textbook every day, trying to answer the question, "When am I ever going to use this?" I don't hear that question when we are going through these types of units. It is not always easy and it takes a lot of TIME. However, it IS worth it! I love teaching and spending time with my amazing students. I can't think of a better way to spend my day! Posted by Rhoadzie at 7:44 PM 2 comments: Posted by Rhoadzie at 10:31 AM No comments: Friday, February 1, 2013 Today you need to finalize your "INNOVATION DAY" Project Details in your blog. You need to describe your project including the following: WHAT is the main focus of your project? - Summarize in ONE SENTENCE, what your project is about. WHY is this important to you? - describe why you chose this project, how it reflects your interests and how it fits into your future plans. HOW are you going to complete this project? - make a list of things you need to complete this project. Include the details of the format of your project. Will you make a video, Powerpoint, recording, an ACTUAL product, or some other product. WHEN do you need to be done with your details? - set goal dates to reach your deadline. You have the following dates in class to work on your project: February 1st, 15th, 22nd, March 1st, 8th, and PROJECT DEADLINE - MARCH 19th. Posted by Rhoadzie at 9:23 AM No comments: Thursday, January 24, 2013 MAKE A LIST OF: 10 THINGS YOU LOVE TO DO AND LEARN 10 THINGS YOU ARE GOOD AT 10 THINGS YOU WONDER Write the answers to these 30 questions in your own BLOG. Are you struggling trying to find an IDEA for Innovation Day? Here is a link that might spur your GENIUS: Kreb's Class Blogs If you still aren't sure come to me. Maybe together we can figure it out! Posted by Rhoadzie at 10:34 PM No comments: Friday is INNOVATION DAY #1 in my Math Classes. I want to see this implemented school wide. But, I need to experience it myself. So, my classes are having Innovation Day every Friday for the rest of the year. What is INNOVATION DAY? I got the idea from Josh Stumpenhorst, Stumpteacher Blog . Other schools have called it "Genius Hour" and you can even follow #geniushour on Twitter. The idea is based on the 80/20 Rule at Google. Google give it's Engineers 20% of their work week to spend creating things that interest them. This work is completely separate from their usual daily assignments. Other companies have done the same thing. This policy has helped Google remain the leading Search Engine and true Internet leader. It is the GOLD STANDARD for companies across the world. I'm bringing Google's 80/20 rule to Garrett High School. My students have been given every Friday through the end of the year to Innovate, Create, Research, Study, and learn about what THEY want to learn about. Tomorrow, we will begin the road to teaching CREATIVITY to students through practicing . The other 80% of the time we, as teachers, will lead and guide students in the paths Tomorrow they will do the leading. I introduced this topic on Monday and already MANY of my students are planning their projects. I have a student who is going to work at creating an IPAD app for me. Neither of us know exactly how to do this. But, he has already been researching how to make an APP. Frankly, this student is brilliant in many ways. He has a great mind for computers and programming. He has already some pretty complicated computer programming. I can't wait to see what he creates. I have another group of students who will be working on a film. They are also pumped and have done some preliminary work on their project. I do have a LARGE number of students that DON"T HAVE AN IDEA WHAT TO DO! Unfortunately, this is a SAD commentary on education. We have sucked the creativity out of our students. Many of our students struggle when we leave them on their own to create. I even see this in my traditional math lessons. When we do "DISCOVERY" types of lessons, I get the most whining and complaining. Students would prefer to be given a text book full of identical problems that they can workout like a factory. BORING! and not very LIFE LIKE! Students have learned the system. Most of our students have figured out "THE SYSTEM" and are comfortable with it. Then here comes "Mr. RHOADES" and he changes the system. It really stretches them and makes them uncomfortable. If we want students to create, we are going to have to help them to their creativity. SO, what does this have to do with MATH? One of the most important objectives I have for my MATH students is to teach them to plan, organize, and problem solve. Math teaches you to take problems, break them down into small pieces, organize the pieces, and follow theses pieces to a logical solution. In real life, problem solving, producing, or creating may or may not involve a lot of numbers and formulas. But, all the things I teach my students are crucial to creating and being successful. They may never have to use the Quadratic Equation again after they graduate. But, the REAL math skills we teach them will be used everywhere! Tomorrow is the start of a very SCARY day for me, the traditional MATH teacher. I am already apprehensive. Do you know why? Because I'm not going to be in control, the students are. That is scary for this 52 year Math Teacher. WISH ME LUCK or should I say, WISH MY STUDENTS SUCCESS! Posted by Rhoadzie at 9:16 PM No comments: Sunday, January 13, 2013 It's been quite some time since I posted on my Math Blog. I got busy over the Holidays and the Girls Basketball Schedule has kept my pretty busy. I wanted to post since we are nearing the end of the 2nd term. I think it's important to take some time in as we have only 4 days left to the end of the term. This means I will be done with Algebra 1 and adding Math Applications to the daily schedule. I am excited and nervous about this new class. is new to me and new to education, for the most part. I am 52 years old. I spent my k-12 career in the standard classroom. Then I spent my college years and my teaching years in math classes. Even this year, my Algebra 1 class and Geometry class have been more traditional than I like. I'm not against TRADITIONAL Math Classes. I think they have an important place in the education of our students. However, if we are to prepare our students for their future, we need to walk a balance beam between the Abstract and the REAL WORLD. I know this is hotly debated. Again, I am not against traditional Math education. I think it is important to stretch our students by abstraction. But, the attempts to interject Real World Math into our education system has been weak. So, I'm delving into the world REAL WORLD MATH to High School students. It will be a learning experience for all of us. As we enter the last week, we will spend our 4 remaining days, preparing and taking the final exams in both Geometry and Algebra. This will be the last chance for some of these students to overcome their poor performance on the ECA to try to salvage a passing grade. Geometry is an all year class and we will take some time this week to reflect on what we have done and what we will do in the 2nd half of the year. The most important reflection tool for this week will be the Student Course Assessment tool. Thursday, I will get some very important feedback from my students about Algebra and Geometry. I feel it is so important for my students and I to BOTH be . I need to understand their thoughts about education and use them to make a more meaningful course. Here is the schedule for the week: Geometry Schedule -- January 14th-17th Algebra 1 Schedule -- January 14th-17th Posted by Rhoadzie at 2:55 PM No comments:
{"url":"http://ghsmath2012.blogspot.com/","timestamp":"2014-04-16T10:09:59Z","content_type":null,"content_length":"90631","record_id":"<urn:uuid:e6c973d1-0627-4e7a-81e6-108532e7f1c9>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
About "Hypermedia Textbook for Grades 6-12 (MathMol)" Description: Designed for middle school students, but also useful for students of high school chemistry. Introductory concepts: mass, weight, volume, density, scientific notation, our 3-dimensional world, VRML, geometry quiz, the geometry of 2 and 3 dimensions, and mathematical equations. A Model of Matter: structure of an atom, bonding of atoms, and motion of molecules. Structure and Properties of Important Molecules: water and ice, the element, carbon, simple carbon compounds, molecules of life, materials, and drugs. Appendix of Structures: water and ice, carbon, hydrocarbons, lipids, DNA, amino acids, sugars, photosynthetic pigments, drugs, and math structures.
{"url":"http://mathforum.org/library/view/4745.html","timestamp":"2014-04-16T11:14:48Z","content_type":null,"content_length":"5713","record_id":"<urn:uuid:ffa77387-c884-4589-8029-180179954bf1>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
Markov's Marbles A container holds 2 green marbles and 9 red marbles. Two players, A and B, alternately pick a marble from the container. If it's red, the marble is replaced. If it's green, it is kept. The player who gets the second green marble wins. What are the odds of winning for the player who draws first? Problems like this can be solved by means of a simple Markov model. The game is either in State 1 (no greens have yet been drawn) or State 2 (one of the greens has been drawn and removed). In State 1, each draw has probability 2/11 of going to State 2, and 9/11 of staying in State 1. In State 2, each draw has probability 1/10 of ending the game (by drawing the second green) and 9/10 of staying in State 2. So, if we let P1[n] and P2[n] denote the probability of being in states 1 and 2 respectively after the nth draw, we begin with P1[0]=1 and P2[0]=0. Thereafter, the probabilities can be computed by the recurrence relation P1[n+1] = (9/11)P1[n] P2[n+1] = (2/11)P1[n] + (9/10)P2[n] Letting W[n] denote the probability that the game has been won by someone after the nth draw, we obviously have W[n] = 1 - P2[n] - P1[n] Now, the recurrence relation can be written in matrix form as P[n] = M P[n-1] where P[n] is the column vector of probabilities and M is the transition matrix | 9/11 0 | M = | | | 2/11 9/10 | From this we also have the simple closed-form expression for the nth probability vector P[n] = M^n P[0]. We want to know the probability of winning for the player who draws first. The two players alternate, so the one who draws first will also draw 3rd and 5th and so on. Thus, the question is: what is the probability that the game will end on an odd-numbered draw? The probability of ending the game on the first draw is W[1]-W[0], and of winning on the 3rd draw is W[3]-W[2], and so on. In general, the probability of winning on the (2n+1)th draw is W[2n+1] - W[2n] = (1-P1[2n+1]-P2[2n+1]) - (1-P1[2n]-P2[2n]) = {P1[2n] + P2[2n]} - {P1[2n+1] + P2[2n+1]} So, the sum of all the probabilities of ending the game on an odd- numbered draw is given by the sum of the components of the vector P[0] - P[1] + P[2] - P[3] + ..., and this equals the geometric series (I - M + M^2 - M^3 + M^4 - ...)P[0] Since P[0] is just [1,0]^T, we want the sum of the left-hand column of the matrix | 11/20 0 | (I+M)^-1 = | | | -1/19 10/19 | So the probability of a win on the odd draws is 11/20 - 1/19, which equals 189/380 = 0.49736... Of course, we could have computed the probability of a win on the even draws, simply by bumping the index up one number, i.e., multiplying by M, to give | 9/20 0 | M(I+M)^-1 = | | | 1/19 9/19 | which confirms that the "even" probability is, as expected, 9/20 + 1/19 = 191/380 = 0.50263... Obviously this approach can be extended to cover any number of red and green marbles. If there are N green marbles, the transition matrix will be of size N x N. Return to MathPages Main Menu
{"url":"http://mathpages.com/home/kmath503.htm","timestamp":"2014-04-21T09:36:55Z","content_type":null,"content_length":"3837","record_id":"<urn:uuid:1af4def9-e37c-4942-82e9-62e4f6566c01>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with assignment March 21st, 2013, 02:00 AM Help with assignment The ex given n ( number of problems), L (time of one day) , C ( a cont to calculate thi DIs). The input given us n, L, C, and an timeArr[] included the time need to solve the problem i. For example. The input That mean we have n=10 prolble have to solve, L=120, C=10, and an array mean , we need 80 time to solve problem 1, 80 to solve probem 2, 10 to solve problem 3.... All the problem need to solve step by step We need to find out the shortest day to solve all the problem and the DI must be minumun. The DI is the rest time of one day... lớp bằng giá trị DI như sau: DI=0 when t=0 DI=-C when 1<=t<=10 DI=(t-10)^2 when t>10 t = the rest time of one day. In that example, we must calculate that need minimux 6 day to solve all the problem, and group that (80) (80) (10,50,30) (20,40,30) (120) (100) So the result of this example is 6 day and minDI is 2700. ================================================== ==================================== I could not have any algorthym for this ... Can you help me!!!! Thanks alot March 21st, 2013, 04:10 AM Re: Help with assignment Show what code you have done so for. Make sure to put it in code tags as well. <code here> March 21st, 2013, 01:02 PM Re: Help with assignment This is a repost!!!
{"url":"http://www.javaprogrammingforums.com/%20whats-wrong-my-code/26068-help-assignment-printingthethread.html","timestamp":"2014-04-19T05:19:16Z","content_type":null,"content_length":"5566","record_id":"<urn:uuid:7fbaab29-00bb-46b4-92db-95b26be297ee>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00069-ip-10-147-4-33.ec2.internal.warc.gz"}
Wakefield, MA Math Tutor Find a Wakefield, MA Math Tutor ...An experienced tutor not only focuses the student on the most important facts and concepts, but also provides coaching on how to allocate your time during the test, how to analyze the structure of a question to gain insight into the probable answer, and how to improve your odds of guessing correc... 44 Subjects: including econometrics, algebra 1, algebra 2, calculus I have three years of experience tutoring in physics, math, biology, and chemistry. I have worked mostly with college level introductory courses and high school students. I have worked with many students of different academic levels from elementary to college students. 19 Subjects: including calculus, ACT Math, SAT math, trigonometry ...I work with students of any age. From Parents and Students: "You have really made a difference and bonded nicely with my daughter. You have a gift with teenagers, as with SAT's work I doubt there is ever much laughter. 38 Subjects: including geometry, grammar, prealgebra, algebra 1 ...I acquired teaching experience through working as a teaching assistant, in the Master's program. The work involves teaching introductory laboratory classes in Physics. This includes short lectures, demonstration of experiments, helping students to perform their experiments, and grading of laboratory reports and the final exam. 15 Subjects: including geometry, physics, SAT math, GRE ...I conduct exciting, cutting-edge research in a materials chemistry laboratory. Since entering graduate school, I have been a general chemistry lab teaching assistant, physical chemistry II (quantum) teaching assistant, and advanced physical chemistry (graduate level) teaching assistant. I have ... 10 Subjects: including algebra 1, algebra 2, calculus, prealgebra Related Wakefield, MA Tutors Wakefield, MA Accounting Tutors Wakefield, MA ACT Tutors Wakefield, MA Algebra Tutors Wakefield, MA Algebra 2 Tutors Wakefield, MA Calculus Tutors Wakefield, MA Geometry Tutors Wakefield, MA Math Tutors Wakefield, MA Prealgebra Tutors Wakefield, MA Precalculus Tutors Wakefield, MA SAT Tutors Wakefield, MA SAT Math Tutors Wakefield, MA Science Tutors Wakefield, MA Statistics Tutors Wakefield, MA Trigonometry Tutors Nearby Cities With Math Tutor Arlington, MA Math Tutors Belmont, MA Math Tutors Burlington, MA Math Tutors Chelsea, MA Math Tutors Danvers, MA Math Tutors Lexington, MA Math Tutors Lynnfield Math Tutors Malden, MA Math Tutors Melrose, MA Math Tutors Reading, MA Math Tutors Saugus Math Tutors Stoneham, MA Math Tutors Wilmington, MA Math Tutors Winchester, MA Math Tutors Woburn Math Tutors
{"url":"http://www.purplemath.com/wakefield_ma_math_tutors.php","timestamp":"2014-04-21T00:02:57Z","content_type":null,"content_length":"23802","record_id":"<urn:uuid:1e2698c8-67a4-4969-93a4-98072bc16ed2>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
How do they expect the offense to function? [Archive] - RedsZone.com - Cincinnati Reds Fans' Home for Baseball Discussion During the end of the Bowden years and then again during the O'Brien era, the reds seemed to evaluate starting pitchers by the number of wins the starting pitcher had a couple of years ago. We heard about it with Ramon Ortiz and Paul Wilson and Jimmy Haynes. "He was a 15 game winner two years ago" like that was an individual accomplishment and not a function of the quality of his teammates. As a predictor of future success that particular measurement didn't work out very well. Anybody who has played any fantasy game could have told them as much. At it's root the whole moneyball idea is to find a place where the market for talent is undervalued and exploit that inefficiency. The A's looked for On Base Percentage which is the single most valuable offensive number and defense. The idea was that Your gus would get on base and not use outs on offense and on defense you would turn more of the balls in play into outs. The reds on the other hand seem enamored with power. Cantu, Gill, Gonzales to a lesser degree Phillips....they're all decent enough players whose primary offensive value doesn't lie in their ability to get on base, but in their ability to drive the ball for extra bases. David Ross is in the same bin. His OBP is negligible. Almost all of his offensive value lies in his homers. Cody Ross was the same way. Given that the offense already has Dunn and Junior and a park where even powerless players can crack out a homer now and then, is this a reasonable approach to building a team?
{"url":"http://www.redszone.com/forums/archive/index.php/t-60961.html","timestamp":"2014-04-18T01:04:18Z","content_type":null,"content_length":"9215","record_id":"<urn:uuid:0594caf3-f905-4454-aa48-1c4ad80f747a>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of stone's-throw Stone's representation theorem for Boolean algebras states that every Boolean algebra to a field of sets . The theorem is fundamental to the deeper understanding of Boolean algebra that emerged in the first half of the 20th century. The theorem was first proved by (1936), and thus named in his honor. Stone was led to it by his study of the spectral theory on a Hilbert space Stone spaces Each Boolean algebra B has an associated topological space, denoted here S(B), called its Stone space. The points in S(B) are the ultrafilters on B, or equivalently the homomorphisms from B to the 2-element Boolean algebra. The topology on S(X) is generated by a basis consisting of all sets of the form $\left\{ x in S\left(X\right) mid b in x\right\},$ is an element of For any Boolean algebra B, S(B) is a compact totally disconnected Hausdorff space; such spaces are called Stone spaces. Conversely, given any topological space X, the collection of subsets of X that are clopen (both closed and open) is a Boolean algebra. Representation theorem A simple version of Stone's representation theorem states that any Boolean algebra B is isomorphic to the algebra of clopen subsets of its Stone space S(B). The full statement of the theorem uses the language of category theory; it states that there is a duality between the category of Boolean algebras and the category of Stone spaces. This duality means that in addition to the isomorphisms between Boolean algebras and their Stone spaces, each homomorphism from a Boolean algebra A to a Boolean algebra B corresponds in a natural way to a continuous function from S(B) to S(A). In other words, there is a contravariant functor that gives an equivalence between the categories. This was the first example of a nontrivial duality of categories. The theorem is a special case of Stone duality, a more general framework for dualities between topological spaces and partially ordered sets. The proof requires either the axiom of choice or a weakened form of it. Specifically, the theorem is equivalent to the Boolean prime ideal theorem, a weakened choice principle which states that every Boolean algebra has a prime ideal. See also A monograph available free online:
{"url":"http://www.reference.com/browse/stone's-throw","timestamp":"2014-04-18T22:36:17Z","content_type":null,"content_length":"78850","record_id":"<urn:uuid:87b26be7-0c2a-4752-b19d-3ba4f32fbf5c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry Coordinate 1. The problem statement, all variables and given/known data Given that Z is a complex number with condition |Z-1|+|Z+1|=7 Illustrate Z on Argand Diagram and write out the equation of Locuz Z I attempted to figured out the equation of locus Z, [tex]\sqrt{}[(x-1)^2+y^2][/tex] + [tex]\sqrt{}[(x+1)^2 + y^2][/tex] = 7 [tex]\sqr{}x^2 + 1 - 2x + y^2 + x^2 + 1 + 2x + y^2 = 49[/tex] [tex]\sqr{}2x^2 + 2y^2 = 47[/tex] it's not necessary the correct answer though... however, I can't figure how to illustrate the diagram! help!
{"url":"http://www.physicsforums.com/showthread.php?t=181297","timestamp":"2014-04-19T22:47:51Z","content_type":null,"content_length":"22644","record_id":"<urn:uuid:e501c5eb-ff6e-4c87-a933-ca0e8254cfde>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
North Houston Calculus Tutor Find a North Houston Calculus Tutor ...Not every student thinks in the same way, so I explain the subject in a variety of ways. My approach is to identify the gaps in a student's knowledge and fill those gaps. I don't aim for fun, but we usually laugh along the way. 41 Subjects: including calculus, chemistry, English, reading ...As a certified math teacher in Cy-Fair and as a tutor, I believe that anyone can learn and understand math. Those problems that appear complicated are just a series of simple concepts woven together. I deconstruct the problem for students so they can understand the simple problems woven in to what "appears" to be a complicated problem. 16 Subjects: including calculus, reading, GRE, algebra 1 ...So that’s how it works!” moments as algebra becomes more familiar and understandable. Algebra 2 builds on the foundation of algebra 1, especially in the ongoing application of the basic concepts of variables, solving equations, and manipulations such as factoring. My approach in working with yo... 20 Subjects: including calculus, writing, algebra 1, algebra 2 I have been a private math tutor for over ten(10) years and am a certified secondary math instructor in the state of Texas. I have taught middle and high-school math for over ten (10) years. I am available to travel all over the greater Houston area, including as far south as Pearland, as far north as Spring, as far west as Katy and as far east as the Galena Park/Pasadena area. 9 Subjects: including calculus, geometry, algebra 1, algebra 2 ...While tutoring with Spring Branch, I worked with 12th grade students who had failed Algebra II and who were in summer school. At Kumon I worked with elementary age students going over fractions and getting comfortable working with three digit and large numbers. I strongly believe that it is imp... 15 Subjects: including calculus, geometry, GRE, algebra 1
{"url":"http://www.purplemath.com/North_Houston_Calculus_tutors.php","timestamp":"2014-04-21T11:05:24Z","content_type":null,"content_length":"24303","record_id":"<urn:uuid:1fdf13c6-10bd-4420-b0ff-f48e04f55f4a>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
coordinates for llangrannog You asked: coordinates for llangrannog • 52°09'42.57"N and 4°28'02"W the group of objects: the latitude 52 degrees 9 minutes and 42.57 seconds north and the longitude 4 degrees 28 minutes and 2 seconds west • 52°09'42.57"N and 4°25'37.11"W the group of objects: the latitude 52 degrees 9 minutes and 42.57 seconds north and the longitude 4 degrees 25 minutes and 37.11 seconds west • 52°09'37"N and 4°28'02"W the group of objects: the latitude 52 degrees 9 minutes and 37 seconds north and the longitude 4 degrees 28 minutes and 2 seconds west • 52°09'37"N and 4°25'37.11"W the group of objects: the latitude 52 degrees 9 minutes and 37 seconds north and the longitude 4 degrees 25 minutes and 37.11 seconds west Say hello to Evi Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site. Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
{"url":"http://www.evi.com/q/coordinates_for_llangrannog","timestamp":"2014-04-19T23:14:33Z","content_type":null,"content_length":"59235","record_id":"<urn:uuid:e04f3510-5b69-424e-81e6-c19662af66d6>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
System of Equations and the Substitution Method Date: 06/07/98 at 21:54:10 From: Rachael Poindexter Subject: System of equations: Substitution method I have been flipping page after page in every single math book I own, and none of them tells how to do the Substitution method, yet they all talk about it. I was wondering if you could send some directions and perhaps an example. I appreciate any help you might be able to give me. Thanks! --Rachael Poindexter Date: 06/08/98 at 12:48:33 From: Doctor Gary Subject: Re: System of equations: Substitution method Suppose, for example, that I told you that Susan was two years younger than Greg, and asked you Susan's age. All you would know was that: s = g - 2 Now, if I told you that Greg was 18, you could "substitute" 18 for g in the first equation, and learn that Susan was 16. "Substitution" is based on the principle that, although we can't solve an equation with two unknowns, we can solve it if we can find a way to re-express the equation with only one unknown. If you had the following two equations: 3x + 2y = 10 2x - y = 4 all you would have to do is use of the equations to express one of the variables in terms of the other. For example, you could use the second equation to express y as 2x - 4 (can you see the steps you'd take to get that result?). Now you can "substitute" (2x - 4) for y in the first equation: 3x + 2(2x - 4) = 10 3x + 4x - 8 = 10 7x = 18 x = 18/7 If x is 18/7, then 2x is 36/7, so y must be 8/7 for the second equation to be true. Be sure to "test" your answers by trying them out in the other equation: 3(18/7) + 2(8/7) = (54 + 16)/7 = 70/7 = 10 -Doctor Gary, The Math Forum Check out or web site! http://mathforum.org/dr.math/
{"url":"http://mathforum.org/library/drmath/view/53128.html","timestamp":"2014-04-19T10:20:48Z","content_type":null,"content_length":"6772","record_id":"<urn:uuid:4a9f2110-cc3a-422f-a517-bed5fda96910>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
Knowers of the Universe Extract No.26 Ancient Pi ( Knowers of the Universe Charles William Johnson Any practical attempt to divide the diameter of a circle into its own circumference can only meet with failure. Such a procedure is entirely theoretical in nature. Dividing unlikes, a straight line (the diameter of a circle) into a curved line (the circumference of a circle) can only be met with frustration. The kind of frustration that is portrayed throughout history in humankind's attempt to measure the incommensurable. No matter how hard one may try, even with the assistance of contemporary electronic computers, bending either the straight line or the curved line, alters the nature of the problem and yields an impossibility. As soon as one of the lines is bent the results are tainted. Then, there is the question of the very thickness of the lines being measured in length. Whether one measures the inner part of the curved line of the circumference or the outer edge makes a great deal of difference; especially, when one is attempting to achieve an exactness in the concept of pi If we realize that the measurement of the ratio between the diameter and the circumference of a circle is entirely theoretical and speculative, then we may also realize that the result shall always represent an approximation. In fact, the very fact that pi is always expressed in terms of an unending fraction (with mathematicians searching it to the nth number of decimal places), should cause us to accept the idea that pi can only be an approximation. (As Lambert illustrated in 1767, " Once we realize that pi represents a fractional expression in numbers, it were as though either nature itself were wrong, or the numbers must surely be able to be manipulated to render whole numbers. The ancients sought to work with whole numbers. However, once we realize that the ancient reckoning system may have been based upon the concept of a floating decimal place, then we should understand that all numbers, in fact, may be visualized as whole numbers. The cut-off point becomes one of arbitrary choice at times. With regard to the concept of pi, contemporary mathematicians have not decided to accept that arbitrary cut-off point, and continue to search for the unending decimal expression of pi. At one time, not too long ago, pi was simply represented to be 3.1416; and, in a practical sense, it served all purposes of constructing things out of matter and energy. Today, the unending expression of pi to hundreds of thousands of decimal places serves no practical purpose that we know of, at least, other than that of an unending contest to discover the ultimate expression of pi. One has only to admire the relation of the diameter of any circle to its circumference to note that particular expression. Throughout history, the expression of pi has taken on many variations. Petr Beckmann (Cfr., A History of , Golem, 1971), offers an exemplary analysis of the concept throughout history. The Babylonians 3 1/8; the Egyptians 4(8/9)²; Siddhantas, 3.1416; Brahmagupta, 3.162277; Chinese, 3.1724; Liu Hui, 3.141024 < Liu Hui, 3.14159; Tsu Chung-Chih, 3.1415926 < Archimedes, 3.14084 < 1/7); Heron, 3.1738; Ptolemy, 3.14167; Fibonacci, Viète, 3.141592635 < Beckmann (p.101): "There is no practical or scientific value in knowing more than the 17 decimal places used in the foregoing, already somewhat artificial, application". Nonetheless, in 1844, Johann Martin Zacharias Dase calculated Decimal hunting games aside, the practical uses of knowing pi (the ratio of the diameter of a circle to its circumference) even as an approximation has infinite applications in astronomy. And, the ancients were on the whole astronomers; knowers of the universe. This ratio becomes significant in calculating the movements of the planets and the stars; in computing their coming and going in the sky. Once more, since we are dealing with movement, the movement of the planetary bodies and the stars, we are always speaking about approximations; even in and especially so in astronomy. Therefore, the approximations to pi serve a purpose in knowing the approximate movements of the planets. Such are the problems concerning the measurement of moving bodies. As soon as they have been measured, they have already moved from that measurement. When we observe the measurements offered by Tsu Chung-Chih given above, it becomes obvious that ancient approximations were at times far ahead of latter day computations. And, then there is the problem that one may obtain pi to the nth decimal place, but such decimal expressions are beyond the human capacity to measure or even observe matter-energy to such a minute degree. In our analyses, we cannot cite any specific ancient documents for the computation of pi among the ancients. Yet, the historically significant numbers that do exist within the ancient reckoning systems may reveal some partial aspects of the computations themselves. No matter which contemporary studies we examine, pi is always given in relation to the number ca. 3.1-something, as a guidepost. Yet, it may be the case that the ancients conceived of pi in relation to the number of divisions that made up the circle; the number of degrees or segments contained therein. The concept of pi refers to the constant ratio of the diameter:circumference of any circle; irrespective of the number of degrees contained within that circle. Historically, the Babylonians came to use the number 360 for the divisional segments within a circle, and we have employed that same number ever since. The abstracted universal circle, then, would have a constant diameter of one (1.0), and the length of its circumference would be pi ( Now, if we consider the circumference to be divided into 360 degrees (or segments; angular divisions with lines cutting through the center of the circle as we know them), then using the contemporary figure for pi (3.141592654), the length of the circumference could be 360 units, while the length of the diameter would be 114.591559 (i.e., 360 / Now, let us suppose that the circle is divided into 260 degrees (something that we are unaccustomed to considering, in fact). If we employ the same length of the diameter of the previous example (114.591559), then the relational figure for pi for a 260-degree circle would be: 2.268928028. With that something very intriguing developes. Within ancient Nineveh, there exists an historically significant fractal number cited as 2268. One could imagine that the 2268 fractal number may relate to the concept of proportion (i.e., pi) regarding a 260-division circle. The number 260 is relevant because during ancient times there existed in various cultures a calendar based on a 260c day-count. Furthermore, the Great Cycle of the sun, known as Precession, also involves a fractal of 260 (i.e., 26000 years). Now, were we to consider the Nineveh number for representing pi on a 260-degree circle, then the constant value for the diameter would then be 114.638448 (i.e., 260 / 2.268). Throughout history, an inexact representation of pi has always been cited as that of 3 1/7 (or, 3.142857); a reciprocal of seven number. However, when we consider that the length of the diameter of a 360-degree circle yields a number that approximates a reciprocal of seven number (114.591559), we can consider the possibility of employing 114.285714 in its place. The use of the reciprocal of seven number (114.284714) for the length of the 360 and 260 circle would offer the following values for pi, respectively: 360 / 114.285714 = 3.150000008 (pi proportion for 360c circle) 260 / 114.285714 = 2.275000006 (pi proportion for 260c circle) Note that the 3.15 number offers a mediatio/duplatio series based on the 63c, which was significant in ancient reckoning systems: 315, 630, 1260, 2520, 5040, 10080, 20160, 40320, 80640, 161280, 322560, 645120,1290240, 2580480 (a Precession number/fractal); and, 63, 126, 189, 252, 315, 378, 441, 504, 567 (kemi), 630, 693 (Sothic), 756 (Giza), 819 (k'awil; maya), 882, 945, 1008, 1071, 1134 (Nineveh, 2 x 1134 = 2268), 1197, etc. Note that the 2275 fractal number is relevant for the computational series within the ancient reckoning system of the 364c day-count: 2275, 4550, 9100, 18200, 36400, etc. Also, note that the difference between the Nineveh 2268c and the pi-like number 2275 is seven (2275 - 2268 = 7); which could be easily translated from one series to the other by remainder math based on multiples of Many of the distinctive historically significant numbers of the ancient reckoning system reflect a relationship based on the reciprocal of seven. Consider the maya long count period number of 1872 000, which has received so much speculation regarding its beginning and ending date. Also, consider the period called the k'awil of the maya cited as consisting of 819c days. Now, notice the number that obtains from the division resulting from half of the long count period figure by the k'awil: 936 / 819 = 1.142857143. The same figure obtains regarding the constant length of a diameter of a circle based on a pi-like number in relation to the reciprocal of seven as explained earlier. Other relationships obtain regarding similar historically significant numbers from other systems. The Great Pyramid entails the number 756c as its baseline. Also, there exists the 432c number/fractal associated with the Consecration. If we double the 432 figure and divide by the 756c, the same result obtains: 864 / 756 = 1.142857143. Consider: 360 x .864 = 311.04 (31104 being an historically significant number for China and Mesoamerica). The significance of seven and its reciprocal becomes obvious throughout the historically significant numbers/fractals. Even the obvious relationship, of the 364c day-count of ancient Mesoamerica, which was employed for computations, reveals a direct basis of seven: 364 / 7 = 52. Immediately, one will recognize the 52c that is so well-known in ancient Mesoamerica as the calendar round (52 years times 365 days = 18980 days; and 52 years times 360 days = 18720! days). And, the ancient kemi appear to have employed a 54c in its place: 7 times 54 = 378 (2 x 378 = 756; or, 7 x 108 = 756). No matter where one turns, the number seven and its reciprocal make their appearance. The reasoning behind this procedure may be rather obvious, although we have not discerned it previously. The number 1.142857143 concerns the ratio 8/7ths. The Aztec Calendar appears to be based upon a spatial division that reflects the logic of 7:8 or 8:7, depending upon the rings and segments to be considered (Cfr., Earth/matriX No.88). If one were attempting to consider the diameter of the Solar System, or the Universe, knowing that these events consist of imaginary circles (ellipses), then the use of the unit 1.0 for the length of their respective diameters would not be of much value. And, furthermore, if the ancients had employed the contemporary (and possibly past) concept of pi (based on a close approximation to 3.141592654, give or take a fraction), then the numbers would have been unmanageable and not very attractive. The apparent relational aspects of the many different historical numbers found in the many distinctive ancient reckoning systems suggest a common origin and reasoning. If the length of the diameter of the solar system or the Universe were assigned a value consisting of the reciprocal of seven (i.e., 1.142857143), then this would be the next best thing to working with whole numbers for computing the time cycles of the movement of the planetary bodies and the stars. Furthermore, knowing the actual measurement of pi (the exact proportion of the diameter:circumference ratio) could be compensated with remainder math adjustments quite easily. Consider the following computations: 1.142857 x 819 = 935.999883 (936) (maya long count fractal) 1.285714 x 819 = 1052.999766 (1053) 1.428571 x 819 = 1169.999649 (1170) (Venus sidereal count) 1.571428 x 819 = 1286.999532 (1287) 1.714285 x 819 = 1403.999415 (1404) (kemi count; 351c) 1.857142 x 819 = 1520.000298 (1521) (39²) 1.142857 x 315 = 359.999955 (360) (360c; kemi; maya) 1.285714 x 315 = 404.99991 (405) (1296000c; kemi) 1.428571 x 315 = 449.999865 (450) (maya long count; 9 base system) 1.571428 x 315 = 494.99982 (495) (99c lunar count) 1.714285 x 315 = 539.999775 (540) (kemi count) 1.857142 x 315 = 584.99973 (585) (Venus synodic count) One of the most interesting realtionships of this nature concerns the 2268c Nineveh count: 1.142857 x 2268 = 2591.999676 (2592) (Platonic Year, 25920 years) Scholars consider the figure of 3 1/7ths to have been an erroneous computation for pi. Yet, we have never really known how the ancients computed their mathematics. The few documents that remain (such as the Rhind document of the ancient kemi) concern everyday matters; not the mathematics and geometry of the study of the Universe. By employing the reciprocal of seven in the computations, which is what an initial analysis of the historically significant numbers reveals, the ancients may have been seeking an easier method for arriving at their knowledge of the Universe than what is offered by the precise unending fractional expression of pi, the proportion of the diameter to the circumference of a circle. This may be further understood when we realize that the comings and goings of the planetary bodies and the stars throughout the Universe do not travel on perfect pi-like circles. The ancients may have employed distinct constant fractals/numbers for adjustments in their computations: the length of the diameter may have been based on 114.2857, 114.591559, 114.638448; etc; the distance of the circumference may have been related to the 260c, 360c, 378c, 936c, etc.; and, the pi ratio (proportion) of the diameter:circumference may have been 2.268, 3.15, 3.1416, 3.142857, 819, etc. The distinctive historically significant numbers reflect different aspects of the computations and their corresponding adjustments. From this dynamic perspective, the historically significant numbers may be communicating to us a much more precise knowledge of astronomy and mathematical and geometrical computations than we have been willing to concede to the ancients. e-mail: Charles W. Johnson ©1998-2013 Copyrighted by Charles William Johnson. All rights reserved. Ancient Phi and Its Extension The Concept of Pi and the Ancient Reckoning Numbers (Trigono/metriX)
{"url":"http://www.earthmatrix.com/ancient/pi.htm","timestamp":"2014-04-21T12:08:46Z","content_type":null,"content_length":"31377","record_id":"<urn:uuid:882a4879-9d02-42d8-9e03-73780d25d911>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Triangle PQR has vertices P(0, 1), Q(0, -4), and R(2, 5). After rotating triangle PQR counterclockwise about the origin 45º, the coordinates of P' to the nearest hundredth are (-0.71, ? • one year ago • one year ago Best Response You've already chosen the best response. You don't even have to rotate the triangle... just rotate the point P. Best Response You've already chosen the best response. y coordinate is also the same.. Best Response You've already chosen the best response. so you would basically switch the x points to y points? Best Response You've already chosen the best response. Best Response You've already chosen the best response. no, you can't swap coordinates if you only rotate 45 degrees. Best Response You've already chosen the best response. The point P is 1 unit away from the origin, right? One unit up on the y axis. The rotated point P' also needs to be 1 unit away from the origin. Best Response You've already chosen the best response. However, it will be in the middle of that top left quadrant, since a 45 degree rotation doesn't get it all the way down to the x axis Best Response You've already chosen the best response. You are doing good dude @JakeV8 I'll leave it to you.. Best Response You've already chosen the best response. @Timtime does that help you get the idea? What I would do next is sketch a line from the origin at that 45 degree angle, and label its length as "1", and then realize that it is a hypotenuse Best Response You've already chosen the best response. That angled side from P' to the origin makes the hypotenuse of a right triangle if you drop a side down from point P' to the x axis... Best Response You've already chosen the best response. Best Response You've already chosen the best response. were taught to use use matrices to solve the problem {0 -1} {1 0} Best Response You've already chosen the best response. interesting. Is a 45 degree rotation something you can express as matrix transformation? Best Response You've already chosen the best response. yes its a matrix rotation Best Response You've already chosen the best response. I know the answer based on the shape and right triangle math, but you could probably generalize the idea of a rotation of any angle. I hadn't thought of that before :) Best Response You've already chosen the best response. its weird how my online class is doing it Best Response You've already chosen the best response. Does knowing the answer help you figure out how to set up the matrix rotation, working backward? Best Response You've already chosen the best response. it really would like i have already done like so much with this online working backwords would be different Best Response You've already chosen the best response. ordinarily I wouldn't just want to give you the answer, but in this case, knowing the answer might help figure out what sort of matrix rotation would have been appropriate. I'm not sure I know how to do it that way though... The point P needs to rotate from (0,1) to (-(sqrt(2)/2, sqrt(2)/2) ), so about (-0.707, 0.707) Best Response You've already chosen the best response. or (-0.71, 0.71) to the nearest hundredth. Best Response You've already chosen the best response. thanks you Best Response You've already chosen the best response. hope it helps :) good luck! Best Response You've already chosen the best response. ur cute Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50732116e4b057a2860d4e8b","timestamp":"2014-04-17T21:25:30Z","content_type":null,"content_length":"91911","record_id":"<urn:uuid:0bfe7b71-5755-4704-a69a-8f526f293abc>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
How best to sum up lots of floating point numbers? up vote 25 down vote favorite Imagine you have a large array of floating point numbers, of all kinds of sizes. What is the most correct way to calculate the sum, with the least error? For example, when the array looks like this: [1.0, 1e-10, 1e-10, ... 1e-10.0] and you add up from left to right with a simple loop, like sum = 0 numbers.each do |val| sum += val whenever you add up the smaller numbers might fall below the precision threshold so the error gets bigger and bigger. As far as I know the best way is to sort the array and start adding up numbers from lowest to highest, but I am wondering if there is an even better way (faster, more precise)? EDIT: Thanks for the answer, I now have a working code that perfectly sums up double values in Java. It is a straight port from the Python post of the winning answer. The solution passes all of my unit tests. (A longer but optimized version of this is available here Summarizer.java) * Adds up numbers in an array with perfect precision, and in O(n). * @see http://code.activestate.com/recipes/393090/ public class Summarizer { * Perfectly sums up numbers, without rounding errors (if at all possible). * @param values * The values to sum up. * @return The sum. public static double msum(double... values) { List<Double> partials = new ArrayList<Double>(); for (double x : values) { int i = 0; for (double y : partials) { if (Math.abs(x) < Math.abs(y)) { double tmp = x; x = y; y = tmp; double hi = x + y; double lo = y - (hi - x); if (lo != 0.0) { partials.set(i, lo); x = hi; if (i < partials.size()) { partials.set(i, x); partials.subList(i + 1, partials.size()).clear(); } else { return sum(partials); * Sums up the rest of the partial numbers which cannot be summed up without * loss of precision. public static double sum(Collection<Double> values) { double s = 0.0; for (Double d : values) { s += d; return s; algorithm language-agnostic floating-point precision add comment 5 Answers active oldest votes For "more precise": this recipe in the Python Cookbook has summation algorithms which keep the full precision (by keeping track of the subtotals). Code is in Python but even if you don't know Python it's clear enough to adapt to any other language. up vote 19 down vote accepted All the details are given in this paper. awesome answer, instant win! – martinus Dec 26 '08 at 19:51 very nice, indeed. – duffymo Dec 26 '08 at 20:01 since i am not a python guy, what does partials[i:] = [x] do? – martinus Dec 26 '08 at 20:32 partials[i:] - slice of a list (named partials). It is a part of a list from an index i to the end of a list. aList[i:] = [x] - cut off the part of a list behind the i-th element and replace it with [x] (list containing only x) – Abgan Dec 26 '08 at 21:32 @martinus: partials[i:] = [x] replaces a slice partials[i:] with a single-element list [x]. For example: a = [0,1,2,3,4]; assert a[2:] == [2,3,4]; a[2:] = [-1]; assert a[2:] = = [-1] and a == [0,1,-1]. – J.F. Sebastian Dec 26 '08 at 21:34 show 1 more comment See also: Kahan summation algorithm It does not require O(n) storage but only O(1). up vote 8 down vote Technically this is possibly not the most accurate answer to the question, but I found it much more useful in practice. It's O(n) in time and O(1) in storage, same as the original naive code, so there's no reason not use this algorithm at the very least when summing numbers. – flodin Jun 13 '10 at 8:08 add comment There are many algorithms, depending on what you want. Usually they require keeping track of the partial sums. If you keep only the the sums x[k+1] - x[k], you get Kahan algorithm. If you keep track of all the partial sums (hence yielding O(n^2) algorithm), you get @dF 's answer. Note that additionally to your problem, summing numbers of different signs is very problematic. Now, there are simpler recipes than keeping track of all the partial sums: up vote 2 down vote • Sort the numbers before summing, sum all the negatives and the positives independantly. If you have sorted numbers, fine, otherwise you have O(n log n) algorithm. Sum by increasing • Sum by pairs, then pairs of pairs, etc. Personal experience shows that you usually don't need fancier things than Kahan's method. add comment Well, if you don't want to sort then you could simply keep the total in a variable with a type of higher precision than the individual values (e.g. use a double to keep the sum of up vote 0 floats, or a "quad" to keep the sum of doubles). This will impose a performance penalty, but it might be less than the cost of sorting. down vote add comment If your application relies on numeric processing search for an arbitrary precision arithmetic library, however I don't know if there are Python libraries of this kind. Of course, all up vote 0 depends on how many precision digits you want -- you can achieve good results with standard IEEE floating point if you use it with care. down vote add comment Not the answer you're looking for? Browse other questions tagged algorithm language-agnostic floating-point precision or ask your own question.
{"url":"http://stackoverflow.com/questions/394174/how-best-to-sum-up-lots-of-floating-point-numbers","timestamp":"2014-04-21T15:38:00Z","content_type":null,"content_length":"85656","record_id":"<urn:uuid:2953746e-b55e-48f8-8ba1-e21ea032753b>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: I need help multi choice 1 question • one year ago • one year ago Best Response You've already chosen the best response. Ill help. But you haven't posted it. Best Response You've already chosen the best response. ...uh, it would be easier to help if you posted the question already Best Response You've already chosen the best response. if you dont put the question we can't help you|dw:1355006496529:dw| Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50c29ddde4b0c673d53d5ae4","timestamp":"2014-04-18T10:39:26Z","content_type":null,"content_length":"41284","record_id":"<urn:uuid:f20f16f6-a536-4137-8bc9-67c6acb57d4e>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
Could you help me some differentiation questions? October 28th 2013, 01:44 PM #1 Junior Member Nov 2012 Could you help me some differentiation questions? Find all points x where f achieves a local maximum or minimum for the following functions (i.e. all the turning points). State whether f has local maximum or local minimum at each point. F(x) = (x^2 + 1) / (x^2 - 1) I have got f'(x) = -4x / (x^2-1)^2 and then what should I do..? I also got f''(x) is {-4(x^2-1) + 16x^2(x^2-1)} / (x^2-1)^4 ..But I don't know what I can do for next.. Please help me Re: Could you help me some differentiation questions? Whoever gave you these problems expects you to know that at a maximum, the first derivative goes from + (going upward) to - (going downward) and so must be 0 at the maximum. Similarly, at a minimum, the first derivative goes from - (going downward) to + (going upward) and so must be 0 at the minimum. You should be able to tell that $(x^2- 1)^2$ is never negative so that the sign of $-4x/(x^2- 1)$ is the opposite of the sign of x. You don't really need to know the second derivative. October 28th 2013, 02:29 PM #2 MHF Contributor Apr 2005
{"url":"http://mathhelpforum.com/calculus/223614-could-you-help-me-some-differentiation-questions.html","timestamp":"2014-04-18T05:47:54Z","content_type":null,"content_length":"33687","record_id":"<urn:uuid:12eea42f-d121-42d9-a84f-45531e9f1d01>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00339-ip-10-147-4-33.ec2.internal.warc.gz"}
Advanced Mathematics 9th Edition By Erwin Kreyszig Sponsored High Speed Downloads Solution Manuals Of ADVANCED ENGINEERING MATHEMATICS By ERWIN KREYSZIG 9TH EDITION This is Downloaded From www.mechanical.tk Visit www.mechanical.tk For More Solution Manuals Hand Books EDITION Advanced Engineering Mathematics ERWIN KREYSZIG Professor of Mathematics Ohio State University Columbus, Ohio @Q WILEY JOHN WILEY & SONS, INC. www.NewCircuits.com . P#14, Problem Set 25.3, Advanced Engineering Mathematics, 9th edition, Erwin Kreyszig . Find a 99% confidence interval for. pin the binomial distribution from a ... Advanced Engineering Mathematics 9th Edition By Erwin Kreyszig Manual Source: http://sci.tech−archive.net/Archive/sci.math.num−analysis/2007−10/msg00353.html ... Advanced Engineering Mathematics 9th Edition By Erwin Kreyszig Manual Re: ... Mathematics ERWIN KREYSZIG Professor of Mathematics Ohio State University Columbus, ... Kreyszig, Erwin. Advanced engineering mathematics / Erwin Kreyszig.—9th ed. p. cm. Required book: Erwin Kreyszig Advanced Engineering Mathematics, 9th edition Course outline: 1. Complex numbers and functions (a) Complex numbers (b) Analytic functions (c) Elementary functions in the complex plane (d) Cauchy-Riemann equations 2. Complex Variables & Transforms Credit Hours: 3+0 Textbook: Advanced Engineering Mathematics 9th edition Author: Erwin Kreyszig Reference book: Advanced Engineering Mathematics Advanced Engineering Mathematics by Erwin Kreyszig ... Advanced Engineering Mathematics has 231 ratings and 17 reviews. Jeremy said: I've ... Advanced Engineering Mathematics, 9th Edition - Free eBooks ... Download Free eBook:Advanced Engineering Mathematics, 9th Edition ... Advanced Engineering Mathematics, Fifth Edition [Erwin Kreyszig] on Amazon.com. *FREE* shipping on qualifying offers. ... Advanced Engineering Mathematics (9th Edition ... erwin kreyszig - advanced engineering mathematics - AbeBooks Advanced Engineering Mathematics Third Edition By Kreyszig, Erwin Advanced Engineering Mathematics Third Edition Details: ... Full Solution Advanced Engineering Mathematics 9th Ed Erwin Kreyszig Instructor’S Manual Advanced Engineering Mathematics, 6th Edition, Peter V. Mathematica Computer Manual to accompany Advanced Engineering Mathematics, 8th Edition [Erwin Kreyszig, E. J. Norminton] on Amazon.com. *FREE* shipping on qualifying offers. ... (9th Edition) ??: Erwin Kreyszig Advanced Engineering Mathematics Books, Book Price Comparison ... ADVANCED COURSES (COMPLETED) MATH 405 – Advanced Calculus for Engineers and Scientists I Professor: ... Book: Advanced Engineering Mathematics (9th Edition), Erwin Kreyszig Covered: Complex variables, complex-valued functions, and integral and Engineering Mathematics Advanced Engineering Mathematics by Erwin Kreyszig - New ... Advanced Engineering Mathematics by Erwin Kreyszig - Find this book online from $0.99. SOLUTIONS MANUAL TO Advanced Engineering Mathematics by Erwin Kreyszig, 9th ed SOLUTIONS MANUAL TO Advanced Engineering Mathematics, 6th Edition by ... SOLUTIONS MANUAL TO Advanced Macroeconomics 2nd edition by David Romer SOLUTIONS MANUAL TO Advanced Macroeconomics, ... Mathematics 4 Advanced Engineering Mathematics» by Erwin Kreyszig, 9th Edition, John Wiley & Sons, Inc. 2006. AE2008 (MP2001) Mechanics of Materials . ... Advanced Engineering Mathematics 9th Edition By Erwin Kreyszig An Introduction To Advanced Mathematics for Engineers, 3rd Edition; By Reddick, H. W., And F. H. Miller; Advanced Mathematics for Engineers, 3rd Edition; Details: ADVANCED MATHEMATICS for Engineers. Department of Mathematics ... Advanced Mathematics Syllabus Tensor Analysis Contra variant and covariant vector, Tensors of various types, Algebra of Tensors and ... Advanced Engineering Mathematics, 9th Edition, Erwin Kreyszig, Wiley. 2. −Advanced Engineering Mathematics by Erwin Kreyszig 8ed solutions manual −Analysis and Design of Analog Integrated Circuits (4th Edition) Gray, ... Advanced Engineering Mathematics, 9th Edition By Erwin Kreyszig Solution to Advanced Macroeconomics 3rd edition romer Advanced Macroeconomics, ... Erwin Kreyszig, who died unexpectedly on December 12, ... E. Kreyszig, Advanced Engineering Mathematics, Wiley, New York, 1962 (9th edition, 2006, 1248 pp.). [3] E. Kreyszig, Introductory Functional Analysis with Applications, Wiley, New York, 1978 Erwin Kreyszig. Advanced Engineering Mathematics With Solution Manual. USA: ... Erwin Kreyszig. Advanced Engineering Mathematics ... Engineering Mathematics. India: CBS, 2003. 9th Edition. ISBN: 8123910029. New. Paperback. Elements of Advanced Mathematics, Third Edition By Krantz, Steven G. ... Advanced Engineering Mathematics, 9th Edition By Erwin Kreyszig Solution to Advanced Macroeconomics 3rd edition romer Advanced Macroeconomics, Solutions Manual Romer Mathematics, 9th Edition By Erwin Kreyszig Solution to Advanced Macroeconomics 3rd edition romer Advanced Macroeconomics, Solutions Manual Romer MATHEMATICS ... Advanced Engineering Mathematics, 9th Edition By Erwin Kreyszig Solution to Advanced 9788126508273 advanced engineering mathematics (8th ed.) erwin kreyszig 579 9788126508334 satellite communications (2nd ed.) timothy pratt, charles bostian, jeremy allnutt 419 ... 9788126531356 advanced engineering mathematics 9th edition erwin kreyszig 579 Erwin Kreyszig, Advanced Engineering Mathematics, 8th edition, John Wiley and Sons, 1999. ... MP2007 Mathematics 4 Advanced Engineering Mathematics» by Erwin Kreyszig, 9th Edition, John Wiley & Sons, Inc. 2006. Course: The text for the course is Advanced Engineering Mathematics, 9th edition by Erwin Kreyszig. The course will cover series solutions of ordinary differential equations around regular and regular sin-gular points, special functions ... E.Kreyszig,Advanced Engineering Mathematics,8thed.,JohnWiley,NewYork,1999. [9] ... MATHEMATICS ... Advanced Engineering Mathematics, 9th Edition By Erwin Kreyszig Mathematics-15 • CRN 21475 APPLIED CALCULUS I Hardcover, 1088 pages, ©2013, ... TEXTBOOK: Advanced Engineering Mathematics, 9th edition (2006), by Erwin Kreyszig, Wiley GRADE: Midterm I = 30%, Midterm II = 30%, Final = 40%. If I have a grader, then homework will also be part of your grade. That means that I will have to rearrange the grading policy. This material is covered in the book: Erwin Kreyszig, Advanced Engineering Mathematics (9th edition) Chapter 24 (not including sections 4 and 9). (In the 8th edition this was Chapter 22 and the text of that chapter is almost identical in the two editions. Advanced Engineering Mathematics, 9th Edition By Erwin Kreyszig Advanced Macroeconomics, Solutions Manual Romer Advanced Modern Engineering Mathematics, 3rd Edt by Glyn James ... • Book: Erwin Kreyszig, Advanced Engineering Mathematics, 9th Edition, John Wiley and Sons, New York, 2006. • Syllabus: Chapters 9, 10, 13, 14, 15.1–4, 16. (See the table of contents for the book for a word description of the topics). Textbook: "Advanced Engineering Mathematics", 9th Edition, by Erwin Kreyszig. Course objectives: ... and how to develop and use advanced fast numerical techniques. Team work is encouraged for these optional research projects. In this course, ... Advanced Engineering Mathematics by Erwin Kreyszig, 9th ed Fundamentals of Physics, 8th Edition Halliday, ... 2nd Edition ... An Applied Approach, 9th Edition 187 Sullivan Mathematics: An Applied Approach, 8th Edition ... Edition No 0 Advanced Engineering Physics Parthasarathy ,Harish 1 ... Engineering Mathematics, 9th Edition By Erwin Kreyszig Solution to Advanced Macroeconomics 1996 romer Advanced Macroeconomics, Solutions Manual 1996 Romer ... Advanced Engineering Mathematics, 9th Edition, Wiley, 2006, paperback. DIGITAL solutions manual to Advanced Engineering Mathematics 8Ed Erwin Kreyszig solutions manual to Advanced Engineering Mathematics by Erwin Kreyszig, 9th ed solutions manual to Advanced Engineering Mathematics, 6th Edition by Peter V. O'Neil ... 528 128 Advanced Engineering Mathematics Erwin Kreyszig 9th John Wiley & Sons . Old ... 528 128 Advanced Engineering Mathematics Erwin Kreyszig 10th Wiley ... co. 2002 Advanced Mathematics 2nd Edition* Saxon, ... Required book: Erwin Kreyszig Advanced Engineering Mathematics, 9th edition Course outline: 1. Review of First and Second order ordinary differential equations 2. Series solutions of ordinary differential equations (a) Legendre polynomials TEXT: Advanced Engineering Mathematics, by Erwin Kreyszig (9th edition). HOMEWORK: A list of homework problems will be suggested at the end of each section. You will be asked to turn in a selected of them at the end of each chapter. Erwin Kreyszig, Advanced Engineering Mathematics, 8th edition, John Wiley and Sons, 1999. ... MP2007 Mathematics 4 Advanced Engineering Mathematics» by Erwin Kreyszig, 9th Edition, John Wiley & Sons, Inc. 2006. Text: Erwin Kreyszig Advanced Engineering Mathematics, 9e ... Advanced Engineering Mathematics, 9e w/ WileyPLUS ... OR The Wiley Desktop Editions. Kreyszig ‐ Advanced Engineering Mathematics 9th ... Advanced Engineering Mathematics, 9th Edition By Erwin Kreyszig ... Electric Circuits 8th edition Nilsson Riedel Electric Circuits, ... ?Electric Circuits, 7th Edition by Carl Nilsson Solution Manual ... 9th Edition By Erwin Kreyszig Solution to Advanced Macroeconomics 3rd edition romer Advanced • Discrete Mathematics and Its Applications, 4th Edition, Kenneth H. Rosen, AT&T . ... • Advanced Engeneering Mathematics, 7th Edition, Erwin Kreyszig, Wiley, John & Sons, Advanced mathematics (C) MA504 4 MATHS 4 - - ... 1. Advanced Engineering Mathematics, 9th Edition, Erwin Kreyszig, Wiley. 2. ... Third Edition B.D. Agarwal, L.J. Broutman and K. Chandrashekhara John Wiley & sons, New York, 2006 5. 9th Edition − (Howard Anton, Chris Rorres) ... 220. E−Book−Halliday − Fundamentals of Physics, 8th Edition 221. KREYSZIG (2006) advanced engineering mathematics, 9ed, instructor manual 222. ... 236. advanced_engineering_mathematics_8th_erwin_kreyszig_solution 237. El−Batanouny and ... advanced mathematics which are currently most important in the ... (the chapters’ numbers are from Kreyszing, 9th ed.): (a) Linear Algebra (Ch. 7, 8) (b) Vector Calculus (Ch. 9, 10) (c) Ordinary ... Advanced Engineering Mathematics, by Erwin Kreyszig, John Wiley & Sons, 9th ... A First Course In Probability 7th Edition By Sheldon Ross Advanced Engineering Mathematics 9th Edition By Erwin Kreyszig An Introduction To Signals And Noise In Electrical ... Numerical Methods 3th Edition By Doug Faires, Dick Course# Course# Title Author Edition Publisher 051 006P No Textbook 058 006P No ... 118 18 MathPortal for For All Practical Purposes Comap 9th W H Freeman (access card) ... 528 128 Advanced Engineering Mathematics Erwin Kreyszig 10th Wiley Advanced Engineering Mathematics, 6th Edition Peter V. O'Neil − University of Alabama, Birmingham Advanced Engineering Mathematics, 9th Edition By Erwin Kreyszig Advanced Financial Accounting, 6th edition, by Baker, Lembke, and King solution manual TEXT: Kreyszig, Erwin, Advanced Engineering Mathematics, 9th edition, John Wiley and Sons Inc., New York, 2006. Author: Information Resources Created Date: ... 9th Edition, Chapters 1-7, and 12. 2. Advanced Engineering Mathematics by Erwin Kreyszig, John Wiley & Sons, 8th Edition, Chapters 6, 7, 17, 18.1-18.7 and 22. 3. Equivalent courses: MA 101 and MA 104 Problems: The following problems are from Kreyszig. Problem set Problems 6.3 4, 8 6.4 11, 14 ... ... (9th Edition) By James W. Nilsson, Susan Riedel Electric Circuits ... Advanced Engineering Mathematics, 9th Edition By Erwin Kreyszig ... Electric Circuits 8th edition Nilsson Riedel Electric Circuits, Nillson, ...
{"url":"http://ebookilys.org/pdf/advanced-mathematics-9th-edition-by-erwin-kreyszig","timestamp":"2014-04-17T19:27:34Z","content_type":null,"content_length":"46620","record_id":"<urn:uuid:4ba39a94-6e3b-442d-8501-5dcc2aecec7c>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
Not Even Wrong ... is a very amusing new blog by Peter Woit, a mathematical physicist at Columbia University, currently devoted to bashing string theory, and more broadly to interesting topics in fundamental physics, like symmetry-breaking in the vacuum. (I am using "amusing" and "interesting" in senses that only apply to people who've studied quantum field theory and are John Baez groupies.) Oddly, Woit does not provide any link on the blog to his wonderful polemic "String Theory: An Evaluation" (physics/0102051), which is accessible to the laity; so I will.
{"url":"http://vserver1.cscs.lsa.umich.edu/~crshalizi/weblog/000215.html","timestamp":"2014-04-21T04:38:05Z","content_type":null,"content_length":"3038","record_id":"<urn:uuid:8042d212-517e-425c-8ff0-6465c9defc42>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
Commerce City Precalculus Tutor Find a Commerce City Precalculus Tutor My love of teaching started when I was in elementary school. I began tutoring while in middle school and have found a way to keep teaching and learning a central part of my life since then. I worked as a math tutor while in college, and have an adult teaching credential and worked with GED, ABE, and vocational students for 6+ years. 20 Subjects: including precalculus, geometry, accounting, GED NEED HELP WITH MATH OR PHYSICS?? I grew up in Denver, attended East High School, and graduated at the top of my class in 2009. I graduated from University of California, Santa Barbara with a Bachelor of Science in Physics in 2013. I absolutely love learning, and want to share my excitement by teaching others. 13 Subjects: including precalculus, physics, calculus, geometry ...However, you often use the same techniques you used in arithmetic to solve algebra 1 problems! So really, algebra 1 is a lot like the kinds of things you have already worked with - it just "looks" different. Algebra 1 topics include setting up and solving word problems, finding reciprocals, sim... 18 Subjects: including precalculus, calculus, geometry, statistics ...In 2008, I took a position with the tutoring company 'Atlanta Tutors' where I worked one-on-one with High school and College students preparing students for exams, helping with homework, and lots of SAT, ACT, and GRE math test preparation. I prefer one-on-one tutoring and enjoy engaging my stude... 16 Subjects: including precalculus, chemistry, calculus, physics ...I am truly passionate about teaching and tutoring mathematics or physics. I believe that to get strong in a challenging subject such as mathematics, a student needs one-on-one tutoring. I see tutoring as a partnership between a tutor and a student to reach the latter grade expectation. 26 Subjects: including precalculus, calculus, geometry, ASVAB
{"url":"http://www.purplemath.com/commerce_city_co_precalculus_tutors.php","timestamp":"2014-04-21T02:25:52Z","content_type":null,"content_length":"24308","record_id":"<urn:uuid:e0f3306d-2cc5-4264-b562-914e743fe690>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
No idea how to find this numerical value from this first order differential equation March 19th 2010, 01:05 PM No idea how to find this numerical value from this first order differential equation Here is the question (and Answer under "Differential equation solution"): "Find the solution of the differential equation that satisfies the given initial condition." dy&#x2f;dx &#x3d; x&#x2f;y, y&#x28;0&#x29; &#x3d; -3 - Wolfram|Alpha I'm ok when it involves only variables but when it gives me y(0) = -3, it throws me off. My work: dy/dx = x/y integral of ydy = integral of xdx 1/2 * y^2 = 1/2 * x^2 y = x That makes no sense. Can someone please explain to me what the question is really asking as well as help me solve it? I get confused because it gives me numbers and the answer is in variables. Thanks in advance! March 19th 2010, 03:20 PM you forget the integration constant, i mean, you must to have: $y^2=x^2+k$ and whit y(0)=-3 you can obtain value of $k$ March 19th 2010, 05:08 PM $\int y \, dy = \int x \, dx$ $\implies \tfrac{1}{2}y^2+C_1 = \tfrac{1}{2}x^2+C_2$ You can just combine the two constants and get $\implies \tfrac{1}{2}y^2 = \tfrac{1}{2}x^2+C$ Then you can simplify further and use the initial condition to solve for the constant. (This is basically repeating what Nacho said, but I just wanted to make it clear why there is only one constant instead of two.) March 19th 2010, 05:58 PM Ok so I get y = +/- sqrt(x^2 + 9) and the answer is y = -sqrt(x^2 + 9) my current question is...why do I choose the negative? March 19th 2010, 06:13 PM Does the positive solution satisfy y(0) = -3? March 19th 2010, 08:06 PM I see. Thanks!
{"url":"http://mathhelpforum.com/calculus/134597-no-idea-how-find-numerical-value-first-order-differential-equation-print.html","timestamp":"2014-04-21T12:36:17Z","content_type":null,"content_length":"6993","record_id":"<urn:uuid:40e692ca-b80d-4760-beec-5bddc2400a00>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
I'm trying to mess with functors and the way I want it to work is that when I create a functor it will automatically add itself to an array. The attached code demonstrates what I mean. The problem is that its a bit unsatisfactory and I'd like to improve it. In the constructor of TClassA Functors<TClassA, std::string> *Functor = new Functors<TClassA, std::string>(this, &TClassA: So now I every time I want a new functor I have to change the name 4 spots ;/ Is there some way I can extra the name so I don't have to do this? (So essentially the class name gets inserted above, or at the very least it some othe rmethod that would work) My problem is that I really just want to create a set of "transformations" of a string(well, for sake of argument) but have those transformations be put in an array. Its very simple to do with functions and function pointers but I figured that I would try to do it with functors. Actually the way it will work is that a vector or list will contain the functors and the transformations will have a probability associated with them which will determine how likely they are called(or applied) for example, say I have 3 transformations of a alpha-numeric string, T1 = doubles every character, T2 = removes the first 3 characters, T3 = Addes x to every character in the set {s,t,v} T1 has probability of 1/2 being called, T2 has probability 1/4 and T3 probability 1/4. So I have a function that picks the functors with their probability and applies them to a string transforming it. (essentially just picking a random number between 0 and 1 and if it falls within 1/2 then it calls T1, if 1/2 to 3/4 it calls T2 else T3) I can then repeat this as many times as I'd like. The issue is that I will need to create a lot of transformation functions and I'm just curious if I can simplify the method I'm using) template<class type> class Functor virtual type operator()(type &)=0; // call using operator template <class TClass, class type> class Functors : public Functor<type> type (TClass::*fpt)(type &); // pointer to member function TClass* pt2Object; // pointer to object Functors(TClass* _pt2Object, type (TClass::*_fpt)(type &)) pt2Object = _pt2Object; // override operator "()" virtual type operator()(type &v) return (*pt2Object.*fpt)(v); std::vector< Functor<std::string>* > vTab; class TClassA{ int prob; TClassA(int t) prob = t; Functors<TClassA, std::string> *Functor = new Functors<TClassA, std::string>(this, &TClassA: std::string Display(std::string &v) std::string str; std::stringstream out; out << prob; str = out.str(); v.insert(0, str); return v;
{"url":"http://www.velocityreviews.com/forums/t512260-functors.html","timestamp":"2014-04-20T04:09:49Z","content_type":null,"content_length":"40675","record_id":"<urn:uuid:422d84ff-3d91-46df-bf90-79b5f204cb99>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
Clyde Hill, WA Calculus Tutor Find a Clyde Hill, WA Calculus Tutor ...However, I believe I can help you do that. In other words, instead of teaching you how to write a paper on Alexander the Great, I will instead teach you simply how to write a good paper! I firmly believe that the combination of my strong knowledge base, practical experience in the History and E... 16 Subjects: including calculus, reading, chemistry, biology Background: I recently graduated from the University of Washington with a B.S. degree in chemistry. Throughout my college career I had a special focus in mathematics. Outside of school, current events, video games, and the financial markets catches most of my attention--with the exception of my 5 month old dog, Misha. 17 Subjects: including calculus, chemistry, physics, geometry ...High school chemistry student of the year as recognized by the American Chemical Society in 2003 I use Biostatistics in my everyday job as a research scientist at the University of Washington. Additionally, I have taken undergraduate and graduate level Biostatistics courses with success. I have a Ph.D. in Immunology. 17 Subjects: including calculus, chemistry, physics, geometry ...I have two years experience teaching college algebra at Eastern Washington University. While I love all things math, I understand that it is not everyone's cup of tea, and I strive to make the material as approachable as any other subject for my students. I will work with students to ascertain ... 10 Subjects: including calculus, physics, statistics, geometry ...I have helped students at University Tutoring Service and Central Test Prep in Seattle and at Boston Global Education in Westborough, MA. I am committed to helping students gain a deep understanding of the material they are studying, not just getting through their current homework assignment or ... 18 Subjects: including calculus, geometry, GRE, algebra 1 Related Clyde Hill, WA Tutors Clyde Hill, WA Accounting Tutors Clyde Hill, WA ACT Tutors Clyde Hill, WA Algebra Tutors Clyde Hill, WA Algebra 2 Tutors Clyde Hill, WA Calculus Tutors Clyde Hill, WA Geometry Tutors Clyde Hill, WA Math Tutors Clyde Hill, WA Prealgebra Tutors Clyde Hill, WA Precalculus Tutors Clyde Hill, WA SAT Tutors Clyde Hill, WA SAT Math Tutors Clyde Hill, WA Science Tutors Clyde Hill, WA Statistics Tutors Clyde Hill, WA Trigonometry Tutors Nearby Cities With calculus Tutor Beaux Arts Village, WA calculus Tutors Bellevue, WA calculus Tutors Duvall calculus Tutors Houghton, WA calculus Tutors Hunts Point, WA calculus Tutors Kirkland, WA calculus Tutors Medina, WA calculus Tutors Mercer Island calculus Tutors Monroe, WA calculus Tutors Redmond, WA calculus Tutors Sammamish calculus Tutors Seahurst calculus Tutors Snohomish calculus Tutors Woodway, WA calculus Tutors Yarrow Point, WA calculus Tutors
{"url":"http://www.purplemath.com/clyde_hill_wa_calculus_tutors.php","timestamp":"2014-04-20T23:45:29Z","content_type":null,"content_length":"24181","record_id":"<urn:uuid:083d4788-242b-4295-be10-62e5942f618d>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
Twin Prime Conjecture - Visual Proof September 27th 2009, 04:45 PM Twin Prime Conjecture - Visual Proof There are infinitely many primes p such that p + 2 is also prime. First off, I'm a noob at maths and always reverse my logic! Now, a little glimpse into my madness. ;) I saw this unsolved twin prime conjecture several days ago and tried to solve it. After 15 mins I constructed the above visual proof. Here's how I thought it should work: Let line P represent the Infinitude of Primes. To be a member of P, the number must be prime. All other numbers are excluded. Next, pick a prime p and draw a 2D lattice L with sides sqrt(2). Notice how P is the on the lattice diagonal. At this stage, I knew at least one lattice square would intersect P at two distinct primes, say p and p+2 (if it exists!?). Now to prove that p+2 exists, I decided to use Polya's 2D random walk which proves that any point is reachable on a 2D lattice. I intentionally allowed the lattice point to represent p+2. So, let the random walk (shown in red) proceed and eventually it'll reach the lattice prime point p+2. Thus, if it's possible to reach p+2 from p, then p+2 must exist and is prime since it lies on P. Q.E.D. Okay, where did I go wrong? September 27th 2009, 08:39 PM Bruno J. Are you serious? I don't see you using a single property of primes. Try modifying the argument to show that there are infinitely many primes $p$ for which $p+1$ is also prime; if you succeed, and I don't doubt you will, then certainly your proof is flawed. September 28th 2009, 04:47 AM Hi Bruno, p+1 would not be prime. Isn't p+2 the smallest except for 2,3. I do use Euclid's theorem that there are an infinite number of primes. ;) Think of it this way ... "line" P is simply a representation of a prime boundary P={2,3,5,7,11,13,...}, so that superimposing a 2D grid will cross that boundary at two prime points, p and p+2 (if it exists?) The primary goal is to prove that p+2 exists. I tried to prove p+2 exists by finding a path from point p to p+2 by using a random walk by Polya. The key "trick" is to align prime p and a lattice point to create a "prime lattice point" p+2 so that I could say p+2 is prime (since it lies on P) and also simultaneously state that p+2 exists as it's a lattice point. Combining them together, p+2 is prime and exists! ;) Perhaps my proof is flawed by a construction argument? Thanks for the reply. September 28th 2009, 04:50 AM mr fantastic Hi Bruno, p+1 would not be prime. Isn't p+2 the smallest except for 2,3. I do use Euclid's theorem that there are an infinite number of primes. ;) Think of it this way ... "line" P is simply a representation of a prime boundary P={2,3,5,7,11,13,...}, so that superimposing a 2D grid will cross that boundary at two prime points, p and p+2 (if it exists?) The primary goal is to prove that p+2 exists. I tried to prove p+2 exists by finding a path from point p to p+2 by using a random walk by Polya. The key "trick" is to align prime p and a lattice point to create a "prime lattice point" p+2 so that I could say p+2 is prime (since it lies on P) and also simultaneously state that p+2 exists as it's a lattice point. Combining them together, p+2 is prime and exists! ;) Perhaps my proof is flawed by a construction argument? Thanks for the reply. See reply #2. Thread closed.
{"url":"http://mathhelpforum.com/number-theory/104695-twin-prime-conjecture-visual-proof-print.html","timestamp":"2014-04-21T06:52:36Z","content_type":null,"content_length":"8135","record_id":"<urn:uuid:a18c9706-34db-43c0-ab84-dc247b5074b2>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
Epc is an error-propagating calculator, designed to help you evaluate formulas that are based on uncertain quantities. For example, the input $x = 10 +- 1; calculate 2 * $x; produces an indication that the result is 20 +- 2, as expected, while the input $pi = 3.14159 +- 1; calculate sin($pi/2); tells you something that might be harder for you to calculate yourself, i.e. that the error bar ranges from 0.78 to 1.0. In trying to get the gist of what epc provides, note that traditional estimates of error bars have difficulty with the sin function, which cannot exceed 1. Also, note that the lower-limit on the error bar on sin(x) is not the same as the sin of the lower limit on x.) Epc works by monte-carlo calculations of a given formulae, randomly perturbing parameters with a distribution matched to the error bars. The estimate of the result derives from analysis of the probability density function of the results of the trial calculations. For details of how epc works, and how to use it, see the online manual. Epc is written in Perl, although if it becomes more popular, it might be rewritten as a compiled application, which would make it 1 or 2 orders of magnitude faster. Copyright © 2002 by Dan Kelley This material may be distributed only subject to the terms and conditions set forth in the GNU Publication License
{"url":"http://epc.sourceforge.net/","timestamp":"2014-04-21T04:32:16Z","content_type":null,"content_length":"9711","record_id":"<urn:uuid:039d2d72-8cfd-4d61-b357-355004c27efb>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 50 - Information and Computation , 1994 "... We investigate extensions of temporal logic by connectives defined by finite automata on infinite words. We consider three different logics, corresponding to three different types of acceptance conditions (finite, looping and repeating) for the automata. It turns out, however, that these logics all ..." Cited by 250 (55 self) Add to MetaCart We investigate extensions of temporal logic by connectives defined by finite automata on infinite words. We consider three different logics, corresponding to three different types of acceptance conditions (finite, looping and repeating) for the automata. It turns out, however, that these logics all have the same expressive power and that their decision problems are all PSPACE-complete. We also investigate connectives defined by alternating automata and show that they do not increase the expressive power of the logic or the complexity of the decision problem. 1 Introduction For many years, logics of programs have been tools for reasoning about the input/output behavior of programs. When dealing with concurrent or nonterminating processes (like operating systems) there is, however, a need to reason about infinite computations. Thus, instead of considering the first and last states of finite computations, we need to consider the infinite sequences of states that the program goes through... - Annals of Pure and Applied Logic , 1991 "... The mathematical framework of Stone duality is used to synthesize a number of hitherto separate developments in Theoretical Computer Science: • Domain Theory, the mathematical theory of computation introduced by Scott as a foundation for denotational semantics. • The theory of concurrency and system ..." Cited by 231 (10 self) Add to MetaCart The mathematical framework of Stone duality is used to synthesize a number of hitherto separate developments in Theoretical Computer Science: • Domain Theory, the mathematical theory of computation introduced by Scott as a foundation for denotational semantics. • The theory of concurrency and systems behaviour developed by Milner, Hennessy et al. based on operational semantics. • Logics of programs. Stone duality provides a junction between semantics (spaces of points = denotations of computational processes) and logics (lattices of properties of processes). Moreover, the underlying logic is geometric, which can be computationally interpreted as the logic of observable properties—i.e. properties which can be determined to hold of a process on the basis of a finite amount of information about its execution. These ideas lead to the following programme: - FUNDAMENTAL APPROACHES TO SOFTWARE ENGINEERING, NUMBER 1783 IN LNCS , 2000 "... KIV is a tool for formal systems development. It can be employed, e.g., – for the development of safety critical systems from formal requirements specifications to executable code, including the verification of safety requirements and the correctness of implementations, – for semantical foundations ..." Cited by 49 (26 self) Add to MetaCart KIV is a tool for formal systems development. It can be employed, e.g., – for the development of safety critical systems from formal requirements specifications to executable code, including the verification of safety requirements and the correctness of implementations, – for semantical foundations of programming languages from a specification of the semantics to a verified compiler, – for building security models and architectural models as they are needed for high level ITSEC [7] or CC [1] evaluations. Special care was (and is) taken to provide strong proof support for all validation and verification tasks. KIV can handle large scale formal models by efficient proof techniques, multi-user support, and an ergonomical user interface. It has been used in a number of industrial pilot applications, but is also useful as an educational tool for formal methods courses. Details on KIV can be found in [9] [10] [11] and under http://www.informatik.uni-ulm.de/pm/kiv/. , 1992 "... this paper is to provide a framework in which one particular class of social activity can be formalised and ultimately analysed: namely that in which a group of autonomous agents (at least two) decides they wish to work together as a team to solve a common problem. A comprehensive theory describing ..." Cited by 39 (5 self) Add to MetaCart this paper is to provide a framework in which one particular class of social activity can be formalised and ultimately analysed: namely that in which a group of autonomous agents (at least two) decides they wish to work together as a team to solve a common problem. A comprehensive theory describing this class of social interaction would need to cover at least the following aspects: when to initiate team activity, how to go about assembling the team, how to plan and distribute work within the team, how to behave once team activity has been initiated and how to complete team activity. The framework described herein defines the prerequisites for such action and also prescribes how agents should behave (both in their own problem solving and with respect to other group members) once the problem solving has been established. Typically in a community of autonomous agents, one of the primary motives for joint action is when no individual is capable of achieving a desired objective alone; only by combining and coordinating with others can the target be reached. Joint action is usually a reciprocal process in which participating agents augment their objectives and problem solving to comply with those of others - hence it is a fairly sophisticated form of cooperation. It requires greater knowledge, awareness and reflection by an agent both with respect to its own problem solving objectives and about their compatibility with the objectives of others, than simpler forms of social interaction (such as task and result sharing [19]). Joint action, by definition, requires an objective the group wishes to achieve - it is the glue which binds the team together. As a consequence of the autonomous nature of the agents, team members will only participate if they can derive some benefit from ... - KORSO: METHODS, LANGUAGES, AND TOOLS FOR THE CONSTRUCTION OF CORRECT SOFTWARE – FINAL REPORT, LNCS 1009 , 1995 "... This paper presents a particular approach to the design and verification of large sequential systems. It is based on structured algebraic specifications and stepwise refinement by program modules. The approach is implemented in Kiv (Karlsruhe Interactive Verifier), and supports the entire desig ..." Cited by 34 (6 self) Add to MetaCart This paper presents a particular approach to the design and verification of large sequential systems. It is based on structured algebraic specifications and stepwise refinement by program modules. The approach is implemented in Kiv (Karlsruhe Interactive Verifier), and supports the entire design process starting from formal specifications and ending with verified code. Its main characteristics are a strict decompositional design discipline for modular systems, a powerful proof component, and an evolutionary verification model supporting incremental error correction and verification. We present the design methodology for modular systems, a feasible verification method for single modules, and an evolutionary verification technique based on reuse of proofs. We report on the current performance of the system, compare it to others in the field, and discuss future perspectives. - In , 1996 "... Abstract. In this paper we present a formal framework for social agents. The social agents consist of four components: the information component (containing knowledge and belief), the action component, the motivational component (where goals, intentions, etc. play arole) and the social component (co ..." Cited by 30 (6 self) Add to MetaCart Abstract. In this paper we present a formal framework for social agents. The social agents consist of four components: the information component (containing knowledge and belief), the action component, the motivational component (where goals, intentions, etc. play arole) and the social component (containing aspects of speech acts and relations between agents). The main aim of this work was to describe all componentsin a uniform way, such that it is possible to verify each component separately but also formally describe the interactions between the different components. E.g. the effect of a speech act on the believes of an agent or on the commitment to a goal it pursues. 1 - JUCS , 2001 "... Abstract: This paper describes a generic proof method for the correctness of refinements of Abstract State Machines based on commuting diagrams. The method generalizes forward simulations from the refinement of I/O automata by allowing arbitrary m:n diagrams, and by combining it with the refinement ..." Cited by 28 (6 self) Add to MetaCart Abstract: This paper describes a generic proof method for the correctness of refinements of Abstract State Machines based on commuting diagrams. The method generalizes forward simulations from the refinement of I/O automata by allowing arbitrary m:n diagrams, and by combining it with the refinement of data structures. , 1997 "... The paper presents a logic for action theory based on a modal language, where modalities represent actions. Persistency is achieved by using a nonmonotonic formalism which maximizes persisitency assumptions. The problem of ramification is tackled by introducing a modal causality operator which i ..." Cited by 27 (13 self) Add to MetaCart The paper presents a logic for action theory based on a modal language, where modalities represent actions. Persistency is achieved by using a nonmonotonic formalism which maximizes persisitency assumptions. The problem of ramification is tackled by introducing a modal causality operator which is used to represent causal rules. Assumptions on the value of fluents in the initial state allow to reason with incomplete initial states and to do postdiction. The action theory can also deal with non-minimal change and nondeterministic actions. 1 Introduction Reasoning about action and change is one of the main topics which must be addressed in building intelligent agents. Among the various approaches to reasoning about actions, one of the most popular is still the situation calculus. The situation calculus represents states of the world (situations) as sequences of actions, and fluents are relations whose truth values vary from state to state. The situation calculus is formulated in - JOURNAL OF SEMANTICS , 2000 "... There is currently a broad interest in dialogue acts and dialogue act taxonomies, and new uses, taxonomies, and standardization efforts continue to be proposed. This paper presents a discussion of issues that must be addressed in order to facilitate the shared understanding and use of taxonomies. ..." Cited by 27 (4 self) Add to MetaCart There is currently a broad interest in dialogue acts and dialogue act taxonomies, and new uses, taxonomies, and standardization efforts continue to be proposed. This paper presents a discussion of issues that must be addressed in order to facilitate the shared understanding and use of taxonomies. The discussion is framed in terms of 20 questions, the answers to which will help make the meanings of taxonomy elements more clear to different communities of users.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=90740","timestamp":"2014-04-17T14:16:10Z","content_type":null,"content_length":"37755","record_id":"<urn:uuid:eded1a67-c9f0-4e3c-b0c8-b6a72b8c2bf1>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
Trig proof - urg It is given that: $<br /> Sin7\theta = 7Sin{\theta} - 56Sin^3\theta + 112Sin^5\theta - 64Sin^7\theta<br />$ Hence, prove that the only real solution of the equation $Sin 7 \theta = 7Sin\theta$ are given by $<br /> \theta = n\pi<br />$ Where n is an integer. I am really stuck with this Any help would be greatly appreciated.
{"url":"http://mathhelpforum.com/trigonometry/88836-trig-proof-urg.html","timestamp":"2014-04-18T01:41:24Z","content_type":null,"content_length":"43528","record_id":"<urn:uuid:bbbc2a23-aadc-4fe1-aafc-45af87ff8d73>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
"physical" degrees of freedom Starting with the Lagrangian for EM, it looks like there are four degrees of freedom for the four-vector potential. But one term is not physical in that it can be expressed completely in terms of the other degrees of freedom (so it is not a freedom itself), and there is another "freedom" that is not physical because it doesn't effect the equations of motion (the "gauge" freedom). For interactions with higher symmetries (like the weak force SU(2), or the strong force SU(3)), is there an easy "symmetry argument" for how many of the components of their "potentials" will actually be physical freedoms? For example, there are 8 gluons. How many physical degrees of freedom are there actually amongst these 8?
{"url":"http://www.physicsforums.com/showthread.php?p=2645126","timestamp":"2014-04-19T04:30:12Z","content_type":null,"content_length":"23418","record_id":"<urn:uuid:45bc790e-f546-4906-9624-32d7bbb26af9>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
Venn Diagram - Choose One of Three Options Date: 01/24/99 at 06:44:09 From: Remi Subject: Venn Diagrams In a computer science class of 150, it was found that when students filled out their option forms each student chose at least one option from the list: A) Database Design B) System Building C) Computer Methods When the registers for each of the options were assembled, there were 102 names on the database design option, 70 on the system building option, and 40 on computer methods. By cross-checking between the lists in pairs, it was found that 25 were doing database design and system building, 27 were taking database design and computer methods, and 30 were studying computer methods and system building. A) Find out how many students were taking all three options by using a Venn diagram and all the information provided. B) Use your completed Venn diagram to find out how many chose formal methods only. I drew 3 circles into which I put all the numbers of the students who were doing two courses. For example 30 were doing CM and SB, 27 were doing DD and CM, 25 were doing DD and SB. I then used those numbers compared with the first set of numbers to find out what those numbers would be. For example in the DD circle, there were the numbers 25 and 27 in half circles (as they were combined with other courses), and I knew that the overall total for DD was 102. Therefore, I put 50 in the circle as 25 + 27 + 50 = 102. I did this for the SB circle too - but I am not sure what goes in the CM circle, as the overall total for that circle is 40, but with the combined figures already there it totals 57. Do I do an x-17 in there? Or do I do a 3, as that would bring the overall student total to 150, as it should be. Obviously I cannot answer either of the two questions, as I am unsure what goes in each circle. Date: 01/24/99 at 08:28:00 From: Doctor Anthony Subject: Re: Venn Diagrams Let D = Design, S = System Building, and C = Computer methods. Draw the Venn diagram as described and let x = the number who study all three. Then the rest of the overlap between D and S is 25-x, between D and C is 27-x, and between S and C is 30-x. Thus: those in D only = 102 - 25 - (27-x) = 50+x those in S only = 70 - 25 - (30-x) = 15+x those in C only = 40 - 27 - (30-x) = -17+x From this we can see that x > 17, but to find x we now use the fact that the total of all students is 150. That is: 102 + 15+x + 30-x -17+x = 150 130 + x = 150 x = 20 And so 20 were taking all three subjects. Since the Venn diagram is now complete, you can answer any question. For example, Those taking D only are 50+x = 70 Those taking S only are 15+x = 35 Those taking C only are -17+x = 3 - Doctor Anthony, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/52435.html","timestamp":"2014-04-20T09:20:40Z","content_type":null,"content_length":"7923","record_id":"<urn:uuid:d408924a-339f-4e17-845e-cbb78e6a2995>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
Continuity definiton problem September 26th 2008, 08:56 AM Continuity definiton problem Show that $f: A \rightarrow R^{m}$ is continuous at $x_0$ iff for every $\epsilon > 0$, there exist $\delta > 0$ such that $|| x-x_0 || \leq \delta$, we have $||f(x)-f(x_0)|| \leq \epsilon$ So I can replace the definition < with $\leq$? September 26th 2008, 10:08 AM I am not sure that I follow your question. Do note that the tradition definition is for “<”. However, it is also $\left( {\forall \varepsilon > 0} \right)\left[ {\frac{\varepsilon }<br /> {2} < \varepsilon } \right]$, that is for all. So $\left\| {f(x) - f(x_0 )} \right\| \leqslant \frac{\varepsilon }<br /> {2} \Rightarrow \quad \left\| {f(x) - f(x_0 )} \right\| < \varepsilon$ September 27th 2008, 09:26 AM I asked the professor today and he said this problem is suppose to show that the two definitions for continuity are equiv. Well, from the last post I understand how to go from $\leq$ to < How to do this the other way around is killing me... Any hints? Thanks.
{"url":"http://mathhelpforum.com/calculus/50718-continuity-definiton-problem-print.html","timestamp":"2014-04-21T13:43:11Z","content_type":null,"content_length":"8411","record_id":"<urn:uuid:ff8d8e3d-d056-4808-bbf8-b15f97252df7>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Find Absolute Extrema on a Closed Interval Every function that’s continuous on a closed interval has an absolute maximum value and an absolute minimum value (the absolute extrema) in that interval — in other words, a highest and lowest point — though there can be a tie for the highest or lowest value. A closed interval like [2, 5] includes the endpoints 2 and 5. An open interval like (2, 5) excludes the endpoints. Finding the absolute max and min is a snap. All you do is compute the critical numbers of the function in the given interval, determine the height of the function at each critical number, and then figure the height of the function at the two endpoints of the interval. The greatest of this set of heights is the absolute max; and the least, of course, is the absolute min. Here’s an example: 1. Start by finding the critical numbers of h in the open interval, 2. Compute the function values (the heights) at each critical number. 3. Determine the function values at the endpoints of the interval. So, from Steps 2 and 3, you’ve found five heights: 1.5, 1, 1.5, –3, and 1. The largest number in this list, 1.5, is the absolute max; the smallest, –3, is the absolute min. an endpoint extremum. The graph of h(x) = cos (2x) – 2 sinx. A couple observations: However, if you only want to find the absolute extrema on a closed interval, you don’t have to pay any attention to whether critical points are local maxes, mins, or neither. And thus you don’t have to bother to use the first or second derivative tests. All you have to do is determine the heights at the critical numbers and at the endpoints and then pick the largest and smallest numbers from this list. Second, the absolute max and min in the given interval tell you nothing about how the function behaves outside the interval.
{"url":"http://www.dummies.com/how-to/content/how-to-find-absolute-extrema-on-a-closed-interval.navId-403862.html","timestamp":"2014-04-16T11:10:40Z","content_type":null,"content_length":"55215","record_id":"<urn:uuid:698b31de-9978-4c34-8a3b-0da3c8b6f3ed>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
A Particle Of Mass M Is At Rest At X=1. The Potential ... | Chegg.com A particle of mass M is at rest at x=1. The potential energy is given by U=-Ax+C. Assume that the force associated with U is the only force acting on the particle. a) Find the force acting on the block. b) Find the acceleration of the block.
{"url":"http://www.chegg.com/homework-help/questions-and-answers/particle-mass-m-rest-x-1-potential-energy-given-u-ax-c-assume-force-associated-u-force-act-q3056090","timestamp":"2014-04-18T14:59:45Z","content_type":null,"content_length":"20427","record_id":"<urn:uuid:08d104a1-5fa4-40e9-90c3-fc6e0e9c88ab>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Queens, NY Algebra 2 Tutor Find a Queens, NY Algebra 2 Tutor ...I have Undergraduate, Masters and PhD degrees in Computer Science and I was an Assistant Professor of Computer Science. I went to High School in Israel and have lived there at other times. I am fluent in spoken Hebrew and very advanced in Written and Typed Hebrew. 36 Subjects: including algebra 2, English, reading, writing ...Having studied mathematics at a variety of levels (high school and college) and institutions (Princeton and Imperial College, London) I'm very comfortable with calculus. I can tutor students in any form of calculus from high school, to AP (either AB or BC), to college. I've tutored calculus, AP... 40 Subjects: including algebra 2, chemistry, English, reading ...I have extensive tutoring experience tutoring Chemistry and for the SAT exam, AP exam and university level courses. Received A's in calculus, multivariable calculus, linear algebra, differential equations. Was a tutor for Calculus 1 in Cornell University's Department of Mathematics. 17 Subjects: including algebra 2, chemistry, algebra 1, MCAT ...I have taught children and adults phonics in the context of reading and EFL classes. I feel comfortable teaching this subject and my students have had successful results in the past. I have four years of teaching and one-one tutoring experience. 21 Subjects: including algebra 2, reading, English, calculus ...My overall grade in Russian and Russian literature in high school was 5 (the highest possible grade), and I received a Gold Medal for outstanding academic performance. I continue to actively use Russian in my everyday life. Besides, my daughter attends the School at Russian Mission in the Unite... 24 Subjects: including algebra 2, physics, GRE, Russian Related Queens, NY Tutors Queens, NY Accounting Tutors Queens, NY ACT Tutors Queens, NY Algebra Tutors Queens, NY Algebra 2 Tutors Queens, NY Calculus Tutors Queens, NY Geometry Tutors Queens, NY Math Tutors Queens, NY Prealgebra Tutors Queens, NY Precalculus Tutors Queens, NY SAT Tutors Queens, NY SAT Math Tutors Queens, NY Science Tutors Queens, NY Statistics Tutors Queens, NY Trigonometry Tutors
{"url":"http://www.purplemath.com/Queens_NY_algebra_2_tutors.php","timestamp":"2014-04-17T21:46:20Z","content_type":null,"content_length":"23977","record_id":"<urn:uuid:d9f752f1-06b3-4a7c-92f6-5072bfdb8c5c>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
exponents and powers October 3rd 2012, 02:59 AM #1 Sep 2012 exponents and powers i got this question in my exam today find x if 16^3x=32^5x-13 i keep getting it as 65/17 which i know is wrong.i really want to find my mistake before the papers are out. a friend of mine got the answer as 8 but he's not sure either.please help. thanks in advance Re: exponents and powers You are both incorrect. We are given: Since both bases are powers of 2, we may write: Now, equate the exponents, and solve for $x$. Re: exponents and powers yes i did get that part. then since the base is common, hmmmm is there anything wrong with this? cuz actually i wasn't present when this topic was being taught. Re: exponents and powers Re: exponents and powers How could I be so stupid?! Why does It always happen to me.?!! Last edited by NormalKid; October 3rd 2012 at 08:12 PM. Re: exponents and powers yes it is Re: exponents and powers Yes, that is the correct solution. As for not making simple mistakes, I will let you know if I ever stop making them! Re: exponents and powers Ah............no one is perfect. October 3rd 2012, 03:07 AM #2 October 3rd 2012, 09:18 AM #3 Sep 2012 October 3rd 2012, 09:22 AM #4 October 3rd 2012, 08:07 PM #5 Sep 2012 October 3rd 2012, 08:13 PM #6 Sep 2012 October 3rd 2012, 08:13 PM #7 October 3rd 2012, 08:14 PM #8 Sep 2012
{"url":"http://mathhelpforum.com/math-topics/204558-exponents-powers.html","timestamp":"2014-04-17T15:10:10Z","content_type":null,"content_length":"48065","record_id":"<urn:uuid:122da46c-c6a1-4537-9419-2391782a866f>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
Eiken Manga From Media Blasters: Zashono Academy is unique in all the world. It has 54,000 students and this allows for some VERY specialized clubs. There are clubs for just about everything you can think of... and then there's the Eiken club. No one seems to know just what it's about, but they do know that it's filled with some of the most bodacious babes ever to be brought together! ...and then there's Densuke. Densuke is a little awkward and clumsy and he always seems to be in the most compromising positions with all the other club members. And all he really wants is to be with the cutest girl on campus, the lovely Chiharu. Will he ever be able to get away from the uncomfortable club activities and let Chiharu know what's really on his mind? Eiken Forum Please support us by liking our page! Eiken chapters Can't find your chapter here? Try the Eiken manga at mangawall.com. Manga by MATSUYAMA Seiji Recommended Manga
{"url":"http://manga.animea.net/eiken.html","timestamp":"2014-04-17T08:37:20Z","content_type":null,"content_length":"38560","record_id":"<urn:uuid:438ec1d7-60f7-49e1-ade0-43c1c85c1bd4>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 772 compare and contrast the strengths and weaknesses of two different approaches to study the following issue: You have developed a new drug that you believe is effective in reducing appetite. As a result of some preliminary research, you have been given a small grant to design a... what does the F and g mean at the beginning of any function g(x)=10/x+7 what this things domain The formula h=48t-16t^2 gives the height, h, in feet, of an object projected into the air after t seconds. If an object is propelled upward, at what time(s) will it reach a height of 32 feet above the ground? What types of technology do people utilize to communicate? How does technology change the way people communicate? What does technology bring to communication other than the technology itself? Business Communication The way business communication is conducted [Passive voice] in the workplace experiences regular changes with the advances in technology that makes communication faster and easier. this is a optimization problem A construction company has been offered a contract for $7.8 million to construct and operate a trucking route for five years to transport ore from a mine site to a smelter. The smelter is located on a major highway, and the mine is 3 km into a he... Ethanol, C2H5OH or C2H6O, is mixed with gasoline and sold as gasohol. Given the following thermo-chemical reaction, calculate the kilograms (kg) of CO2 produced when enough ethanol is combusted to provide (or give off) 369 kJ of heat: C2H5OH (l) + 3 O2 (g) --> 2 CO2 (g) +3 ... Suppose 2 balls of same mass are held by 4 ropes, 2 for each ball. One ball is held by 2 ropes fastened at same height to something and the other at different heights. If the angles made by each respective pair of ropes(eg. right to right) are equal, are tensions in each respe... Solve: sin2xsinx - cos2xcosx = 1, x domain [-pi, pi]. Find side c if side a = 10 inches and side b = 6 inches. If Brad invests $2700 in an account paying 11% compounded quarterly. How much is in the account after 6 months? 1/3x+5(12-x)=2x what would "x" equal? how did you solve it? Why does the inequality sign change when both sides are multiplied or divided by a negative number? Does this happen with equations? Why or why not? There is no choices Suppose I reported the length f a book shelf to be 2.12 meters. What range would you expect the actual length of the bookshelf to lie in? What is the antiderivative of the cube root of (x^2)+2 What is the antiderivative of the cube root of (x^2)+2 Simple interest physics hw A 300. kg roller coaster car is traveling at a constant speed of 15 m/s over a hill with a radius of curvature of 30. m. b. What is the normal force acting on the car at the top of the hill? c. Assume that the car is at the intersection of the radius and the hill and that a fr... Amy has $200 less invested at 9% than she does at 6.5%. If the annual return from the two investments is the same, how much is invested at each rate? DEFINE AND GIVE EXAMPLES 1. Sexual reproduction 2. asexual reproduction Thank you sir:) MrMath, please do not post aany answers to this question. My daughter, thinking she would help me with hoemwork posted this. I appreciate your time but do not need any assistance as I have a good understanding of the material. I have asked the site to delete this post. Sorry f... 1- A SQL statement is needed to create a stored procedure that has one input parameter of data type int. The stored proc selects the supplierId field and the total, or sum, of all Count field values for each group of supplierId fields from the Part table. The Part table was de... Inorganic Chem If you forget to add phenolphthalein solution to a vinegar solution(after adding NaOH solution), how might you salvage the titration? A STONE MEASURED 25KG WAS ACTULLAY 20 KG WHAT WAS THE PERCENT DEVIATION OF THIS MEASURMENT SCI207:Dependence of Man on the Environment A farmer planted a field of BT 123 corn and wants to estimate the yield in terms of bushels per acre. He counts 22 ears in 1/1000 of an acre. He determines that each ear has about 700 kernels on average. He also knows that a bushel contains about 90 000 kernels on average. Wha... A ball is rolling in a straight line at a constant speed. Another ball is motionless. what do they have in common? a 20.0-L vessel at 700 K initially contains HI (g) at a pressure of 6.20 atm;at equilibrium, it is found that the partial pressure of H2 (g) is .600 atm. What is the partial pressure of HI (g) at a 2 kg block slides down a frictionless incline from point A to point B. A force(magnitude P=3N) acts on the block between A and B, as shown. Points A and B are 2 m apart. If the kinetic energy of the block at A is 10 J, and the work done by gravity is 20J, what is the KE of t... A 305 kg piano slides 4.0 m down a 30° incline and is kept from accelerating by a man who is pushing back on it parallel to the incline. The effective coefficient of kinetic friction is 0.40. (a) Calculate the force exerted by the man. (b) Calculate the work done by the ma... social studies social studies This site looks not bad, but I personally not like this blood pressure chart. It hard to understand, and i using mostly bloodpressuremagazine[dot]com to check my blood pressure readings. It nice what all in one place, like normal blood pressure and hypertension. An NFL punter at the 15-yard line kicks a football with an initial velocity of 95 feet per second at an angle of elevation of 28 degrees. Let (t) be the elapsed time since the football is kicked. a. Write the parametric equations of this problem. b. With (t)=0 at the time of t... how many grams of manganese (4) oxide are needed to make 5.6 liters of a 2.1 solution Physics concept question solved it. Physics concept question A long straight horizontal wire carries a current I = 2.10 A to the left. A positive 1.00 C charge moves to the right at a distance 4.50 m above the wire at constant speed v = 2250 m/s. What are the magnitude and the direction of the magnetic force on the charge? h t tp:// pos... Physics - Current Loop Solved it. Physics - Current Loop A current loop in a motor has an area of 0.85 cm^2. It carries a 240 mA current in a uniform field of 0.62 T. What is the magnitude of the maximum torque on the current loop? Physics- Charge in electromagnetic field Solved it. Thanks. Physics- Charge in electromagnetic field An antiproton (which has the same properties as a proton except that its charge is -e) is moving in the combined electric and magnetic fields of the figure: h tt p://post image. org/image/2sh3s3hs4/ What are the magnitude and direction of the antiproton's acceleration at t... or to be more specific, one half the base times the height. so one half of 16 times 31, or 8 times 31. A= (1/2)(b*h) i figured out the f^-1(x) is the ln ((-13x-6)/(17x-5)) i just need a and b plz help before 11 pm f(x) = (5e^x-6)/(17e^x+13) find f^-1(x). since the domain of f^-1(x) is in the open interval (a, b) what is a and and b? Iron(II) can be oxidized by an acidic K2Cr2O7 solution according to the net ionic equation. Cr2O72− + 6 Fe2+ + 14 H+ 2 Cr3+ + 6 Fe3+ + 7 H2O If it takes 44.0 mL of 0.0250 M K2Cr2O7 to titrate 25.0 mL of a solution containing Fe2+, what is the molar concentration of Fe2+? apples- 55 cents oranges- 75 cents The results of a medical test show that of 66 people selected at random who were given the test, 3 tested positive and 63 tested negative. Determine the odds in favor of a person selected at random testing positive on the test. Life orientation Identify 5 practices in contravention of the basic conditions of employment act? the coefficient of restitution between the ball and the floor is 0.60 If the ball is drop from rest at the height of 6.6 m from the floor. Find a) what is the maximum height will the ball attain after the first bounce. b) how much kinetic energy is lost duirng the impact if t... Physical science The acceleration due to gravity on earth world history some parts of spain remained under muslim control for how many years Physics with Calc A point charge of 1.8uC (microcoulumbs) is at the center of a Gaussian cube 55 cm on edge. What is the net electric flux through the surface? Need help balancing this equation, just the coeffecients, do not need to balance the charges. The numbers are subscripts: Ca0.41Mg0.36Mn0.50Fe1.76Al2Si3O12 + H + H20 <---> Al(OH)3 + FeOOH + MnO2 + Ca + Mg + Si(OH)4 How to find the area of a KITE when the length of only one diagonal is given, with two pairs of congruent sides but with no length described 7. Marcus can paint a garage in 3 hours. Gina can paint the same garage in 2 hours. How long will it take Marcus and Gina to paint the garage if they work together Thank you so much!!!(: How many pounds of apples costing .64 per pound must be added to 30 pounds of apples costing .49 per pound to create a mixture that would cost .58 per pround x = adult tkts y = student tkts 3x = value of adult tkts 1.5y = value of student tkts x + y = 105 3x + 1.5y = 250 x = adult tkts y = student tkts 3x = value of adult tkts 1.5y = value of student tkts x + y = 105 3x + 1.5y = 250 How do you solve the equations? ): Michelle sold tickets for the basketball game. Each adult ticket costs 3$ and each student ticket costs 1.50. There were 105 tickets sold for a total of 250$. How many of each type of ticket were 3rd grade math how do explain how to break apart 3 x 9 to help me multipy Name some symbols for science fiction? US History It would help him throughout his presidency,because that was his advisors or cabinet members. In hitting a stationary pool ball with the mass of 170 g, a billiards player gives the ball impulse of 6.0 N-s. At what speed will the pool ball move toward the hole? Thank you! Can't figure it out. Does it have something to do with finding w? w=2pi/T = square root of k/m but I don't have K or T? A 1125 kg car carrying four 80 kg people travels over a rough "washboard" dirt road with corrugations 4.0 m apart which causes the car to bounce on its spring suspension. The car bounces with maximum amplitude when its speed is 17 km/h. The car now stops, and the fou... 6/10 is how long he taught 8th grade so 10 - 6 = 4 is how long he taught something else. The new fraction would be 4/10 but if you simplify it, 2 being the common factor in both you would get 2/5 4 = 2 * 2 10 = 2 * 5 After reading your comment and picturing a see saw, now I get it. Thank you soooo much and have a happy holiday :) After reading your comment and picturing a see saw, now I get it. Thank you soooo much and have a happy holiday :) Another question in regards to CT. I know CT refers to the clockwise turn of the meter stick but how can you picture it going in that direction? With no weight on the ruler, isn't it at an Ahhh, thank you, a lot clearer :) Thank you so much. I'm confused about one thing though, what does CCT and CT stand for? is it Torque? Or is that another way for center of mass A meter stick balances horizontally on a knife-edge at the 50.0 cm mark. With two 5.00 g coins stacked over the 7.0 cm mark, the stick is found to balance at the 41.0 cm mark. What is the mass of the meter stick? The answer is 38g but I don't understand how to get it. Do an ESSAY Using Kohlberg's three levels and six stages of moral reasoning, create a character that passes through each stage and explain how she/he responds to moral dilemmas present at each stage. Physics ( check answers) warren moon throws a footbal with a velocity of 20 m/s at an angle of 30 degrees with the horizontal. air resistance may be ignored. how much time is required for the football to reach the highest point of the trajectory How to write 5,800 million metric tons in standard form and in scientific notation? How to write 5,800 million metric tons in standard form and in scientific notation? Algebra 7th Grade is it correct? Algebra 7th Grade Algebra 7th Grade what is -8(700-3)= (using distributive property) Algebra 7th Grade what is -8(700-3)= (using distributive property) Algebra 7th Grade what is -8(700-3)= (using distributive property) a dog toy is constructed in the shape of a cylinder with a length of 6.8 in. The cylinder has a hemisphere at each end. The diameter is 1-9 in. Find the volume. Never mind. I realized that you want the force of friction to equal the force of tension. Thus when you set it up, you find the normal force and subtract the current way. Ft=Ff Ft=mu x N N-Ma=Mc In Fig. 6-25, blocks A and B have weights of 49 N and 21 N, respectively. (a) Determine the minimum weight of block C to keep A from sliding, if µs between A and the table is 0.20. (b) Block C suddenly is lifted off A. What is the acceleration of block A, if µk bet... (6x-7) 5x algebra II find the area of a triangle fgh f(1,4) g(-2,-3) h(3,-2) using matricies Physics earth & Space toward what direction, north or south would you look to the sun at noon on june 21-22 if you lived at the following latitudes algebra 2 What is the factorization of the trinomial 3x2 plus 27x plus 60? Honors Algebra what is the area of a square that has 48 x Dean is 1/4 that of Emily. In six years Emily will be two times older than Dean. How old is Dean? If a car is at a stop and it has to travel 2 miles and its speed is zero to sixty in 9 seconds asuming that you give it full power how fast will it be traveling when it travels 2 miles? Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | Next>>
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=robert&page=4","timestamp":"2014-04-21T13:24:01Z","content_type":null,"content_length":"26324","record_id":"<urn:uuid:bb4b13c5-bffd-422f-a189-01277c6f8fe5>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
Lester, PA Find a Lester, PA Precalculus Tutor ...See my cancellation policy below. Let's Get Going! I'm a friendly guy with a lot of energy. 14 Subjects: including precalculus, chemistry, algebra 1, algebra 2 ...My background is in engineering and business, so I use an applied math approach to teaching. I find knowing why the math is important goes a long way towards helping students retain information. After all, math IS fun!In the past 5 years, I have taught differential equations at a local university. 13 Subjects: including precalculus, calculus, algebra 1, geometry ...I taught Intro to Discrete Mathematics at Rochester Institute of Technology. I hold a PhD in Algorithms, Combinatorics and Optimization. I have 14 years' experience as a practicing actuary. 18 Subjects: including precalculus, calculus, statistics, geometry ...I have experience tutoring math at the levels of pre-algebra through calculus, and would also be able to tutor probability, statistics, and actuarial math. I graduated with a degree in Russian Language, and spent a full year living in St. Petersburg, Russia. 14 Subjects: including precalculus, Spanish, calculus, statistics ...At college level, he has tutored students from the Universities of Princeton, Oxford, Pennsylvania State, Drexel, Temple, Phoenix, and the College of New Jersey. Dr Peter offers assistance with algebra, pre-calculus, SAT, AP calculus, college calculus 1,2 and 3, GMAT and GRE. He is a retired Vice-President of an international Aerospace company. 10 Subjects: including precalculus, calculus, algebra 1, GRE
{"url":"http://www.purplemath.com/lester_pa_precalculus_tutors.php","timestamp":"2014-04-18T05:56:00Z","content_type":null,"content_length":"23946","record_id":"<urn:uuid:fd8eeddf-d317-4663-8555-80110ae4f6fa>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
User Alon Amit bio website affinemess.com location Los Altos, CA age 44 visits member for 4 years, 6 months seen 20 hours ago stats profile views 3,540 Math Circler, ex-Googler and Dad. Now at Facebook. Mar Properties of Graphs with an eigenvalue of -1 (adjacency matrix)? 16 comment If G->B is a covering map (in the topological sense), then G inherits all the eigenvalues of B (just choose an eigenfunction that is uniform on the fibers). So, for instance, any graph which covers K_n for any n has -1 as an eigenvalue. The cycle of length 3k covers the triangle, which is another way to explain Kevin's example. 16 awarded Nice Answer 16 awarded Popular Question 14 awarded Enlightened 14 awarded Nice Answer Mar Books you would like to see translated into English. 12 comment That's +10 from me, too. 10 answered A historical question: Hurwitz, Luroth, Clebsch, and the connectedness of M_g Mar Why are the sporadic simple groups HUGE? 9 revised added 2 characters in body Mar If 2^x and 3^x are integers, must x be as well? 9 comment This answer was given by Gerry. I just added the link to Wikipedia. Mar If 2^x and 3^x are integers, must x be as well? 9 revised Added wikipedia link Mar If 2^x and 3^x are integers, must x be as well? 9 comment Very interesting! Just to make sure I understand - does this generalize the "n^x for all n" version only, or can it be applied to "2,3,5" and "2,3" as well? 9 accepted If 2^x and 3^x are integers, must x be as well? Mar If 2^x and 3^x are integers, must x be as well? 9 comment @jef: the best hint I can think of is "calculus of differences". 9 awarded Good Question 9 awarded Nice Question 9 asked If 2^x and 3^x are integers, must x be as well? Criteria for accepting an invitation to become an editor of a scientific journal Mar I think the following criterion is somewhat rational: when the publishers of a new journal solicit help for the editorial board, they understand that people will be asking themselves 8 comment exactly the kind of question you are asking. Therefore they make an effort to establish the reputation of the new journal in the email - providing references to editors who have already joined, explaining why a new journal in this field is called for, contrasting it with similar journals etc. On the other hand they don't say anything about the submission process since this is completely irrelevant. 6 answered Teaching Methods and Evaluating them Mar Diameter of m-fold cover 3 comment I must be missing something - why do you say that L(bi) < L(ai) in the second paragraph? Why is that a strict inequality? Feb revised How should I approximate real numbers by algebraic ones? 19 deleted 1 characters in body
{"url":"http://mathoverflow.net/users/25/alon-amit?tab=activity&sort=all&page=6","timestamp":"2014-04-19T12:20:18Z","content_type":null,"content_length":"46047","record_id":"<urn:uuid:e3d8b331-8b92-4889-a518-a2ad79580fdb>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
- In proceedings of ACM SIGMOD Conference on Management of Data , 2002 "... Similarity search in large time series databases has attracted much research interest recently. It is a difficult problem because of the typically high dimensionality of the data.. The most promising solutions' involve performing dimensionality reduction on the data, then indexing the reduced data w ..." Cited by 235 (28 self) Add to MetaCart Similarity search in large time series databases has attracted much research interest recently. It is a difficult problem because of the typically high dimensionality of the data.. The most promising solutions' involve performing dimensionality reduction on the data, then indexing the reduced data with a multidimensional index structure. Many dimensionality reduction techniques have been proposed, including Singular Value Decomposition (SVD), the Discrete Fourier transform (DFT), and the Discrete Wavelet Transform (DWT). In this work we introduce a new dimensionality reduction technique which we call Adaptive Piecewise Constant Approximation (APCA). While previous techniques (e.g., SVD, DFT and DWT) choose a common representation for all the items in the database that minimizes the global reconstruction error, APCA approximates each time series by a set of constant value segments' of varying lengths' such that their individual reconstruction errors' are minimal. We show how APCA can be indexed using a multidimensional index structure. We propose two distance measures in the indexed space that exploit the high fidelity of APCA for fast searching: a lower bounding Euclidean distance approximation, and a non-lower bounding, but very tight Euclidean distance approximation and show how they can support fast exact searchin& and even faster approximate searching on the same index structure. We theoretically and empirically compare APCA to all the other techniques and demonstrate its' superiority. - SIGKDD'02 , 2002 "... ... mining time series data. Literally hundreds of papers have introduced new algorithms to index, classify, cluster and segment time series. In this work we make the following claim. Much of this work has very little utility because the contribution made (speed in the case of indexing, accuracy in ..." Cited by 220 (50 self) Add to MetaCart ... mining time series data. Literally hundreds of papers have introduced new algorithms to index, classify, cluster and segment time series. In this work we make the following claim. Much of this work has very little utility because the contribution made (speed in the case of indexing, accuracy in the case of classification and clustering, model accuracy in the case of segmentation) offer an amount of "improvement" that would have been completely dwarfed by the variance that would have been observed by testing on many real world datasets, or the variance that would have been observed by changing minor (unstated) implementation details. To illustrate our point - In 7 th Hellenic Conference on Informatics, Ioannina , 1999 "... Time-series, or time-sequence, data show the value of a parameter over time. A common query with time-series data is to find all sequences which are similar to a given sequence. The most common technique for evaluating similarity between two sequences involves calculating the Euclidean distance betw ..." Cited by 13 (0 self) Add to MetaCart Time-series, or time-sequence, data show the value of a parameter over time. A common query with time-series data is to find all sequences which are similar to a given sequence. The most common technique for evaluating similarity between two sequences involves calculating the Euclidean distance between them. However, many examples can be given where two similar sequences are separated by a large Euclidean distance. In this paper, instead of calculating the Euclidean distance directly between two sequences, the sequences are transformed into a feature vector and the Euclidean distance between the feature vectors is then calculated. Results show that this approach is superior for finding similar sequences. 2. - Data Mining in Time Series Databases "... We describe a procedure for identifying major minima and maxima of a time series, and present two applications of this procedure. The first application is fast compression of a series, by selecting major extrema and discarding the other points. The compression algorithm runs in linear time and takes ..." Cited by 11 (3 self) Add to MetaCart We describe a procedure for identifying major minima and maxima of a time series, and present two applications of this procedure. The first application is fast compression of a series, by selecting major extrema and discarding the other points. The compression algorithm runs in linear time and takes constant memory. The second application is indexing of compressed series by their major extrema, and retrieval of series similar to a given pattern. The retrieval procedure searches for the series whose compressed representation is similar to the compressed pattern. It allows the user to control the trade-off between the speed and accuracy of retrieval. We show the effectiveness of the compression and retrieval for stock charts, meteorological data, and electroencephalograms. Keywords. Time series, compression, fast retrieval, similarity measures. 1 "... We formalize the notion of important extrema of a time series, that is, its major minima and maxima; analyze basic mathematical properties of important extrema; and apply these results to the problem of time-series compression. First, we define numeric importance levels of extrema in a series, and p ..." Cited by 2 (0 self) Add to MetaCart We formalize the notion of important extrema of a time series, that is, its major minima and maxima; analyze basic mathematical properties of important extrema; and apply these results to the problem of time-series compression. First, we define numeric importance levels of extrema in a series, and present algorithms for identifying major extrema and computing their importances. Then, we give a procedure for fast lossy compression of a time series at a given rate, by extracting its most important minima and maxima, and discarding the other points. "... We address the problem of efficient similarity search based on the minimum distance in large time series databases. To support minimum distance queries, most of previous work has to take the preprocessing step of vertical shifting. However, the vertical shifting has an additional overhead in buildin ..." Cited by 1 (0 self) Add to MetaCart We address the problem of efficient similarity search based on the minimum distance in large time series databases. To support minimum distance queries, most of previous work has to take the preprocessing step of vertical shifting. However, the vertical shifting has an additional overhead in building index. In this paper, we propose a novel dimensionality reduction technique for indexing time series based on the minimum distance. We call our approach the SSV-indexing (Segmented Sum of Variation Indexing). The proposed method can match time series of similar shape without vertical shifting and guarantees no false dismissals. Several experiments are performed on real data (stock price movement) to measure the performance of the SSV-indexing.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1018278","timestamp":"2014-04-17T23:27:54Z","content_type":null,"content_length":"27393","record_id":"<urn:uuid:8b05c25c-7866-4c32-b2cd-b7573b324f80>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00430-ip-10-147-4-33.ec2.internal.warc.gz"}
HPS 0410 Einstein for Everyone Back to main course page Is Special Relativity Paradoxical? John D. Norton Department of History and Philosophy of Science University of Pittsburgh Background reading: J. Schwartz and M. McGuinness, Einstein for Beginners. New York: Pantheon.. pp. 109 - 116. What's the Problem? Relativity theory tells us that a moving clock is slowed down and a moving rod is shrunk in the direction of its motion. If I am an inertial observer, I will find the effect to come about for the clocks and rods of a spaceship moving past at rapid speed. But if that spaceship is moving inertially, then, by the principle of relativity, the spaceship's observer must find the same thing for my clocks and rods. Relative to that observer, my clocks and rods move past at great speed. So that observer would find my clocks to be slowed and my rods to be shrunk in the direction of my motion. Each finds the other's clocks slowed and rods shrunk. How can both be possible? Is there an inconsistency in the theory? If I am bigger than you, then you must be smaller than me. You cannot also be bigger than me. That's the problem. The Car and the Garage That each finds the other's clocks slowed and rods shrunk is troubling. But is it a real paradox in the sense of there being a logical contradiction? If I walk away from you, simple perspective effects make it look to each of us that the other is getting smaller. I judge you to grow smaller; and you judge me to grow smaller. No one should think that this is a paradox. That perspectival effect should not worry anyone. The car in the garage problem is an attempt to show that the relativistic effects are more serious than this simple perspectival effect. There is, it tries to show, a real contradiction; and we should not tolerate contradictions in a physical theory. Here is how we might try to get a contradiction out of the relativistic effect of each observer judging the other to have shrunk. Imagine a car that fits perfectly into a garage. The garage is a small free standing shed that is just as long as the car. There is a door at the right and a door at the left of the garage. The car fits exactly--as long as it is at rest. Now image that we drive the car at 86.6% speed of light through the garage from right to left. The doors have been opened at the right and the left of the garage to allow passage of the car. There is a garage attendant, who stands at rest with respect to the garage. Can the garage attendant close both doors so that, at least for a few brief moments, the car is fully enclosed within the garage? According to the garage attendant, there is no problem achieving this. At 86.6% the speed of light, the car has shrunk to half of its length at rest. It fits in According to the car driver, however, matters are quite different.The car is at rest and the garage moves. The the garage handily. The garage attendant can close both doors and trap the car garage approaches the car at 86.6% the speed of light. So the car driver finds that it is the garage and not the inside. car that has shrunk to half its length. The garage is now half as long as the car. The car driver says that there is no way the garage attendant can shut both doors and trap the car fully inside. Now this is a serious problem. Either the car can or cannot be trapped fully within the garage, but not both. (Or so it would seem.) More formally, we have a true paradox. The term paradox has multiple meanings. It might just designate something that is unexpected in an amusing way; or something so unexpected as to be unbelievable. In its strongest form it is the appearance of a logical contradiction is system we thought free of contradiction. (We have such a contradiction when we can deduce both some proposition A and its negation not-A.) That seems to be what is happening here. We seem to be able to deduce both of: It IS the case that car is fully trapped with in the garage. It IS NOT the case that the car is fully trapped within the garage. It is usually take to be a fatal problem when a theory is shown to harbor contradictions. I say "usually" since there are exceptions. We shall see later that, when quantum theory first emerged, it harbored a highly visible logical contradiction. Dealing with it was an urgent problem. Relativity of Simultaneity... There is a solution. It depends upon our remembering that that there is more in special relativity than the slowing of clocks and the shrinking of rods. We have already seen the relativity of simultaneity which will take on greater and greater importance in our assessment of the theory. It tells us that observers in relative motion can disagree on the timing of spatially separated events. ...Solves the Problem The possibility of that disagreement is the key to the problem of the car and the garage. A judgment of the simultaneity of events is essential to any judgment of whether the car was trapped in the garage by the closing of doors. The car driver and the garage attendant disagree on whether the car is ever fully enclosed in the garage simply because they disagree on the time order of two events. The garage attendant says: There are two events: "Left door shut": I closed the left door before the car struck it. "Right door shut": I closed the right door after the car passed. And these events happened at the same time. Therefore the car was fully enclosed. The car driver says: "There are two events. "Left door shut": You closed the left door before the car struck it. "Right door shut": You closed the right door after the car passed. But these events did not happen at the same time. You closed the left door first. Then--later--you closed the right door after the front of the car had already burst through the closed left door. Therefore the car was never fully enclosed. Both agree that the two events "left door shut" and "right door shut" happened. They disagree on the time order in which they happened. But that time order is what is needed to decide whether the car was fully enclosed in the garage. In a nutshell: • The car can only be said to have been fully enclosed in the garage if both doors were shut at the same time. • There is no observer independent fact of the matter as to timing of these events. • Therefore there is no observer independent fact as to whether the car was ever fully enclosed in the garage. Relativity of Simultaneity and the Measurement of Lengths The problem of the car and the garage shows how judgments of lengths are entangled with judgments of simultaneity. This entanglement runs throughout special relativity. Indeed, one can understand all the odd kinematical effects as derived from it; for this reason, it was the first effect Einstein discussed in his 1905 paper. For example, the relativity of simultaneity lies behind relativistic length contraction. To see this, consider how we might measure the length of a moving object. Take a car moving along a freeway at fancifully high speeds, so that relativistic effects come into play. I am standing by the roadside and want to know the car's length--or at least its length relative to me. I cannot just hold up a measuring rod and proceed in the normal way: that is, check which marks on the rod align with each end of the car. For the car is zooming past. By the time I have noted the alignment of the front of the car with, say, the 0 mark on the measuring rod, the car has long since zoomed off into distance. I will have had no chance to check where the rear of car aligned. I need a more refined procedure. Here's one: as the car zooms by, I stand with a friend at the roadside, each of us holding a raised flag, ready to plant into the roadside. As the front of the car passes, I plant my flag into the roadside; as the rear of the car passes my friend, my friend plants his flag into the roadside. The car zooms away. But that doesn't matter anymore. I have the information I need in the locations of the flags. I can use my measuring rod to determine the distance between the flags. That is the length of the moving car. What is essential to this procedure is that I and my friend plant our flags at the same time. Otherwise the distance between the two marks will not properly reflect the length of the car. But there's the catch. The car driver will disagree with my judgments of which events are simultaneous. The car driver will agree, of course, that there are two events, the planting of the two flags. But the car driver will not agree that I and my friend placed the marks simultaneously. Rather the car driver will find my friend and I to be rushing toward the car and the two flag plantings to have happened at different times. As the figure shows, the car driver will judge the planting of my flag at the front to have happened first; and the planting of my friend's flag at the rear to have happened later. Here's an animated version of this process. Since my friend delayed the planting of the flag at the rear (in the car driver's judgment), the rear of the car advanced for some short time after I'd planted my flag at the front. Therefore (in the car driver's judgment) the distance we staked out with the flags is shorter than the length of the car and our determination of the length of the car is wrong. Hence we end up disagreeing about the length of the car. The important point is that neither of us (driver and roadside observer) has made an error. There is no absolute fact as to which of us is really moving. Therefore there is no absolute fact as to which of our judgements of the timing of the two events is correct. Just as in the case of the car and the garage, we each judge the other as shrunken because we judge the simultaneity of events Relativity of Simultaneity and the Measurement of the Rates of Clocks Similar considerations arise in judgments of the slowing of moving clocks. To see how the relativity of simultaneity underlies the relativistic slowing of clocks, we attend to a procedure we might use to measure the effect. To judge the rate of a clock that passes me I need to be able to compare its reading with my wristwatch now and then compare its reading again later with my wristwatch after some time has passed. If the clock is running slow, I'll notice that its rate lags behind my wristwatch. The catch in this simple procedure is that the clock is moving. I might find that both it and my wristwatch read the same time now, at the moment the clock passes. But the clock is moving rapidly. So after some time has elapsed, it has moved off into the distance. How can I find out what the moving clock reads an hour from now when it is no longer anywhere near me? Here's one procedure: I set up many clocks at rest with respect to me throughout space. Then, one hour later, as the moving clock passes one of those clocks, a friend notes what the moving clock reads and what the local resting clock reads. From my friend's report, I can figure out whether the moving clock has slowed or not. The figure shows the bare essentials of the moving clock and all the other clocks spread out through space. The moving clock agrees with the reading of the leftmost clock--my wristwatch--as it passes by. However when it passes the rightmost, it now reads much less. So I judge it to have slowed. This procedure seems quite sound. So does that mean an observer who travels with the moving clock would agree and judge the moving clock to have slowed? No! We have seen that relativity theory requires that observer to judge my array of clocks to be running more slowly! How can that be? By now you know the answer. An essential part of the procedure is that all the clocks I laid out through space must be synchronized. That means that the events of each clock reading say "12 noon" must be simultaneous events. The relativity of simultaneity tells us that observers in relative motion may disagree on whether those events are simultaneous. Therefore observers in relative motion may disagree on whether clocks separated in space are properly synchronized. And that is what happens in this case. The moving observer will judge my clocks not to be properly synchronized. As a result, the moving observer will regard my judgments of the rate of the moving clock to be defective. As before, there is no absolute fact as to whether the clocks are properly synchronized. Therefore there is no absolute fact as to whether the moving clock slows with respect to my clocks; or whether my clocks slow with respect to the moving clock. Are the Relativistic Effects Illusory Artefacts of Measurement? Once you recognize how fully the relativity of simultaneity is bound up in the relativistic length contraction and clock slowing effects, it is easy to fall into a new misunderstanding. One might think that the effects are not really part of the world at all, but that they somehow come about solely because of the way we set our clocks. An analogy: it is possible to board a transpacific flight in Sydney, Australia, on one day and, after 16 hours of travel, disembark in Los Angeles the day before! Is this time travel? Of course not. During the flight, you crossed the international date line. That the calendar reads a day earlier in Los Angeles is purely an artefact of how we set our clocks and calendars across the world. http://en.wikipedia.org/wiki/ http://aa.usno.navy.mil/faq/docs/international_date.php Historical positions of the International Date Line from "Notes on the History of the Date or File:Qantas_a380_vh-oqa_takeoff_heathrow_arp.jpg Calendar Line," in The New Zealand Journal of Science and Technology, Vol. XI, pp. 385 - 388 In the early 1910s, this issue entered the physics literature in discussion of the geometry of a rotating disk. in 1911, Vladimir Varicak offered the following diagnosis of the origin of relativistic length contraction: It "is only an illusory, subjective appearance, caused by the manner of our regulation of clocks and measurement of length" and "a psychological and not physical effect." Einstein's reply of the same year read: "The question of whether the Lorentz contraction really exists or not is misleading. ...[it is] not real in so far as it does not exist for a co-moving observer. ...[it is] real in so far as it can be demonstrated in principle by physical means by an observer that is not co-moving" Einstein's reply is terse. What I think he is getting at is this. He is accusing Varicak of conflating two distinctions: Real Observer independent versus versus unreal observer dependent That we age is real. That we travel backwards in time when flying That an object spins on it axis is observer independent; it is verified by the presence of inertial forces. That an asteroid from Sydney to Los Angeles is unreal. moves uniformly in space must be judged relative to another object. What Varicak supposes that being real goes with being observer independent; and that being unreal goes with being observer dependent. Hence, when we find an observer dependent effect, we have found an unreal effect. Einstein's response is that we should decouple the two distinctions. That an effect is observer dependent does not determine whether it is real or not. The simplest way to see how this works is to consider an asteroid rapidly approaching us. That it is moving rapidly is an observer dependent effect. For an observer on the asteroid, the asteroid is at rest. The effect is observer dependent, but the motion of the asteroid is very real for us. We should view relativistic length contraction in the same way. As the asteroid speeds past Earth--a fortunate near miss--we judge its length to be shortened. It is an observer dependent effect since it is not shared by another observer on the asteroid. However the contraction of its length is still a real effect. Separating Real from Unreal Some observer dependent effects will be real; others will be unreal. What complicates our separation of the two cases is that changes in the observer's measurement procedures will create corresponding changes in the effect. We can use this complication to our advantage. We separate the real from the unreal by checking whether changes in the observer's procedures can eradicate the effect or not. Recall the appearance of time travel that arises when one flies across the Pacific Ocean and crosses the international date line. The appearance of time travel everywhere can be eradicated merely by setting our world clocks differently. If clocks in Sydney and Los Angeles are set to read the international standard of Greenwich time, then the arrival in Los Angeles would always come later than the departure. There would be no appearance of time travel for any flight anywhere in the world. Thus the effect is unreal. Take the case of the relativistic slowing of clocks. Recall the earlier arrangement. A rapidly moving clock is judged to have slowed since it is found to show earlier times than the clocks it passes of the inertial frame of reference from which the effect is judged. We can eradicate the effect merely by introducing a non-standard synchronization of those frame clocks so that they agree with the readings of the rapidly moving clock as it passes each of them. The slowing will be eradicated for this particular clock. But what of others? That's the catch. The slowing effect will not be eliminated for all clocks. Take a clock moving rapidly in the opposite direction at the same speed, where its speed is judged by the original synchronization of the frame clocks. It will be judged to slow at twice the rate of the original unconfounded effect. This clock slowing effect is a real effect. It is manifested to an observer in an inertial frame by means of the clocks the observer has arranged through the frame. The mode of synchronization of those clocks will alter how the rate of slowing manifests. But the effect is not produced by the synchronization. For it cannot be removed completely by altering the mode of synchronization. We may eradicate it for one motion, but it comes back with greater strength for another. It is always there, even if in an obscrured or confounded form. It is helpful to compare this with the corresponding problem in ordinary Newtonian physics. There, rapidly moving clocks do not slow. However we could set up the clocks of some frame so as to give the illusion of a slowing of rapidly moving clocks. There would be a corresponding speeding up of clocks moving in the other direction. We would affirm that the whole effect is just an illusion. For by setting the clocks back to their normal synchronizations, we would find the entire effect of the speeding up and slowing down of clocks to have been eradicated. What You Need to Know • The relativity of simultaneity • How it solves the car in the garage problem. • How the relativity of simultaneity is involved in judgments of the length of moving bodies and rates of clocks. • Why this doesn't mean that the relativistic effects are illusions. Copyright John D. Norton. February 2001, September 2002; July 2006; January 2, 2007, January 10, August 21, 200, January 18, 2012; January 18, 2013.
{"url":"http://www.pitt.edu/~jdnorton/teaching/HPS_0410/chapters/Reciprocity/index.html","timestamp":"2014-04-21T00:02:58Z","content_type":null,"content_length":"32297","record_id":"<urn:uuid:5ee27944-5017-47d6-9759-f838519783db>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00231-ip-10-147-4-33.ec2.internal.warc.gz"}