content: string (lengths 86 to 994k); meta: string (lengths 288 to 619)
weak star convergence

If $F$ is a Banach space and $(f_n) \subset F^*$ is weak-star convergent to $f \in F^*$, and if further $x \in F$ is the weak limit of $(x_n)_n \subset F$, does $f_n(x_n) \longrightarrow f(x)$ hold? We know that for all $n$: $\lim_m f_n(x_m) = f_n(x)$, and for all $m$: $\lim_n f_n(x_m) = f(x_m)$, so what can I infer about $\lim_n\lim_m f_n(x_m)$? I thought that, since the limit of $f_n$ is again in $F^*$, I could just put $\lim_n(\lim_m f_n(x_m)) = \lim_n f_n(x)$?!

This site is for research-level questions, which this question is not. It would be fine at math.stackexchange.com however. – Nate Eldredge Jun 6 '13 at 12:59

Closed as off topic by Nate Eldredge, Emil Jeřábek, Andreas Blass, Willie Wong, Bill Johnson Jun 6 '13 at 16:41.

1 Answer

The answer is no. Consider the simple case of an infinite-dimensional Hilbert space with a sequence $(x_n)_n$ in the unit sphere weakly converging to zero.

So I could just use $x^n = (0,...,0,1,1,...)$ converging weakly to $0$, and $f_m(y) = \sum_{i=1}^m y_i$ converging to $f(y) = \sum_{i=1}^\infty y_i$; then $\lim_n f_n(x^n) = 1$. thx – Bohem Jun 6 '13 at 12:31

However, what kind of convergence do I need for the limits to be interchangeable? – Bohem Jun 6 '13 at 12:32

It's enough if $x_n \to x$ in norm. Proof: by the uniform boundedness principle $\|f_n\|$ is bounded. Now write $|f_n(x_n) - f(x)| \le \|f_n\| \|x_n - x\| + |f_n(x) - f(x)|$. – Nate Eldredge Jun 6 '13 at 13:00
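A worked version of the counterexample sketched in the answer, with the standard choice of an orthonormal basis (the specific space $\ell^2$ and the basis vectors $e_n$ are my illustration, not taken from the thread): in $H = \ell^2$ take $x_n = e_n$ and $f_n = \langle\,\cdot\,, e_n\rangle$. Then $x_n \to 0$ weakly and $f_n \to 0$ weak-star, so $f = 0$, while $f_n(x_m) = \delta_{nm}$. Hence
$$\lim_n \lim_m f_n(x_m) = 0 = f(x) \qquad\text{but}\qquad \lim_n f_n(x_n) = 1 \neq f(x),$$
which shows that weak convergence of $(x_n)$ together with weak-star convergence of $(f_n)$ is not enough to interchange the limits; the norm-convergence condition in the last comment is what repairs this.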
{"url":"http://mathoverflow.net/questions/132932/weak-star-convergence","timestamp":"2014-04-19T17:23:06Z","content_type":null,"content_length":"47145","record_id":"<urn:uuid:07de19df-57a8-42e4-935a-508ec8f55dcd>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00171-ip-10-147-4-33.ec2.internal.warc.gz"}
Glendale, CA SAT Math Tutor

Find a Glendale, CA SAT Math Tutor

...I base my teaching on students' learning styles. It points instruction in the right direction from the start. My students learn how to treat the material with creativity and make it work for themselves, and how to deal with time-consuming math workouts without spending a lot of time on them. 14 Subjects: including SAT math, calculus, geometry, algebra 1

...I am very familiar with linear algebra. I taught linear algebra to freshman students in college at my school, Rensselaer Polytechnic Institute, in my junior and senior years. In my sophomore year I was a TA. 21 Subjects: including SAT math, reading, English, chemistry

...My Education: I earned a B.S. in Psychology at the University of Washington (focus on perception and memory), and am pursuing my MBA at USC (youngest student in my MBA class year) and an M.S. at USC. My Tests: I received a 2400 on the SAT and scored in the 90th percentile on the GMAT. My Tutoring experience: I... 25 Subjects: including SAT math, reading, English, writing

...I taught HS chem for several years and have tutored several students over the years. Chemistry is a really fun subject. I am ready to pass on my chemistry knowledge to either you or your... 24 Subjects: including SAT math, chemistry, English, calculus

...More than 10 years of experience in teaching math and calculus to middle school-level, high school-level and college-level students. Earned a Ph.D. degree in physical chemistry and a B.Sc. degree in chemical engineering. 10 Subjects: including SAT math, chemistry, calculus, statistics
{"url":"http://www.purplemath.com/Glendale_CA_SAT_math_tutors.php","timestamp":"2014-04-16T04:25:09Z","content_type":null,"content_length":"23820","record_id":"<urn:uuid:e6c96486-8efe-4cc9-845d-1489e1c2f338>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
Homogeneous Helmholtz Equation with Variable Coefficient

June 19th 2010, 08:56 PM
How does one go about solving a two-dimensional (or higher-dimensional) homogeneous Helmholtz equation with a variable coefficient, i.e.
$\Delta u(x,y) + f(x,y)\,u(x,y) = 0$,
where in the standard Helmholtz equation $f(x,y) = k$ (constant), knowing some boundary conditions? I am at a loss as to what method to even use, having tried separation of variables, Green's functions, and the method of characteristics. Any hints? Can this equation even be solved? Thank you.

June 20th 2010, 04:30 AM
Is $f(x,y)$ arbitrary or does it have a specific form?

June 21st 2010, 06:31 PM
It is possible that $f(x,y)$ could be a number of step functions. Would that improve the situation?
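The thread never settles on an analytic method; as a practical fallback only, here is a minimal finite-difference sketch for the two-dimensional problem with a piecewise-constant coefficient. It is my own illustration, not something proposed in the thread, and the domain (unit square), grid size, example coefficient f and Dirichlet boundary data g are all assumptions.

# Minimal finite-difference sketch for  Laplacian(u) + f(x, y) * u = 0  on the unit square
# with Dirichlet data u = g on the boundary; interior unknowns are ordered row by row.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50                               # interior points per direction (assumption)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")

f = 10.0 * (X > 0.5)                 # example piecewise-constant coefficient (a step function)
g = lambda x, y: np.sin(np.pi * x)   # example Dirichlet data on the edge y = 0; zero elsewhere

# 1-D second-difference operator and the 2-D Laplacian via Kronecker sums
T = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / h**2
I = sp.identity(n)
L = sp.kron(T, I) + sp.kron(I, T)

A = L + sp.diags(f.ravel())          # discretized  Laplacian + f(x, y) * ( . )

# The right-hand side carries the boundary values; only the y = 0 edge is nonzero here.
b = np.zeros((n, n))
b[:, 0] -= g(x, 0.0) / h**2
u = spla.spsolve(A.tocsr(), b.ravel()).reshape(n, n)
print(u.shape, u.min(), u.max())

With f bounded below the first Dirichlet eigenvalue of the Laplacian (about 2*pi^2 on the unit square), the discrete operator stays nonsingular, which is why the example value 10 was chosen.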
{"url":"http://mathhelpforum.com/differential-equations/148936-homogeneous-helmholtz-equation-variable-coefficient-print.html","timestamp":"2014-04-20T20:38:38Z","content_type":null,"content_length":"5591","record_id":"<urn:uuid:bbac1c30-3bb6-4ea0-8453-1ec7fbcaa9ad>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
Differential equations book - for Engineering

Please suggest a book which recaps first-order ODEs, then focuses on higher-order ODEs, and also gives an introduction to partial differential equations. I need this so that I can have a good background for engineering topics such as control systems, communication engineering and circuit analysis.
{"url":"http://www.physicsforums.com/showthread.php?p=3331039","timestamp":"2014-04-17T00:57:15Z","content_type":null,"content_length":"19933","record_id":"<urn:uuid:0a20f202-fddf-46b5-a380-e2665e370484>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
Resonant Frequency, Bandwidth and End-Effect of a Wire-Cage Dipole Antenna
Author: R.J.Edwards G4FGQ © 14 October 2002

Each half of a cage dipole consists of a cylindrical, rectangular or triangular cage of parallel wires, the wires being connected together at their ends. The purpose of the cage is to increase the effective diameter of the antenna radiating elements. This causes an increase in capacitance and a decrease in inductance per metre of length. The characteristic impedance Zo = Sqrt(L/C) of the antenna, and of the equivalent L & C series tuned circuit, therefore decreases. The inductive reactance is reduced and so is the Q. As with other tuned circuits, Q = Inductive-Reactance/Resistance. In this case the resistance is the equivalent radiation resistance, which is assumed to be uniformly distributed along the antenna conductor. It is equal to exactly twice the feed-point resistance.

In the analysis program you can download below, the dipole is specified by its length, cage diameter, number of wires in the cage, and wire diameter. If there are only two wires in the cage then they are spaced apart by the diameter of the imaginary cylinder and can be used to investigate the frequency and bandwidth of a folded dipole. When only one wire is present it can be given any diameter to represent rods and tubes. Note: the computed feed-point resistance applies only to exactly resonant dipoles.

The transmit bandwidth is defined as the band between the two frequencies at which the SWR on the feed-line has risen to stated values, assuming the SWR at the band centre has been previously adjusted by some means to be 1:1. The receiving bandwidth is defined as the band between the two frequencies at which receiver input power has fallen to 1/2 the level at the band centre. This is also described as the 3 dB bandwidth. For an impedance-matched receiver, the 3 dB bandwidth is 2*Fc/Q, where Fc is the centre frequency and Q is the intrinsic Q of the antenna.

In practice the defined band-edges are not exactly symmetrically disposed about the resonant frequency, and the transmission line impedance is rarely such that SWR equals exactly 1:1 at the resonant frequency. Cage ends can alternatively be terminated with cones, but the program neglects such minor geometric details. The height above ground affects the antenna input resistance. The feed-line length and antenna tuner will affect the system bandwidth. Therefore high calculating accuracy is not needed. However, the program does show how insensitive bandwidth and resonant frequency are to relatively large changes in antenna diameter. At HF, the dependence of Q and bandwidth on antenna diameter is sometimes exaggerated, but for TV and other wideband transmissions, shape and diameter are important antenna design features.

Run this Program from the Web or Download and Run it from Your Computer

This program is self-contained and ready to use. It does not require installation. Click this link DipCage2 then click Open to run from the web or Save to save the program to your hard drive. If you save it to your hard drive, double-click the file name from Windows Explorer (right-click Start then left-click Explore to start Windows Explorer) and it will run.
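The DipCage2 program is a compiled executable, so as a rough illustration only, here is a short sketch of the kind of calculation involved. It uses the textbook equivalent-radius formula for N equally spaced wires on a circle together with the article's 2*Fc/Q relation; the formula choice and the example numbers are my assumptions and this is not the algorithm inside the G4FGQ program.

# Equivalent cylinder radius of an N-wire cage, plus the matched-receiver 3 dB bandwidth
# from the article's relation BW = 2*Fc/Q.  Illustrative sketch only.
import math

def cage_equivalent_radius(cage_radius_m, wire_radius_m, n_wires):
    # Textbook equivalent radius of n wires equally spaced on a circle (assumed formula):
    #   r_eq = (n * a * R**(n-1)) ** (1/n)
    return (n_wires * wire_radius_m * cage_radius_m ** (n_wires - 1)) ** (1.0 / n_wires)

def matched_bandwidth_hz(centre_freq_hz, q_factor):
    # 3 dB bandwidth of an impedance-matched antenna, per the article: 2*Fc/Q
    return 2.0 * centre_freq_hz / q_factor

# Example: 6-wire cage, 150 mm cage diameter, 1 mm wire diameter (made-up numbers)
r_eq = cage_equivalent_radius(0.075, 0.0005, 6)
print(f"equivalent radius = {r_eq * 1000:.1f} mm")
print(f"bandwidth at 7.1 MHz with Q = 12: {matched_bandwidth_hz(7.1e6, 12) / 1e3:.0f} kHz")

The equivalent radius comes out well below the cage radius for thin wires, which matches the article's point that a cage only approximates, and never exceeds, a solid cylinder of the same diameter.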
{"url":"http://www.smeter.net/antennas/wire-cage-dipole.php","timestamp":"2014-04-24T20:17:22Z","content_type":null,"content_length":"17684","record_id":"<urn:uuid:6faf3eeb-993c-4793-9254-8ecdca4206a9>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
OpenStudy question: "solve...pic in comment box" (the attached picture and discussion were not captured).
{"url":"http://openstudy.com/updates/50255a7fe4b040cb090baf9f","timestamp":"2014-04-21T10:13:57Z","content_type":null,"content_length":"50238","record_id":"<urn:uuid:1dfd0430-8a00-42e4-b21c-3cc156a27208>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00633-ip-10-147-4-33.ec2.internal.warc.gz"}
Oakton Community College - Prealgebra I. Course Prefix/Number: MAT 060 Course Name: Prealgebra Credits: 4 (4 lecture; 0 lab) II. Prerequisite Appropriate score on Mathematics Placement Test. III. Course (Catalog) Description Course is preparation for introductory algebra course. Content includes fundamental concepts, operations, and applications of arithmetic in basic algebraic contexts, including linear equations, statistics, square roots, graphing, and polynomials. Arithmetic topics treated include rational numbers, decimals, percents, and measurement. IV. Learning Objectives Module 1 Objectives: Perform the fundamental operations with whole numbers. Solve application problems with whole numbers. Module 2 Objectives: Perform the fundamental operations with integers. Solve simple linear equations using integers. Solve application problems with integers. Module 3 Objectives: Perform the fundamental operations with fractions. Solve simple linear equations using fractions. Solve application problems with fractions. Module 4 Objectives: Perform the fundamental operations with decimals. Solve simple linear equations using decimals. Solve basic and application problems using ratios, rates, and proportions. Interpret and apply simple statistical concepts such as the mean, median and mode. Calculate square roots and apply them to formulas such as the Pythagorean Theorem. Module 5 Objectives: Perform the fundamental operations with percents. Solve application problems with percentages. Calculate measurements of geometric figures. Graph and interpret points on a Cartesian coordinate system. V. Academic Integrity Students and employees at Oakton Community College are required to demonstrate academic integrity and follow Oakton's Code of Academic Conduct. This code prohibits: • cheating, • plagiarism (turning in work not written by you, or lacking proper citation), • falsification and fabrication (lying or distorting the truth), • helping others to cheat, • unauthorized changes on official documents, • pretending to be someone else or having someone else pretend to be you, • making or accepting bribes, special favors, or threats, and • any other behavior that violates academic integrity. There are serious consequences to violations of the academic integrity policy. Oakton's policies and procedures provide students a fair hearing if a complaint is made against you. If you are found to have violated the policy, the minimum penalty is failure on the assignment and, a disciplinary record will be established and kept on file in the office of the Vice President for Student Affairs for a period of 3 years. Details of the Code of Academic Conduct can be found in the Student Handbook. VI. Sequence of Topics Module 1 (no calculators) Whole Numbers (Chapter 1) 1. Place value and number names 2. Addition, subtraction, and fundamental properties 3. Multiplication, division, and fundamental properties 4. Rounding and estimating 5. Order of operations 6. Exponential notation 7. Introduction to variables, algebraic expressions, and equations 8. Applications including area and perimeter Module 2 (no calculators) Integers and Algebraic Equations (Chapters 2 and 3) 1. Integers and number lines 2. Addition and subtraction 3. Multiplication and division 4. Evaluating algebraic expressions 5. Order of operations 6. Solving algebraic equations 7. Applications using linear equations Module 3 (no calculators) Fractions (Chapter 4 skip complex fractions) 1. Understanding fractions 2. 
Equivalent fractions and simplifying fractions 3. Factors, multiples, primes, and divisibility rules 4. Multiplying and dividing 5. Adding and subtracting 6. Operations with mixed numbers 7. Comparing and ordering 8. Solving equations using fractions 9. Applications Module 4 Decimals and Ratios/Proportions (Chapter 5 & 6 skip 6.5) 1. Understanding decimals, ratios, and rates 2. Place value: reading and writing decimal numerals 3. Comparing and ordering 4. Rounding and estimating 5. Adding and subtracting 6. Multiplying and dividing 7. Conversions: fractions, mixed numerals, decimals 8. Solving equations using decimals 9. Applications including mean, median, and mode 10. Proportions and problem solving 11. Square Roots and the Pythagorean Theorem Module 5 Percents, Introduction to Graphing, and Geometry Review (Chapter 7 skip 7.3 and compound interest in 7.6, Chapter 8 skip 8.4 & 8.5, Chapter 9 ONLY 9.2 & 9.3 1. Understanding percent 2. Conversions: fractions, decimals, percent 3. Solving percent problems using equations 5. Applications 6. Tables, pictographs, bar, and line graphs 7. Ordered pairs and linear equations in two variables VII. Methods of Instruction Methods of instruction include one-on-one and/or small group discussion, and required website ancillaries. Calculators/computers will be used for modules 4 and 5 only. Course may be taught as face-to-face, media-based, hybrid or online course. VIII. Course Practices Required Mathematics 060, 070 and 110 are sequential courses utilizing a classroom instructor and an interactive computer website. Course participants must attend scheduled class hours as well as one computer lab hour per week. Students may be dropped from the course upon missing more than three class sessions or three lab hours. Each course is divided into five modules. Each module must be completed with a minimal posttest score of 85% to proceed to the next module. All course work must be completed in a notebook. Students may complete a course at any time during the semester. Upon completion of a course, the student can start the next sequential course. A new access code must be purchased at that time. If all modules of a course are not successfully completed within a semester, the student can re-enroll in the same course the following semester beginning with their first uncompleted module. Course may be taught as face-to-face, media-based, hybrid or online course. IX. Instructional Materials Current textbook information for each course and section is available on Oakton's Schedule of Classes. Textbook information for each course and section is available on Oakton's Schedule of Classes. Within the Schedule of Classes, textbooks can be found clicking on an individual course section and looking for the words "View Book Information". Textbooks can also be found at our Mathematics Textbooks page A scientific calculator, notebook, and earphones are required. X. Methods of Evaluating Student Progress Students must complete the following work with the following minimal scores: Homework, class work, and study plans (unlimited attempts) = 100% Quizzes (unlimited attempts) = 90% Module Posttest = 85% XI. Other Course Information Individual instructors will establish and announce specific policies regarding attendance, due dates and make-up work, incomplete grades, etc. If you have a documented learning, psychological, or physical disability you may be entitled to reasonable academic accommodations or services. 
To request accommodations or services, contact the Access and Disability Resource Center at the Des Plaines or Skokie campus. All students are expected to fulfill essential course requirements. The College will not waive any essential skill or requirement of a course or degree program.
{"url":"http://www.oakton.edu/academics/academic_departments/math/syllabi/mat060.php","timestamp":"2014-04-20T02:09:44Z","content_type":null,"content_length":"21875","record_id":"<urn:uuid:e7637444-70bf-43ff-9aad-be6abd25c7da>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
Population structure and eigenanalysis When analyzing genetic data, one often wishes to determine if the samples are from a population that has structure. Can the samples be regarded as randomly chosen from a homogeneous population, or does the data imply that the population is not genetically homogeneous? We show that an old method (principal components) together with modern statistics (Tracy-Widom theory) can be combined to yield a fast and effective answer to this question. The technique is simple and practical on the largest datasets, and can be applied both to genetic markers that are biallelic or to markers that are highly polymorphic such as microsatellites. The theory also allows us to estimate the data size needed to detect structure if our samples are in fact from two populations that have a given, but small level of differentiation.
{"url":"http://www.newton.ac.uk/programmes/SCB/abstract2/patterson.html","timestamp":"2014-04-20T23:46:13Z","content_type":null,"content_length":"2825","record_id":"<urn:uuid:d8bbb439-4cf4-4192-abee-25071db9a4db>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic: glossary of terms and concepts #1297 New Physics #1417 ATOM TOTALITY 5th ed
Replies: 0

glossary of terms and concepts #1297 New Physics #1417 ATOM TOTALITY 5th ed
Posted: Mar 17, 2013 4:42 AM

I do so much science, that I cannot remember well enough where I last left off, when it comes to revision. So these last pages are more for my memory sake when starting the 6th edition. Now here is a Glossary of Terms used in this book:

(1) Charge = geometry of either Euclidean, Elliptic, Hyperbolic. Charge is the geometry of Space, so that -1 electrical charge is hyperbolic geometry, such as a torus or ring or closed loop wire. +1 charge is elliptic geometry such as a sphere or ellipsoid, and 0 charge is Euclidean flat plane geometry. The photon with its double transverse wave is a Euclidean flat plane geometry.

(2) Magnetic Monopole = two poles of either Elliptic or Hyperbolic geometry. Now a magnetic monopole is far different than a charge, for a charge is a whole geometry of the three possible geometries while a magnetic monopole is a feature of those three possible geometries. Magnetic monopoles are related to spin. I need more details for this in the 6th edition.

(3) Spin = direction of motion of the magnetic monopole, whether clockwise or counterclockwise. Spin is merely the superposition of both the Right Hand Rule and the Left-hand rule. The proton, electron and neutron are all 1/2 spin because they are the superposition of the Right-hand-rule. The photon has 0 spin because it is the superposition of two Right-hand-rules where they cancel out each other to that of 0. Two electrons in a suborbital such as 3d6 for iron have a superposition of one Right hand rule along with one Left hand rule. The unpaired electrons in iron 3d6 are all aligned and parallel with one Right hand rule, and it is the magnification of the spins that yields the permanent bar magnet of iron.

(4) Rest-mass: this is where the front edge of the wavefront curls around and becomes a closed loop so that the wave is a standing wave inside that closed loop. Counting up all the ridges and troughs of the standing wave is the rest-mass. A photon has 0 rest-mass because the wavefront never curls around into a closed loop.

(5) Space: space is the toughest because it draws together all the other concepts. Space is Faraday's Lines-of-Force as magnetic monopoles of both M+ and M-. The magnetic monopoles can be either a transverse wave or a longitudinal wave. I need to really elaborate on Space in the 6th edition and feel that Space is the most difficult.

(6) Some units relationships of physics:
E = FD, energy = force x distance
F = MA, force = mass x acceleration
E = MAD
V = D/T, velocity = distance/time
A = V/T, acceleration = velocity/time
A = D/TT
E = M(DD/TT) = Mcc
P = MV, momentum = mass x velocity
P = FT
E = FD
E = Fc
F = ma
F = mc
E = mcc
Energy = 1/2 m*v^2
Force = rate of change of energy, since acceleration is rate of change of velocity.

Google's (and Bing's) searches and archives are top-heavy in hate-spew generated by search-engine-bombing. And the Google archive stopped functioning properly by about May 2012 to accommodate Google's New-Newsgroups. And recently Niuz.biz (Docendi.org) threatens to harm your computer if opening a post of mine. The solution to the sci. newsgroups is to have the sciences hosted by colleges and universities, such as Drexel University hosting sci.math, not by corporations like Google out to make money. Science belongs in education, not in money-motivated corporations. Do I hear a University doing sci.physics, sci.chem, sci.biology, sci.geology, etc.? Only Drexel's Math Forum has done an excellent, simple and fair archiving of AP posts for the past 15 years as seen here:

Archimedes Plutonium
whole entire Universe is just one big atom where dots of the electron-dot-cloud are galaxies
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2441540&messageID=8654507","timestamp":"2014-04-20T00:51:03Z","content_type":null,"content_length":"17757","record_id":"<urn:uuid:1a685b09-ad39-4510-9851-eee42a1daa1f>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
s and S James East I was a Postdoctoral Research Fellow in the School of Mathematics and Statistics at the University of Sydney during 2008-2010. I am still an honorary associate in the School. I began a position as lecturer in Pure Mathematics at the University of Western Sydney in 2011. Here is my personal page there -- it will hopefully be updated some time soon. Current postal address: Dr James East School of Computing and Mathematics University of Western Sydney Locked Bag 1797, Penrith NSW 2751 Office: Building EN, Room 1.33, Parramatta campus Email: j.east@uws.edu.au Phone: +61 2 9685 9108 Department Fax: +61 2 9685 9557 Here are the last few courses I taught at the University of Sydney: • Second Semester 2010 - PHAR1822 - Physical Pharmaceutics and Formulation (Mathematics Component) - old webpage • First Semester 2010 - PHAR2813 - Therapeutic Principles (Mathematics Component) - old webpage • Summer School 2010 - MATH2061 - Vector Calculus • First semester 2009 - MATH1902 - Linear Algebra (Advanced) • Summer School 2009 - MATH2061/2067 - Vector Calculus • Second Semester 2008 - MATH2968 - Algebra (Advanced) I am a member of the Algebra Research Group. My main interest lies in combinatorial semigroup theory, the study of semigroups (including monoids and groups) via presentations (generators and relations). The kinds of semigroups and monoids I am particularly interested in are all somehow related to: • braid groups; and • symmetric groups and generalizations such as: □ transformation semigroups; □ partition monoids/algebras and dual symmetric inverse monoids; and □ Coxeter groups and Artin monoids/groups. My PhD thesis, supervised by David Easdown and entitled ``On Monoids Related to Braid Groups and Transformation Semigroups'', may be found here. Preprint versions of all papers will be available for download soon. In print 1. With James Mitchell and Yann Péresse: Maximal subsemigroups of the semigroup of all mappings on an infinite set. Trans. Amer. Math. Soc., to appear. 2. Infinity minus infinity. Faith and Philosophy, to appear. 3. Defining relations for idempotent generators in finite partial transformation semigroups. Semigroup Forum, to appear. 4. Partition monoids and embeddings in regular *-semigroups. Period. Math. Hungar., to appear. 5. Defining relations for idempotent generators in finite full transformation semigroups. Semigroup Forum (2012), DOI 10.1007/s00233-012-9447-6. 6. With Des FitzGerald: The semigroup generated by the idempotents of a partition monoid. J. Algebra 372 (2012), 108--133. 7. Generation of infinite factorizable inverse monoids. Semigroup Forum 84 (2012), no.2, 267--283. 8. Generators and relations for partition monoids and algebras. J. Algebra 339 (2011), 1--26. 9. With Peter McNamara: On the work performed by a transformation semigroup. Australas. J. Combin. 49 (2011), 95--109. 10. On the singular part of the partition monoid. Internat. J. Algebra Comput. 21 (2011), no. 1-2, 147--178. 11. Braids and order-preserving partial permutations. J. Knot Theory Ramifications 19 (2010), no. 8, 1025--1049. 12. A presentation of the singular part of the full transformation semigroup. Semigroup Forum. 20 (2010), no. 2, 357--379. 13. Presentations for singular subsemigroups of the partial transformation semigroup. Internat. J. Algebra Comput. 20 (2010), no. 1, 1--25. 14. Embeddings in coset monoids. J. Aust. Math. Soc. 85 (2008), no. 1, 75--80. 15. On a class of factorizable inverse monoids associated with braid groups. Comm. Algebra 36 (2008), no. 
8, 3155--3190. 16. With David Easdown and Des FitzGerald: A presentation of the dual symmetric inverse monoid. Internat. J. Algebra Comput. 18 (2008), no. 2, 357--374. 17. Vines and partial transformations. Adv. Math. 216 (2007), no. 2, 787--810. 18. Braids and partial permutations. Adv. Math. 213 (2007), no. 1, 440--461. 19. The factorizable braid monoid. Proc. Edinb. Math. Soc. (2) 49 (2006), no. 3, 609--636. 20. Factorizable inverse monoids of cosets of subgroups of a group. Comm. Algebra 34 (2006), no. 7, 2659--2665. 21. A presentation of the singular part of the symmetric inverse monoid. Comm. Algebra 34 (2006), no. 5, 1671--1689. 22. Birman's conjecture is true for I[2](p). J. Knot Theory Ramifications 15 (2006), no. 2, 167--177. 23. Cellular algebras and inverse semigroups. J. Algebra 296 (2006), no. 2, 505--519. 24. With David Easdown and Des FitzGerald: Presentations of factorizable inverse monoids. Acta Sci. Math. (Szeged) 71 (2005), no. 3-4, 509--520. 25. With David Easdown and Des FitzGerald: Braids and factorizable inverse monoids. Semigroups and languages, 86--105, World Sci. Publ., River Edge, NJ, 2004. 1. A symmetrical presentation for the singular part of the symmetric inverse monoid. Preprint. 2013. 2. Can God count to infinity? Preprint. 2013. 3. Infinite dual symmetric inverse monoids. Preprint. 2013. 4. Infinite partition monoids. Preprint. 2013. 5. Singular braids and partial permutations. Preprint. 2010. 6. With Bob Gray: Idempotent generators in finite partition monoids and related semigroups. In preparation. 7. With James Mitchell and Yann Péresse: Maximal subsemigroups of the symmetric inverse monoid. In preparation. 8. With James Mitchell and Yann Péresse: Maximal subsemigroups of the dual symmetric inverse monoid. In preparation. 9. Dual reflection monoids I--III. Three papers, in preparation. 10. Cellularity of inverse semigroup algebras. Proceedings of the Special Interest Meeting on Semigroups, University of Sydney, 2006. To appear. 11. Coset monoids and embeddings. Proceedings of the Special Interest Meeting on Semigroups, University of Sydney, 2006. To appear. Conference Talks and Seminars Algebra Seminar I was the organizer of the Sydney University Algebra Seminar in 2009--2010. For information about upcoming talks, or to be added to the mailing list, please visit the website.
{"url":"http://www.maths.usyd.edu.au/u/jamese/","timestamp":"2014-04-19T04:34:15Z","content_type":null,"content_length":"11182","record_id":"<urn:uuid:86f1e40c-1228-490f-adab-950d0dd1d7f9>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
subset sum problem when the number of integers in the sum is known

Hi everyone, I'm trying to solve a variation of the subset sum problem (http://en.wikipedia.org/wiki/Subset_sum_problem) in which all the integers that I'm using are strictly positive and (most importantly) I know in advance the number of integers that make up the sum. Does anyone know if there is any solution that takes advantage of the second fact? For example, for the set {7,2,5,1} and target sum = 7, both {7} and {2,5} are solutions, but if I already know that my solution needs to use only 2 integers, then only {2,5} is a valid solution. Thanks in advance!

I suspect that you can reduce the general problem to your version as follows: given an arbitrary set T and target sum s, add a big number M to each element of T, and perhaps a few more M's as members, and make kM+s the target sum for a reasonable choice of k. Gerhard "Ask Me About System Design" Paseman, 2012.07.11 – Gerhard Paseman Jul 11 '12 at 18:29

Thanks Gerhard! That was very helpful. – Victor Jul 11 '12 at 19:10

1 Answer

Garey and Johnson, page 223: given a finite set $A$ and a positive integer $s(a)$ for each $a$ in $A$, the question of whether there is a subset $A'$ of $A$ such that $$\sum_{a\in A'}s(a)=\sum_{a\in A-A'}s(a)$$ is NP-complete. It remains NP-complete even if we require $\#(A')=\#(A)/2$.

Of course, if you know that the number of integers making up the sum is small, e.g., 2, then the number of possible sets of that size is polynomial in the size of the problem, and just looking at all the subsets of that size is a polynomial time algorithm. The result above suggests that if the number of integers making up the sum is proportional to the total number of integers, so that there are exponentially many subsets to try, then NP-completeness comes in.
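For completeness, here is a small dynamic-programming sketch for the positive-integer case with a fixed subset size k. It is the standard pseudo-polynomial knapsack-style DP, added as my own illustration rather than something proposed in the thread.

# Subset sum restricted to subsets of exactly k strictly positive integers.
# reachable[c][s] is True if some subset of size c of the values seen so far sums to s.
# Pseudo-polynomial: O(len(values) * k * target) time, O(k * target) space.
def k_subset_sum(values, k, target):
    reachable = [[False] * (target + 1) for _ in range(k + 1)]
    reachable[0][0] = True
    for v in values:
        if v > target:
            continue
        # iterate counts and sums downwards so each value is used at most once
        for c in range(k, 0, -1):
            for s in range(target, v - 1, -1):
                if reachable[c - 1][s - v]:
                    reachable[c][s] = True
    return reachable[k][target]

print(k_subset_sum([7, 2, 5, 1], 2, 7))   # True, via {2, 5}
print(k_subset_sum([7, 2, 5, 1], 3, 7))   # False

This matches the question's example: with k = 2 the target 7 is reachable only through {2, 5}, and with k = 3 it is not reachable at all.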
{"url":"http://mathoverflow.net/questions/101971/subset-sum-problem-when-the-number-of-integers-in-the-sum-is-known","timestamp":"2014-04-21T02:14:57Z","content_type":null,"content_length":"49169","record_id":"<urn:uuid:f087c98e-f07b-4746-a6e4-aef3ba900dbd>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
Chi square, linear regression, correlation. EXERCISE 40 CHI SQUARE... - (solved) | Transtutors Chi square, linear regression, correlation EXERCISE 40 CHI SQUARE (?^2) I Chi Square (?^2) is an inferential statistical test used to examine differences among groups with variables measured at the nominal level. ?^2 compares the frequencies that are observed with the frequencies that were expected. ?^2 calculations are compared with values in an ?^2 table. If the result is > or = (=) to the value in the table, significant differences exist. If values are statistically significant, the null hypothesis is rejected (Burns & Grove, 2005). These results indicate that the differences are an actual reflection of reality and not due to random sampling error. If more than two groups are being examined, ?^2 does not determine where the differences lie; it only determines that a statistically significant difference exists. A post hoc analysis will determine the location of the difference. ?^2 is one of the weaker statistical tests used, and results are usually only reported if statistically significant values are found. Source: Salsberry, P. J. (2003). Why are some children still uninsured? Journal of Pediatric Health Care, 17 (1), 32-8. In an effort to understand why children remain uninsured, Salsberry (2003) interviewed low-income parents in Ohio and compared children with and without insurance. This cross-sectional survey design included a sample of 392 low-income parents. Subjects were chosen from two groups, those with a Medicaid history (n = 305) and those without a Medicaid history (n = 120). Those without a Medicaid history were chosen randomly. Results indicated specific profiles for different levels of insurance. These levels of insurance include uninsured, Medicaid-enrolled, and privately insured. “Statistically significant differences were found across the three groups in income, working status of the adults, education, health status of the adult and child, and in the utilization of health care” (Salsberry, 2003, p. 38). “Parents of the uninsured children were less knowledgeable about the application process. … Parents of uninsured children face multiple life challenges that may interfere with the enrollment process. Health problems, work schedules, and lack of knowledge may all need to be addressed before we can decrease the number of uninsured children in our nation” (Salsberry, 2003, p. 32). Relevant Study Results In Table 1, Salsberry (2003) presents the demographic characteristics of the sample by level of insurance (uninsured, Medicaid-enrolled, and privately insured). TABLE 1 Demographics of the Sample Uninsured (N = 62) Medicaid (N = 219) Privately Insured (N = 111) Gender–% female 34 (55%) 112 (51%) 55 (50%) Race of child 19 (31%) 80 (37%) 53 (48%) African American 35 (56%) 118 (54%) 50 (45%) 8 (13%) 21 (10%) 8 (7%) Education (of adult) Less than 9 11 (5%) ^ ^ 10, 11, 12 14 (23%) 66 (30%) 15 (14%) High school grad 30 (48%) 83 (38%) 44 (40%) College or above 18 (29%) 59 (27%) 51 (46%) Marital status (%) Married/living with partner 18 (29%) 43 (20%) 49 (44%) ^ ^ Living alone 44 (71%) 176 (80%) 62 (56%) Working (%) 41 (66%) 104 (47%) 85 (77%) ^ ^ Not employed outside the home 21 (34%) 115 (53%) 26 (23%) No. in household 8 (13%) 38 (17%) 11 (10%) 17 (27%) 51 (23%) 30 (27%) More than 3 37 (60%) 130 (60%) 70 (63%) Sample group 39 (63%) 219 (100%) 51 (46%) ^ ^ 23 (23%) 60 (54%) Adult health status Mean PCS ^ ^ Mean MCS Mean household income $ 19,267 $ 16,833 ^ ^ Salsberry, P. J. (2003). 
Why are some children still uninsured? Journal of Pediatric Health Care, 17 (1), p. 34, Copyright © 2003, with permission from The National Association of Pediatric Nurse Note: Adult health status and mean household income were compared using ANOVA. Working (%) varied among different levels of insurance. For example, in the Full/part-time group, 66% of parents were uninsured, 47% received Medicaid, and 77% were privately insured, [?^2 (2,N = 392) = 27.39; p = .001]. In the Not employed outside the home group, 34% were uninsured, 53% received Medicaid, and 23% received private insurance. * Not all cells add to the noted totals, because of missing data. Percents determined on valid responses. _ p = .05. _ p = .001. * (=) = Less than or equal to the value 1. What was the sample size for this study? 2. What is the ?^2 value for Race of Child? 3. How many null hypotheses were accepted? Provide a rationale for your answer. 4. What is the ?^2 value for Marital Status (%): Married/living with partner or Living alone? Is the ?^2 value statistically significant? If significant, at what level? 5. The three groups (uninsured, Medicaid, and privately insured) are significantly different for the demographic variable Education (of adult) at p = .05. Are the groups also significantly different at p = .001? Provide a rationale for your answer. 6. Which has a greater statistically significant difference, Education (of adult) or Mean household income? Provide a rationale for your answer. 7. Mean household income is reported statistically significant at p = .001. Is it also statistically significant at p = .05? Provide a rationale for your answer. 8. State the null hypothesis for Mean household income and level of insurance (uninsured, Medicaid, and privately insured). 9. Should the null hypothesis for Question 8 be accepted or rejected? Provide a rationale for your answer. 10. In your opinion, when compared to level of insurance, what do the values reported for Working (%): Full/part-time and Not employed outside the home mean? Provide a rationale for your answer. 1. The sample size was 392 (N = 392) as indicated in the “Introduction.” 2. In Table 1, the ?^2 = 6.45 for the Race of Child. 3. Four null hypotheses were accepted since four ?^2 values were not significant, as indicated in Table 1. Nonsignificant results indicate that the null hypotheses are supported or accepted as an accurate reflection of the results of the study. 4. The ?^2 = 21.95 for Marital Status (%): Married/living with partner or Living alone. The symbol next to this ?^2 value indicates that it is statistically significant at p = .001. If alpha (a) = 0.05 for this study, then p is less than a indicates that the ?^2 value is statistically significant. 5. No, p = .001 has a greater significance than p = .05, since the smaller the p value, the more significant the findings. Thus, p = .05 is not as significant as at p = .001. 6. Mean household income has a greater statistically significant value because it is reported significant at p = .001, whereas Education (of adult) is reported significant at p = .05. The smaller the p value, the more significant the findings. 7. Yes, Mean household income is also significant at p = .05, since p = .001 has a greater significance than p = .05. What is significant at p = .001 is also significant at p = .05. 8. There is no difference in Mean household income among the three groups determined by levels of insurance (uninsured, Medicaid-enrolled, and privately insured). 9. 
The null hypothesis should be rejected. Mean household income is reported statistically significant at p = .001. Thus, the null hypothesis is rejected when statistical significance is found. 10. Answers may vary. As parents begin to work either full- or part-time and earn a paycheck, the chances that they will be able to qualify for Medicaid decrease, whereas the ability for them to afford private insurance may not always increase. Often parents will make enough money to disqualify them for Medicaid but not enough money to afford private insurance. This is one explanation for a higher uninsured percentage, a lower Medicaid percentage, and a higher private insurance percentage than the Not employed outside the home group reported in Table 1. Name:____________________________________________ Class: ____________________ Date: _________________________________________________________________________________ _ EXERCISE 40 Questions to be Graded 1. According to the “Introduction,” what categories were reported to be statistically significant? 2. In Table 1, is the No. in household reported as statistically significant among the three groups (uninsured, Medicaid, and privately insured)? Provide a rationale for your answer. 3. Should the null hypothesis for Marital Status (%) be rejected? Provide a rationale for your answer. 4. How many null hypotheses were rejected in the Salsberry (2003) study? Provide a rationale for your answer. 5. Does Marital Status or Education (of adults) have a greater statistically significant difference among the three groups (uninsured, Medicaid, or privately insured)? Provide a rationale for your 6. Was there a significant difference in Working status for the three levels of insurance (uninsured, Medicaid-enrolled, and privately insured)? Provide a rationale for your answer. 7. State the null hypothesis for level of insurance and Gender–% female. 8. Should the null hypothesis for Question 7 be accepted or rejected? Provide a rationale for your answer. 9. In your own opinion, were the outcomes of this study what you expected? 10. In your own opinion, should the results of this study be generalized to other State Children's Health Insurance Programs (SCHIPs)? Provide a rationale for your answer. (Grove 297) Grove, Susan K.. Statistics for Health Care Research: A Practical Workbook. W.B. Saunders Company, 022007. . Posted On: Nov 02 2012 08:48 PM Tags: Statistics, Hypothesis Testing, Chi square distributions, University Solution Preview: ?2 is one of the weaker statistical tests used, and results are usually only reported if statistically significant values are found. RESEARCH ARTICLE Source: Salsberry, P. J. (2003). Why are some children still uninsured? Journal of Pediatric Health Care, 17 (1), 32-8. Introduction In an effort to understand why children remain uninsured, Salsberry (2003) interviewed low-income parents in Ohio and compared children with and without insurance... Related Questions in Chi square distributions “1” Hypothesis Testing experts Online Ask Your Question Now Copy and paste your question here... Have Files to Attach? Questions Asked Questions Answered Topics covered in Statistics
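As a quick check of the kind of computation the exercise describes, here is a small sketch that recomputes the χ² statistic reported in the table note for Working status, using the cell counts from Table 1 above. The use of scipy is my choice; the counts themselves come from the table.

# Chi-square test for Working status vs. level of insurance, with cell counts from Table 1
# (rows: Full/part-time, Not employed outside the home; columns: Uninsured, Medicaid, Private).
from scipy.stats import chi2_contingency

observed = [[41, 104, 85],
            [21, 115, 26]]

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2({dof}, N = {sum(map(sum, observed))}) = {chi2:.2f}, p = {p:.4g}")
# This reproduces the value quoted in the note: chi2(2, N = 392) = 27.39, p < .001

The expected frequencies returned by the call are the ones the exercise refers to when it says χ² "compares the frequencies that are observed with the frequencies that were expected."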
{"url":"http://www.transtutors.com/questions/chi-square-linear-regression-correlation-233445.htm","timestamp":"2014-04-16T18:59:31Z","content_type":null,"content_length":"69120","record_id":"<urn:uuid:07c75a22-b6a8-43e0-b2a4-901d8c383e7e>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
ODDFPRICE function

This article describes the formula syntax and usage of the ODDFPRICE function (function: a prewritten formula that takes a value or values, performs an operation, and returns a value or values; use functions to simplify and shorten formulas on a worksheet, especially those that perform lengthy or complex calculations) in Microsoft Excel. Returns the price per $100 face value of a security having an odd (short or long) first period.

Syntax

ODDFPRICE(settlement, maturity, issue, first_coupon, rate, yld, redemption, frequency, [basis])

Important: Dates should be entered by using the DATE function, or as results of other formulas or functions. For example, use DATE(2008,5,23) for the 23rd day of May, 2008. Problems can occur if dates are entered as text.

The ODDFPRICE function syntax has the following arguments (argument: a value that provides information to an action, an event, a method, a property, a function, or a procedure):

● Settlement  Required. The security's settlement date. The security settlement date is the date after the issue date when the security is traded to the buyer.
● Maturity  Required. The security's maturity date. The maturity date is the date when the security expires.
● Issue  Required. The security's issue date.
● First_coupon  Required. The security's first coupon date.
● Rate  Required. The security's interest rate.
● Yld  Required. The security's annual yield.
● Redemption  Required. The security's redemption value per $100 face value.
● Frequency  Required. The number of coupon payments per year. For annual payments, frequency = 1; for semiannual, frequency = 2; for quarterly, frequency = 4.
● Basis  Optional. The type of day count basis to use.

Basis | Day count basis
0 or omitted | US (NASD) 30/360
1 | Actual/actual
2 | Actual/360
3 | Actual/365
4 | European 30/360

● Microsoft Excel stores dates as sequential serial numbers so they can be used in calculations. By default, January 1, 1900 is serial number 1, and January 1, 2008 is serial number 39448 because it is 39,448 days after January 1, 1900.
● The settlement date is the date a buyer purchases a coupon, such as a bond. The maturity date is the date when a coupon expires. For example, suppose a 30-year bond is issued on January 1, 2008, and is purchased by a buyer six months later. The issue date would be January 1, 2008, the settlement date would be July 1, 2008, and the maturity date would be January 1, 2038, which is 30 years after the January 1, 2008, issue date.
● Settlement, maturity, issue, first_coupon, and basis are truncated to integers.
● If settlement, maturity, issue, or first_coupon is not a valid date, ODDFPRICE returns the #VALUE! error value.
● If rate < 0 or if yld < 0, ODDFPRICE returns the #NUM! error value.
● If basis < 0 or if basis > 4, ODDFPRICE returns the #NUM! error value.
● The following date condition must be satisfied; otherwise, ODDFPRICE returns the #NUM! error value: maturity > first_coupon > settlement > issue
● ODDFPRICE is calculated as follows.

Odd short first coupon:
● A = number of days from the beginning of the coupon period to the settlement date (accrued days).
● DSC = number of days from the settlement to the next coupon date.
● DFC = number of days from the beginning of the odd first coupon to the first coupon date.
● E = number of days in the coupon period.
● N = number of coupons payable between the settlement date and the redemption date. (If this number contains a fraction, it is raised to the next whole number.)

Odd long first coupon:
● Ai = number of days from the beginning of the ith, or last, quasi-coupon period within the odd period.
● DCi = number of days from the dated date (or issue date) to the first quasi-coupon (i = 1) or number of days in the quasi-coupon (i = 2,..., i = NC).
● DSC = number of days from settlement to the next coupon date.
● E = number of days in the coupon period.
● N = number of coupons payable between the first real coupon date and the redemption date. (If this number contains a fraction, it is raised to the next whole number.)
● NC = number of quasi-coupon periods that fit in the odd period. (If this number contains a fraction, it is raised to the next whole number.)
● NLi = normal length in days of the full ith, or last, quasi-coupon period within the odd period.
● Nq = number of whole quasi-coupon periods between the settlement date and the first coupon.

The example may be easier to understand if you copy it to a blank worksheet.

1. Select the example in this article. If you are copying the example in Excel Online, copy and paste one cell at a time. Important: Do not select the row or column headers.
2. Press CTRL+C.
3. Create a blank workbook or worksheet.
4. In the worksheet, select cell A1, and press CTRL+V. If you are working in Excel Online, repeat copying and pasting for each cell in the example. Important: For the example to work properly, you must paste it into cell A1 of the worksheet.
5. To switch between viewing the results and viewing the formulas that return the results, press CTRL+` (grave accent), or on the Formulas tab, in the Formula Auditing group, click the Show Formulas button.

After you copy the example to a blank worksheet, you can adapt it to suit your needs.

Data | Description
November 11, 2008 | Settlement date
March 1, 2021 | Maturity date
October 15, 2008 | Issue date
March 1, 2009 | First coupon date
7.85% | Percent coupon
6.25% | Percent yield
100 | Redemptive value
2 | Frequency is semiannual (see above)
1 | Actual/actual basis (see above)

Formula: =ODDFPRICE(A2, A3, A4, A5, A6, A7, A8, A9, A10)
Description (Result): The price per $100 face value of a security having an odd (short or long) first period, for the bond with the above terms (113.5977)

Note: In Excel Online, to view the result in its proper format, select the cell, and then on the Home tab, in the Number group, click the arrow next to Number Format, and click General.

Applies to: Excel 2010, Excel Web App, SharePoint Online for enterprises, SharePoint Online for professionals and small businesses
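As a small illustration of the argument rules documented above (the date ordering, the rate and yield signs, and the frequency and basis values the article lists), here is a sketch of the checks a caller could run before computing such a price. It is my own illustration, not Microsoft's implementation, and it does not compute the price itself.

# Validate ODDFPRICE-style arguments per the rules described above; illustration only.
from datetime import date

def check_oddfprice_args(settlement, maturity, issue, first_coupon,
                         rate, yld, redemption, frequency, basis=0):
    if not all(isinstance(d, date) for d in (settlement, maturity, issue, first_coupon)):
        raise ValueError("#VALUE!  dates must be real dates, not text")
    if rate < 0 or yld < 0:
        raise ValueError("#NUM!  rate and yld must not be negative")
    if frequency not in (1, 2, 4):
        raise ValueError("#NUM!  frequency should be 1 (annual), 2 (semiannual) or 4 (quarterly)")
    if not 0 <= int(basis) <= 4:
        raise ValueError("#NUM!  basis must be between 0 and 4")
    if not (maturity > first_coupon > settlement > issue):
        raise ValueError("#NUM!  need maturity > first_coupon > settlement > issue")
    return True

# Arguments taken from the worksheet example above
print(check_oddfprice_args(date(2008, 11, 11), date(2021, 3, 1), date(2008, 10, 15),
                           date(2009, 3, 1), 0.0785, 0.0625, 100, 2, 1))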
{"url":"http://office.microsoft.com/en-us/starter-help/oddfprice-function-HP010342734.aspx?CTT=5&origin=HA010342655","timestamp":"2014-04-21T14:43:59Z","content_type":null,"content_length":"31959","record_id":"<urn:uuid:be448cbf-75fa-45e6-b9e8-4b5913272084>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
Causality: Some statistical aspects Results 1 - 10 of 12 , 2002 "... This paper considers causal inference and sample selection bias in non-experimental settings in which: (i) few units in the non-experimental comparison group are comparable to the treatment units; and (ii) selecting a subset of comparison units similar to the treatment units is difficult because uni ..." Cited by 228 (1 self) Add to MetaCart This paper considers causal inference and sample selection bias in non-experimental settings in which: (i) few units in the non-experimental comparison group are comparable to the treatment units; and (ii) selecting a subset of comparison units similar to the treatment units is difficult because units must be compared across a high-dimensional set of pretreatment characteristics. We discuss the use of propensity score matching methods, and implement them using data from the NSW experiment. Following Lalonde (1986), we pair the experimental treated units with non-experimental comparison units from the CPS and PSID, and compare the estimates of the treatment effect obtained using our methods to the benchmark results from the experiment. For both comparison groups, we show that the methods succeed in focusing attention on the small subset of the comparison units comparable to the treated units and, hence, in alleviating the bias due to systematic differences between the treated and comparison units. - Proc. of the Eighth Conference on Uncertainty in Artificial Intelligence , 1992 "... In a previous paper [8] we presented an algorithm for extracting causal influences from independence information, where a causal influence was defined as the existence of a directed arc in all minimal causal models consistent with the data. In this paper we address the question of deciding whether t ..." Cited by 60 (1 self) Add to MetaCart In a previous paper [8] we presented an algorithm for extracting causal influences from independence information, where a causal influence was defined as the existence of a directed arc in all minimal causal models consistent with the data. In this paper we address the question of deciding whether there exists a causal model that explains ALL the observed dependencies and independencies. Formally, given a list M of conditional independence statements, it is required to decide whether there exists a directed acyclic graph D that is perfectly consistent with M, namely, every statement in M, and no other, is reflected via d-separation in D. We present and analyze an effective algorithm that tests for the existence of such a dag, and produces one, if it exists. Key words: Causal modeling, graphoids, conditional independence. 1 1 Introduction Directed acyclic graphs (dags) have been widely used for modeling statistical data. Starting with the pioneering work of Sewal Wright , 1993 "... This paper provides conditions and procedures for deciding if patterns of independencies found in covariance and concentration matrices can be generated by a stepwise recursive process represented by some directed acyclic graph. If such an agreement is found, we know that one or several causal proce ..." Cited by 18 (4 self) Add to MetaCart This paper provides conditions and procedures for deciding if patterns of independencies found in covariance and concentration matrices can be generated by a stepwise recursive process represented by some directed acyclic graph. 
If such an agreement is found, we know that one or several causal processes could be responsible for the observed independencies, and our procedures could then be used to elucidate the graphical structure common to these processes, so as to evaluate their compatibility against substantive knowledge of the domain. If we find that the observed pattern of independencies does not agree with any stepwise recursive process, then there are a number of different possibilities. For instance, -- some weak dependencies could have been mistaken for independencies and led to the wrong omission of edges from the covariance or concentration graphs. -- some of the observed linear dependencies reflect accidental cancellations or hide actual nonlinear relations, or -- the process responsible for the data is non-recursive, involving aggregated variables, simultenous reciprocal interactions, or mixtures of several causal processes. In order to recognize accidental independencies it would be helpful to conduct several longitudinal studies under slightly varying conditions. In such studies the covariances for the same set of variables is estimated under different conditions and the variations in the conditions would typically affect the numerical values of the parameters. But, if the data were generated by a causal process represented by some directed acyclic graph, then the basic structural properties reflected in the missing edges of that graph should remain unchanged. Under such assumptions, the pattern of independencies that is "implied" by the dag (see Definitio... , 1993 "... This paper demonstrates the use of graphs as a mathematical tool for expressing independenices, and as a formal language for communicating and processing causal information in statistical analysis. We show how complex information about external interventions can be organized and represented graphica ..." Cited by 13 (10 self) Add to MetaCart This paper demonstrates the use of graphs as a mathematical tool for expressing independenices, and as a formal language for communicating and processing causal information in statistical analysis. We show how complex information about external interventions can be organized and represented graphically and, conversely, how the graphical representation can be used to facilitate quantitative predictions of the effects of interventions. We first review the Markovian account of causation and show that directed acyclic graphs (DAGs) offer an economical scheme for representing conditional independence assumptions and for deducing and displaying all the logical consequences of such assumptions. We then introduce the manipulative account of causation and show that any DAG defines a simple transformation which tells us how the probability distribution will change as a result of external interventions in the system. Using this transformation it is possible to quantify, from non-experimental data... - Journal of Management , 2006 "... The authors identify the key challenges facing strategic human resource management (SHRM) going forward and discuss several new directions in both the scholarship and practice of SHRM. They focus on a clearer articulation of the “black box ” between HR and firm performance, emphasizing the integrati ..." Cited by 5 (0 self) Add to MetaCart The authors identify the key challenges facing strategic human resource management (SHRM) going forward and discuss several new directions in both the scholarship and practice of SHRM. 
They focus on a clearer articulation of the “black box ” between HR and firm performance, emphasizing the integration of strategy implementation as the central mediating variable in this relationship. There are direct implications for the nature of fit and contingencies in SHRM. They also highlight the significance of a differentiated HR architecture not just across firms but also within firms. Keywords: strategy; human resources; black box; implementation; differentiation The field of strategic human resources management (SHRM) has enjoyed a remarkable ascendancy during the past two decades, as both an academic literature and focus of management practice. The parallel growth in both the research literature and interest among practicing managers is a notable departure from the more common experience, where managers are either unaware or simply uninterested in scholarly developments in our field. As the field of HR strategy begins to mature, we believe that it is time to take stock of where it stands as both a field of inquiry and management practice. Although drawing on nearly two decades of †We are grateful to Steve Frenkel, Dave Lepak, and seminar participants at Monash University for comments on an earlier version of this article. "... substantial help in recreating the original data set. We are also grateful to Joshua Angrist, George Cave, David Cutler, Lawrence Katz, Caroline Minter-Hoxby, and participants at the Harvard-MIT labor seminar, the Harvard econometrics and labor lunch seminars, the MIT labor lunch seminar, and a semi ..." Cited by 1 (0 self) Add to MetaCart substantial help in recreating the original data set. We are also grateful to Joshua Angrist, George Cave, David Cutler, Lawrence Katz, Caroline Minter-Hoxby, and participants at the Harvard-MIT labor seminar, the Harvard econometrics and labor lunch seminars, the MIT labor lunch seminar, and a seminar at the Manpower Development Research Corporation (MDRC) for many suggestions and comments. All remaining errors are the authors’ "... Despite warnings against inferring causality from observed correlations or statistical dependence, some articles do. Observed correlation is neither necessary nor sufficient to infer causality as defined by the term’s everyday usage. For example, a deterministic causal process creates pseudorandom n ..." Cited by 1 (1 self) Add to MetaCart Despite warnings against inferring causality from observed correlations or statistical dependence, some articles do. Observed correlation is neither necessary nor sufficient to infer causality as defined by the term’s everyday usage. For example, a deterministic causal process creates pseudorandom numbers; yet, we observe no correlation between the numbers. Child height correlates with spelling ability because age causes both. Moreover, order is problematic—we hear train whistles before observing trains, yet trains cause whistles. Scientific methods specifically prohibit inferring causal theories from specific observations (i.e., effects) because, in part, many credible causes are perfectly consistent with available observations. Moreover, actions inferred from effects have more unintended consequences than actions based on sound deductive causal theories because causal theories predict multiple effects. However, an often overlooked but key feature of these theories is that we describe the cause with more variables than the effect. 
Consequently, inductive processes might appear deductive as the number of effects increases relative to the number of potential causes. For example, in real criminal trials, jurors judge whether sufficient evidence exists to infer guilt. In contrast, determining guilt in criminal mystery novels is deductive because the number of clues (i.e., effects) is large relative to the number of potential suspects (i.e., causes). We can make inferential tasks resemble deductive tasks by increasing the number of effects (i.e., variables) relative to the number of potential causes and seeking a shared cause for all observed effects. Moreover, under some conditions, the method of seeking shared causes might approach deductive reasoning when the number of causes is strictly limited. At least, the resulting number of possible causal theories is far less than the number generated from repeated observations of a single effect (i.e., variable). , 2007 "... Views expressed in this report are not necessarily those of the Social Exclusion Task Force or any other government department. This report was funded by the Department for Communities and Local Government (DCLG) when the Social Exclusion Unit (the predecessor of the current Social Exclusion Task Fo ..." Add to MetaCart Views expressed in this report are not necessarily those of the Social Exclusion Task Force or any other government department. This report was funded by the Department for Communities and Local Government (DCLG) when the Social Exclusion Unit (the predecessor of the current Social Exclusion Task Force based at the Cabinet Office) was based at DCLG. 1 CONTENTS "... Given an arbitrary causal graph, some of whose nodes are observable and some unobservable, the problem is to determine whether the causal effect of one variable on another can be computed from the joint distribution over the observables and, if the answer is positive, to derive a formula for the ..." Add to MetaCart Given an arbitrary causal graph, some of whose nodes are observable and some unobservable, the problem is to determine whether the causal effect of one variable on another can be computed from the joint distribution over the observables and, if the answer is positive, to derive a formula for the causal effect. We introduce a calculus which, using a step by step reduction of probabilistic expressions, derives the desired formulas. 1 1 Introduction Networks employing directed acyclic graphs (DAGs) can be used to provide either 1. an economical scheme for representing conditional independence assumptions and joint distribution functions, or 2. a graphical language for representing causal influences. Although the professed motivation for investigating such models lies primarily in the second category, [Wright, 1921, Blalock, 1971, Simon, 1954, Pearl 1988], causal inferences have been treated very cautiously in the statistical literature [Lauritzen & Spiegelhalter 1988, Cox 1992,...
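The first entry in the listing above describes propensity-score matching only in prose. As a purely illustrative sketch (synthetic data and scikit-learn; this is not the authors' code and has nothing to do with their NSW/CPS analysis), matching treated units to comparison units on an estimated propensity score can look like this in Python:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                              # pretreatment covariates
treated = rng.random(500) < 1 / (1 + np.exp(-X[:, 0]))     # treatment probability depends on X
y = X[:, 0] + 2.0 * treated + rng.normal(size=500)         # outcome with a true effect of 2.0

# estimate the propensity score, then match each treated unit to the
# comparison unit with the closest score
score = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
matches = c_idx[np.argmin(np.abs(score[t_idx, None] - score[c_idx]), axis=1)]
att = np.mean(y[t_idx] - y[matches])
print(f"matched estimate of the treatment effect: {att:.2f}")   # roughly recovers 2.0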
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2835636","timestamp":"2014-04-19T19:24:59Z","content_type":null,"content_length":"38859","record_id":"<urn:uuid:cf2f29c0-e163-48d4-ad89-af01fe0faf90>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
undamental period 36 Threads found on edaboard.com: Fundamental Period have a doubt reg checking DT periodic sequence: x=cos(pi* (n)^2 / 8 ) it is periodic ofcourse, but in the solution manual its fundamental period is mentioned as 8. but i got fundamental period=N= (2*pi/ omega ) in this case, N= (2*pi / (pi/8) ) = 16. does anybody know (...) Digital Signal Processing :: 28.03.2009 01:43 :: rramya :: Replies: 10 :: Views: 10042 The greatest-common-divisor of f1 and f2 is the fundamental frequency; the least-common-multiple is the fundamental period. I think the first example is what you're looking for: Digital Signal Processing :: 08.07.2012 21:21 :: enjunear :: Replies: 1 :: Views: 1807 the scaling in the time in the angular frequency never hampers the periodicity it just alters the fundamental period of the signal that is cos t is periodic and also cos 2t is periodic both have only 1 difference that is their fundamental period differs in both the case (...) Electronic Elementary Questions :: 15.09.2012 12:46 :: jeffrey samuel :: Replies: 6 :: Views: 585 Hi all, i am new to Matlab.Could anyone help me to generate and plot the below 3 sequences using Matlab "stem" function What should i define in my script? x1=sin(0.6Пn+0.6П) x2=sin(0.68Пn) x3=3sin (1.3Пn)-4cos(0.3Пn+0.45П) thanks alot regards scdoro Digital Signal Processing :: 16.01.2006 09:46 :: scdoro :: Replies: 2 :: Views: 1952 Hi all, i am new to Matlab.Could anyone help me to generate and plot the below 3 sequences using Matlab "stem" function qn1) What should i define in my script? qn2) how do i obtain the fundamental period by just observing the graph? x1=sin(0.6Пn+0.6П) x2=sin(0.68Пn) x3=3sin(1.3Пn)-4cos(0.3Пn+0 Digital Signal Processing :: 17.01.2006 04:20 :: scdoro :: Replies: 0 :: Views: 492 Note that For a discrete-time signal to be periodic it has to satisfy x=x where N is the fundamental period and the condition on it is that it should be an integer. For a continuous-time signal to be periodic it has to satify x(t+T)=x(t) where T is the fundamental period and there is no r Electronic Elementary Questions :: 17.02.2006 02:34 :: purnapragna :: Replies: 3 :: Views: 20768 Generally, we can use two types of 'reset'. The one is syncronous reset. Another is asyncronous reset. I have a fundamental question about this reset. What is the purpose of two types' reset? When do we have to adopt sync reset to the HDL? When do we have to use async reset to the HDL? Why do the two type of resets exist? I do not ex PLD, SPLD, GAL, CPLD, FPGA Design :: 30.01.2007 08:38 :: perfectv :: Replies: 9 :: Views: 3709 Ok, I located the example problem. Frankly, I dont understand your question. Are you speaking about Figure 3.7? The figure explains the effect of different fundamental period T of the rectangular pulse. If T = 4T1, meaning, the rectangle pulse lying in the origin extents from -2T1 to 2T1 and if T = 8T1, then same lies between -4T1 to 4T1. In Digital communication :: 02.06.2007 14:56 :: cedance :: Replies: 4 :: Views: 1484 Your ECG signal has very low amplitude at the fundamental frequency, so a plain FFT would give you poor info. It would be better to apply some sort of non-linear filter to it first, such as computing fft(ECG_1 > 0.5) instead of fft(ECG_1). This example shows the first strong spectral peak at about 1.23 Hz. Zoom in to see it: clear; lo Digital Signal Processing :: 29.09.2007 06:16 :: echo47 :: Replies: 23 :: Views: 7913 Is x = cos periodic. If yes, find its period The answer given is: periodic. 
period =8 How to solve this problem? since ω=2Πf assume signal is cos wn so f=1/8 since f is expressed as the ratio of 2 inteer so it is periadic we have the condition of periodicity f=k/N where N is (...) Electronic Elementary Questions :: 16.01.2008 11:05 :: sundarmeenakshi :: Replies: 3 :: Views: 506 If you cascade self-multipliers you can get the higher order tones but I think you will be hard pressed to keep them in the phase you'd like, across a wide / high frequency range. You would want the natural delay through the multipliers to be much, much less than the fundamental period (few degrees, more like) if you want to see a pretty squ Analog IC Design and Layout :: 24.08.2010 10:13 :: dick_freebird :: Replies: 3 :: Views: 439 Digital Signal Processing :: 30.09.2010 00:40 :: gpffhdnzz :: Replies: 0 :: Views: 551 Why does autocorrelation function approach zero at the fundamental period of a signal? RF, Microwave, Antennas and Optics :: 25.03.2011 02:54 :: pushpanjali sharma :: Replies: 0 :: Views: 400 I'd use the cross() calculator function looking for the right-sense zero crossings, and do the arithmetic to turn time into degrees. Timebase being the fundamental period of whichever phase you call the master reference. Analog Circuit Design :: 18.06.2012 11:54 :: dick_freebird :: Replies: 9 :: Views: 506 Need help here what is the fundamental period for sinπt??? I got it down to T=6/(2n-1) and i am confuse from here on :( Digital Signal Processing :: 21.03.2013 11:22 :: yjlum :: Replies: 5 :: Views: 265 Well if you're just simulating, then it should be a matter of getting the raw data and dumping it into matlab, then doing an FFT. Or you could build the simulation entirely in matlab. Try to make sure that either the duration of the sampled data is an exact integer multiple of your fundamental period, otherwise you'll end up with an incorrect resul Power Electronics :: 03.04.2013 08:48 :: mtwieg :: Replies: 5 :: Views: 426 Hi, The calculator toll from spectre has a built in THD function which you will find if you selectthe RF option or in the "Special functions" menu. In order to use this function you have to perform a transient simulation for at least 20 periods of the input signal and then select the waveform on which you want to evaluate the distorsions, in inp Analog IC Design and Layout :: 14.01.2005 02:29 :: cristianb :: Replies: 4 :: Views: 1755 oh yeh you are rite...stupid i am... just dun realize the fundamental stuff.. Microcontrollers :: 20.01.2005 01:59 :: banh :: Replies: 5 :: Views: 496 Elektor magazine UK edition issue july/august 2004 has a circuit like you want. With a 10 MHz crystal and using a 74HC04 and some notch and high-pass filters, it can multiply the frequency by 9, obtaining 90MHz @ +13 dBm. Each stage of inverters and filters rejects the fundamental signal generated by a a squarewave oscillator and the high-pass Analog Circuit Design :: 21.03.2006 16:23 :: rkodaira :: Replies: 7 :: Views: 1438 The .MEASURE statement prints user-defined electrical specifications of a circuit and is used extensively in optimization. The specifications include propagation, delay, rise time, fall time, peak-to-peak voltage, minimum and maximum voltage over a specified period, and a number of other user-defined variables The .MEASURE statement has several Software Problems, Hints and Reviews :: 17.11.2007 11:13 :: mathuranathan :: Replies: 3 :: Views: 5376 It´s relatively simple: 1.) Perform a TRAN-Analysis and measure exactly the period resp. frequnecy. 
(Hint: Don´t use the PSPICE macro for this - it is not exact enough) 2.) Perform another TRAN-Analysis with FOURIER enabled (specify the fundamental carrier) 3.) Check the output file *.out. At the end of the file you will find the leve RF, Microwave, Antennas and Optics :: 09.05.2008 09:07 :: LvW :: Replies: 1 :: Views: 586 Hello everyone ,I am making a power meter based on ADE7753 IC . I started off 1 month ago ,had many problems in between.One of the major problem was SPI communication . I have used PIC16F72 as the master controller.Now am able to read all the registers and also able to calculate voltage and current. I want to know whether ADE775 Microcontrollers :: 22.12.2008 23:53 :: anishmvk :: Replies: 0 :: Views: 1390 Hi guys, i just recently succeeded with functioning stepper motor with PIC and UCN5804B, so i just did a fundamental test on stepper motor by pumping in different speed in the coding, but with 50% duty cycle. So i tried with 16ms period until 1.5 ms period, so 16 ms makes the motor slow and 1.5 ms makes it fast. i understand how it works (...) Electronic Elementary Questions :: 08.04.2009 04:55 :: thavamaran :: Replies: 2 :: Views: 917 svaidy, The method proposed by harii74 will work for sinusoidal load currents. However, for non-sinusoidal currents, the problem is more complex. By definition, power factor is equal to real power divided by apparent power. Apparent power can be obtained by measuring the True RMS current, and the true RMS voltage. Apparent power is the product Electronic Elementary Questions :: 01.09.2009 13:04 :: Kral :: Replies: 9 :: Views: 3450 innovation1, The difference can be explained from the fundamental equations. . For an inductor, v = Ldi/dt, Where V is the voltage across the inductior, L is the inductance, di/dt is the rate of change of current thru the inductor. For high frequencies, di/dt is a high value, therefore the voltage across the inductor is high. For DC, di/dt is Electronic Elementary Questions :: 02.11.2009 11:28 :: Kral :: Replies: 8 :: Views: 4614 As you could see in Illustrating the sine wave's fundamental relationship to the circle. on wikipedia... Its actually a plotting over the circle consider r=sinΘ now for all different values of Θ you will get different values of r.. now if you had to plot r versus Θ.. then one convenient way is to plot values of r (as Electronic Elementary Questions :: 27.02.2010 09:29 :: akshay_d_2006 :: Replies: 3 :: Views: 581 hi Also I have the above problem. Why there is not any newer version of "simulating switched-capacitor filters with spectreRF" document for spectreRF2008? In the above doc the netlist defines only the period and maxacfreq; while in specreRF there are so more parameters required, for example "fundamental tones" and beat frequency simultaneusly. A Analog IC Design and Layout :: 17.05.2010 10:53 :: ahmadagha23 :: Replies: 2 :: Views: 1315 This example might be nice: Radar Basics - Fourier Transformation You see how a rectangular waveform is built from the fundamental sinewave with harmonics Electronic Elementary Questions :: 10.12.2010 06:18 :: volker_muehlhaus :: Replies: 13 :: Views: 2460 Subharmonic drive is an interesting idea. but the spectral share of 5th harmonic is only 1/5 of the fundamental for a square wave (and getting absolutely smaller when reducing the duty cycle), so I fear, it won't lead to an effective design. 
Driving MOSFETS to ns switching speed requires strong gate drivers with several A output current, usually Analog Circuit Design :: 26.01.2011 12:25 :: FvM :: Replies: 35 :: Views: 1964 Specific resistance of power devices is very often given in these strange units of r=mOhm*mm2 so that people can quickly estimate device area A for the required Rdson value: Rdson = r / A A more fundamental parameter is specific resistance of the device in Ohms per one micron of gate width - rch. If you have SPICE model of the device, rch can be Analog IC Design and Layout :: 24.02.2011 01:13 :: timof :: Replies: 9 :: Views: 2529 I have a sampled sine wave of a 10kHz signal. It was sampled at 1Msps (1Mhz) so there is 100 samples per period. The total vector is 1024 samples long so its just over 10 periods total. I have measured the THD of the analog signal using the fft function of a tektronix scope and I know the correct THD to be less than 1%. However, when I try to Digital Signal Processing :: 28.02.2011 00:44 :: specialedster :: Replies: 1 :: Views: 1942 It could either be a sine wave or a square wave. If the reference frequency is very low (i.e. less than a few MHz) a square wave is better. It can have harmonics, as long as they are somewhat lower than the fundamental. An arbitrary waveform will have various phase and amplitude modulations, which could look like random time jitter. This will RF, Microwave, Antennas and Optics :: 05.10.2011 08:27 :: biff44 :: Replies: 3 :: Views: 533 You need to ask yourself what Omega means. It is not simply the angular frequency in the sin / cos term. The overall function x is periodic, as implied by the N=19, so you can come up with the fundamental angular frequency corresponding to this period, and it is 2*pi/19. Electronic Elementary Questions :: 01.08.2012 21:53 :: gadly :: Replies: 1 :: Views: 254 In mathematics, the discrete Fourier transform (DFT) converts a finite list of equally-spaced samples of a function into the list of coefficients of a finite combination of complex sinusoids, ordered by their frequencies, that has those same sample values. It can be said to convert the sampled function from its original domain (often time or positi Mathematics and Physics :: 25.04.2013 08:29 :: Debra :: Replies: 2 :: Views: 323 Hi I am designing a switching amplifier with 300MHz switching frequency and 20MHz input frequency. I was doing the pss analysis in spectre RF and found out that the software is considering 300MHz to 299.91MHz. Because of this software is giving the following error while running pss analysis. ERROR, (CMI-2207): V1: The fundamental frequency Analog IC Design and Layout :: 05.08.2013 02:48 :: viperpaki007 :: Replies: 0 :: Views: 204 Setup time is a flip-flop specification that is typically provided to you by a manufacturer, and is relative to a single clock edge into a flop. However, setup-violation occurs when one driving flop feeds a signal into a second receiving flop, and the propagation of the signal between them takes too long (due to logic and routing delays). There is Electronic Elementary Questions :: 09.08.2013 11:33 :: jrwebsterco :: Replies: 1 :: Views: 162
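A side note on the first thread in the list above (the period of x[n] = cos(pi*n^2/8)): the formula N = 2*pi/omega only applies to a pure sinusoid cos(omega*n + phi), which this squared-argument signal is not, so the value 16 obtained that way is not its fundamental period. A short numeric check in Python (my sketch, not from the forum) confirms the answer key's value of 8:

import numpy as np

x = lambda n: np.cos(np.pi * n**2 / 8)
n = np.arange(0, 64)

for N in range(1, 20):
    if np.allclose(x(n + N), x(n)):
        print("fundamental period:", N)   # prints 8
        break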
{"url":"http://search.edaboard.com/fundamental-period.html","timestamp":"2014-04-18T05:45:59Z","content_type":null,"content_length":"35667","record_id":"<urn:uuid:ba00397a-8b15-4185-88dd-7a2009c5838c>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptography help. I was doing this mission on HTS(HackThisSite.org). It dealt with the XECryption algorithm. I read somewhere, that this algorithm works by taking 3 numbers separated by period(.), adding their sum, and using that number some how related to ASCII decimal value. Anyways, instead of doing this for X number of numbers. I built a simple c++ program that would create 3 vectors, put the numbers into the 3 vectors in order. Then the program would take a 4th vector that for every entry at x, it would have the sum of vector1.at(x), vector2.at(x), etc. Using this new vector, then I looked at the ranged of values, and determine it was around a 100. So I ran the program and it would find the count of every entry in this new vector. I then looked at the count, and found the highest count. I took that value, and assumed it was e, since e is the most common letter. I found the difference between this number and the ASCII decimal value of e, and used that for the key. I took each value of the sum vector and subtracted the decimal value of e, converted each number to character, and outputted the new string. That didn't work! Any tips? Re: Cryptography help. I've recently registered at HTS and find their challenges very interesting. The so called XECryption does indeed take the sum of 3 numbers separated by a period. But unfortunately it also requires a password. This really doesn't do much besides adding the sum of these ASCII values to every value in the encrypted text. So using the password abcd would mean 97 + 98 + 99 + 100 = 394. Lets say we decided to encrypt the letter a with the password abcd. The encrypted text will be; 143 + 188 + 160 = 491 491 - 394 = 97 -> 'a' At this point I thought it would be best to brute force the whole thing. Assuming there will be a password of minimal 3 or 4 letters, the sum that gets added to every value in the encrypted text would be around 48 * 4 = 192 with minimal ASCII value of '0' (zero). So we can probably start with a password of 200. You don't have to try a real password because it will be just a decimal value in the end. So you can decrypt the whole thing in a loop and keep adding 1 to the password. Every time you decrypt the text you scan it for words that will most likely be there if it were plain text. If a match was found then you break out of the loop and you keep the plain text. I wouldn't go higher than 1000. Also, it shouldn't take too long. I'm not sure if this is the most elegant solution, but it works. Last edited by zeroflaw on Tue Feb 23, 2010 7:02 am, edited 1 time in total. Re: Cryptography help. I cracked it! Using some math skills I was able to pick a range around 100 or less values for keys. I pretty much picked arbitrary numbers, and guessed the range. Then I inputted them into my program. Ran it about 50 times before I got a good message. Then my program outputted the data to a txt file and I accomplish the mission! Re: Cryptography help. Sounds interesting. I wanted a more mathematical approach myself, but I never payed much attention during math and cryptography classes, so I have to make up for that now Care to explain your method? Last edited by zeroflaw on Wed Feb 24, 2010 6:21 am, edited 1 time in total. Re: Cryptography help. This was sort of fun to program, because in my design process, I was able to find a reason to use a recursive base function. Anyways, my first few steps was to read the the encrypted message from a file (I formatted the file to make it a little easier). 
It would equally distribute the 3 numbers into 3 vectors. Made a 4th vector that held all the sums. Then I wrote some functions that did some simple analysis such as find min, max, and mode. I then did some mental guessing and calculations to get a rough range of values. I created one function that would brute force the whole range till it found a message that was readable. For example, three numbers, let's say 100, 200, 300. The sum would be 600. The next numbers would be 150, 250, 350, which would be 750. So on, so on. I got sums in roughly the 700-800 range. Comparing those against the ASCII table, the key had to be somewhere in the 600-700 range. So I took a loop that would start at, let's say, 600 and run till 700, until there was a legit ASCII message. It hit one about halfway through. Note: those weren't the real values I was getting, just arbitrary values.

Re: Cryptography help. From what I understand that's pretty much what I tried to explain. I said you should start at a value of 200 but that's way too low if you calculate the length of the encrypted message first. I also guessed a range. Here's what I used:

String decrypted = String.Empty;
decryptedText.Text = String.Empty;
String[] strArray = encryptedText.Text.Substring(1).Split('.');
int pwd = 500;
int pwdEnd = 1000;
int j = 0;
while (pwd < pwdEnd)
{
    while (j < strArray.Length)
    {
        int x = 0;
        for (int i = 0; i < 3; i++)
        {
            try
            {
                x += Convert.ToInt32(strArray[j]);
            }
            catch (FormatException) { }
            j++;                      // consume three numbers per plaintext character
        }
        decrypted += Convert.ToChar(x - pwd);
    }
    j = 0;
    // scan the candidate plaintext for words that are likely to be in the e-mail
    if (decrypted.Contains("Smith") || decrypted.Contains("Samuel"))
    {
        decryptedText.Text = decrypted;
        break;
    }
    decrypted = String.Empty;
    pwd++;                            // try the next candidate password value
}

I tried it with a range of 500 - 1000. Used a legit ASCII string to compare; I just knew there would be a name of some sort, because it's an e-mail message. This algorithm instantly retrieves the plain text. I just don't understand your use of vectors here.

Re: Cryptography help. I used C++ instead of Java. C++ doesn't have a split method on its string class, but it has substr and find_first_of. I fed a line of encrypted text into my recursive function, which pulled each number into the 3 vectors, took the sum of the 3, and from there I was able to subtract a range of values I suspected to be the key. I could rewrite it to make it more efficient and fewer lines, if I wanted to.

Re: Cryptography help. That's C#, I sorta dislike Java.

Re: Cryptography help. zeroflaw wrote: "That's C#, I sorta dislike Java." Better choice between the two, but can't blame me for thinking that. The two use incredibly similar syntax. I'll have to pretty it up. It looks like garbage right now with comments and stuff, lol. I'll post it when I get to prettying it up.

This recursive function worked with my formatted data. In short, the data was put into a .txt file. Each line ended with a period and each began without a period. Then each line was fed into this function:

void getString(string s)
{
    int pos;        // position of the next period
    string str;     // the number to be pulled

    pos = s.find_first_of('.');     // position of the first period
    str = s.substr(0, pos);

    if (pos <= 0)   // position 0 or less means we are past the last period of the line: kill the function
        return;

    addElement(putInt(str), vnumb); // putInt converts the string to an integer; vnumb ("vector number")
                                    // determines which of the 3 vectors to put it in
    if (vnumb == 2)                 // cycle vnumb 0, 1, 2, 0, ... so the three vectors fill evenly
        vnumb = 0;
    else
        vnumb++;

    str = s.substr(pos + 1);        // drop the first number and its period
    getString(str);                 // recursive call
}

I had a lot of code/comments that weren't too useful in the end, so I only posted the one function. The rest were pretty trivial/basic. Last edited by Sw0rDz on Wed Feb 24, 2010 9:51 pm, edited 1 time in total.

Re: Cryptography help. Good reading here, fellas. I don't know about cryptography but it's been interesting reading your thought process to solve this problem.

Re: Cryptography help. Thanks for posting your code, appreciate it. I pretty much understand it now. Just need to find some good math books, I think. At least both of our solutions worked, and it's always good to see there are several ways to accomplish something. If I know one of the missions can be solved with a different approach, I always want to try it.

Re: Cryptography help. This thread has me itching to spend some time playing on HTS... my technical skills, crypto in particular, have a thick coat of rust on them. Time to break out the WD-40. Reluctant CISSP, Certified ASS
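For readers following along at home, the approach the thread converges on (sum each consecutive triple, subtract a candidate key, and scan the result for plausible plaintext) is easy to restate compactly. The Python sketch below is mine rather than anything posted in the thread; it assumes the ciphertext is a dot-separated string of integers, and the function names and the generic keyword list are purely illustrative.

def decrypt(ciphertext, key):
    nums = [int(t) for t in ciphertext.strip(".").split(".") if t]
    # three numbers per character: sum the triple, subtract the candidate key
    return "".join(chr(sum(nums[i:i + 3]) - key) for i in range(0, len(nums) - 2, 3))

def crack(ciphertext, keywords=("the", "and"), lo=200, hi=1200):
    for key in range(lo, hi):
        try:
            candidate = decrypt(ciphertext, key)
        except ValueError:        # chr() rejects out-of-range values for bad keys
            continue
        printable = all(32 <= ord(c) <= 126 or c in "\r\n\t" for c in candidate)
        if printable and any(w in candidate.lower() for w in keywords):
            return key, candidate
    return None, None

Calling crack() on the mission's ciphertext should return the key (the summed ASCII values of the password) together with the recovered plaintext.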
{"url":"https://www.ethicalhacker.net/forums/viewtopic.php?t=5098.msg25395/","timestamp":"2014-04-19T09:29:26Z","content_type":null,"content_length":"87805","record_id":"<urn:uuid:1cb5a96b-3642-4852-9e75-828b635183b2>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
Fox Lake, IL Algebra 1 Tutor Find a Fox Lake, IL Algebra 1 Tutor I have over 14 years of experience as an educator. I have taught high school and collegiate level science courses. I am very aware that each student has a different aptitude and ability in every subject and learns in different ways, as an educator my primary goal is to use this awareness to help my students achieve their full potential. 6 Subjects: including algebra 1, chemistry, biology, ACT Science ...I specialize in tutoring math for grades 6-12. I am currently a high school math teacher and I have been teaching high school for the past 11 years. I have experience teaching students in subjects from pre-algebra through pre-calculus. 7 Subjects: including algebra 1, algebra 2, precalculus, trigonometry ...Additionally, my college courses at Northeastern Illinois University have prepared me to teach all elementary math at all elementary grade levels. I passed my math methods course with an A. I graduated in May 2012 with a Bachelor's Degree in Elementary Education from Northeastern Illinois University. 9 Subjects: including algebra 1, grammar, elementary (k-6th), elementary math ...In addition to my classes, I have had experience researching at Northwestern University in Evanston, IL and Feinberg School of Medicine in Chicago, IL. My research projects regarded estrogen receptors and tuberculosis respectively. While excelling as a student, I have had my fair share of tutoring experiences. 16 Subjects: including algebra 1, chemistry, calculus, statistics ...I served as a teaching assistant for graduate and undergraduate structural engineering classes. I can offer a structured, systematic and hopefully, rewarding learning experience with the student looking for educational assistance. My prior work background and experience for 30+ years consists o... 10 Subjects: including algebra 1, geometry, GED, algebra 2 Related Fox Lake, IL Tutors Fox Lake, IL Accounting Tutors Fox Lake, IL ACT Tutors Fox Lake, IL Algebra Tutors Fox Lake, IL Algebra 2 Tutors Fox Lake, IL Calculus Tutors Fox Lake, IL Geometry Tutors Fox Lake, IL Math Tutors Fox Lake, IL Prealgebra Tutors Fox Lake, IL Precalculus Tutors Fox Lake, IL SAT Tutors Fox Lake, IL SAT Math Tutors Fox Lake, IL Science Tutors Fox Lake, IL Statistics Tutors Fox Lake, IL Trigonometry Tutors Nearby Cities With algebra 1 Tutor Antioch, IL algebra 1 Tutors Grayslake algebra 1 Tutors Hainesville, IL algebra 1 Tutors Ingleside, IL algebra 1 Tutors Island Lake algebra 1 Tutors Johnsburg, IL algebra 1 Tutors Lake Villa algebra 1 Tutors Lakemoor, IL algebra 1 Tutors Lindenhurst, IL algebra 1 Tutors Round Lake Beach, IL algebra 1 Tutors Round Lake Heights, IL algebra 1 Tutors Round Lake Park, IL algebra 1 Tutors Round Lake, IL algebra 1 Tutors Spring Grove, IL algebra 1 Tutors Stanton Point, IL algebra 1 Tutors
{"url":"http://www.purplemath.com/fox_lake_il_algebra_1_tutors.php","timestamp":"2014-04-16T04:17:44Z","content_type":null,"content_length":"24326","record_id":"<urn:uuid:f4327ac1-c6a5-48eb-990c-e35dcca59d92>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
Characterize a continental divide

Here is something I've wondered about from time to time: the continental divide in North America is commonly described as the geographic curve separating points where a drop of water would drain to the Atlantic from those where it would drain to the Pacific. My question is how to characterize such a curve mathematically given a "reasonable" height function described over a region of the plane. I am not concerned with applied topography but also not interested in exotic pathologies. I'll propose a crude model now but feel free to propose a better one.

MODEL: The domain is the unit disk. A pre-mountain with peak at $(h,k,p)$ is a function $M=M(x,y)=\frac{p}{1+s((x-h)^2+(y-k)^2)}$ where $s\gg 0$ controls how steep it is and $p\gg 0$ how high. (Note that a sum $M_1+M_2$ will have local maxima somewhat higher than $p_1$ and $p_2$ and somewhat displaced from $(h_i,k_i)$.) The surface will be $b(x,y)(M_1+M_2+\cdots+M_n)$ where the $M_i$ are a large but finite number of pre-mountains and $b(x,y)$ is a function such as $1-x^2$ or $1-x^2-\frac{y^2}{2}$ which is positive except at $(-1,0)$ and $(1,0)$ where it is 0. From each initial point the path of steepest gradient leads somewhere, usually (one might suppose) to $(1,0)$ or $(-1,0).$

Using the crude model as above, or a better one (describe it!), characterize the boundary between the basin of attraction of $(1,0)$ and that of $(-1,0)$.

Comments: Of course a ring of mountains could create a pit with a sink in the middle, but that can be ignored or the problem can be changed to "characterize the boundaries of the various basins of attraction". At a peak or saddle point the gradient is 0 but usually any direction one goes leads to the same sink. I imagine that there are (useful) applied approximate solutions starting from a grid of sample points with edges joining nearest neighbors. But I'd like some kind of minimax description, like the solution of a continuous linear programming problem.

dg.differential-geometry gt.geometric-topology

The boundary of each basin consists of several gradient ascents from saddle points to local maxima. In the normal case (finitely many non-degenerate critical points) all you need is to find all saddles and solve the gradient transport equation starting nearby (you'll have two ascents from each saddle). You'll get a planar graph that separates the plane into the basins of attraction of local minima. There isn't really much more to say here. – fedja Dec 9 '10 at 5:52
Did you read the chapter on continental divide in Brian Hayes' Group Theory in the Bedroom? – Thierry Zell Dec 9 '10 at 6:13
No, but I actually own the book so I will, in the bedroom. – Aaron Meyerowitz Dec 9 '10 at 6:21
He also discusses it on his blog, bit-player.org/2009/long-division and bit-player.org/2009/distant-shores – Gerry Myerson Dec 9 '10 at 8:14
Fedja: Your last sentence sounds like a challenge. You are assuming that the height function is smooth with finitely many non-degenerate critical points. What about a fractal mountain range? – Bruce Westbury Dec 9 '10 at 9:03

1 Answer

As Thierry and Gerry mentioned, Brian Hayes wrote an article "Dividing the Continent" in American Scientist (Volume 88, Number 6, page 481), reprinted in his book, Group Theory in the Bedroom and Other Mathematical Diversions. His focus is algorithms to compute the continental divide, and so does not shed much light on the thrust of your question.
But he does mention two interesting connections, which I will pass along in the hope that they trigger further associations.

First, there is considerable algorithmic work by those interested in watersheds. For example, he cites the work of Luc Vincent and Pierre Soille, likely this paper: "Watersheds in Digital Spaces: An Efficient Algorithm Based on Immersion Simulations," IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 13, Issue 6, June 1991. Generally the algorithms are variants on flooding the surface from minima, preventing the merging of water from different sources. In image processing, this is called the watershed transformation. E.g., see the images here. The watershed transformation is apparently available in MATLAB.

Second, the problem was studied in some form by James Clerk Maxwell, although Hayes does not give enough information (in the article—I don't have the book here) for me to locate a precise reference. Perhaps it is related to what I know (from the work of Bob Connelly) as Maxwell–Cremona lifts? I would be interested to learn if anyone knows. (See citation in comments.) Here is what Hayes says: Maxwell relates the number of topographic peaks, pits and saddles on a surface. In the case of a sphere, the formula is $p+q-s=2$, where $p$ is the number of peaks, $q$ the number of pits and $s$ the number of saddles. Maxwell also outlines a procedure for dividing the landscape into watershed regions.

Thanks. The article is good and the link above leads to it. I know that Morse Topology is relevant but not really how. – Aaron Meyerowitz Dec 10 '10 at 1:38
@Aaron: Yes: "Before Morse, Arthur Cayley and James Clerk Maxwell had developed some of the ideas of Morse theory in the context of topography." en.wikipedia.org/wiki/Morse_theory . Ah, here's the reference: Maxwell, James Clerk (1870). "On Hills and Dales." The Philosophical Magazine 40 (269), 421–427. – Joseph O'Rourke Dec 10 '10 at 1:57
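The question already gestures at grid-based approximations ("a grid of sample points with edges joining nearest neighbors"), so for what it's worth, here is a minimal numeric sketch of that idea in Python. The peak parameters are made up for illustration, the discrete steepest-descent labeling is only a pixelated stand-in for the true divide, and interior pits are simply classified by whichever side their own sink falls on, which the question says is acceptable to ignore.

import numpy as np

# Height function from the question's crude model: bump times a sum of pre-mountains.
# The (h, k, p, s) values below are arbitrary illustrative choices.
peaks = [(0.2, 0.3, 1.0, 40.0), (-0.4, -0.1, 1.5, 60.0), (0.1, -0.5, 0.8, 50.0)]

def height(x, y):
    b = 1.0 - x**2 - y**2 / 2.0
    m = sum(p / (1.0 + s * ((x - h)**2 + (y - k)**2)) for h, k, p, s in peaks)
    return b * m

n = 201
xs = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(xs, xs)                 # X[i, j] = xs[j], Y[i, j] = xs[i]
Z = np.where(X**2 + Y**2 <= 1.0, height(X, Y), np.inf)

def downhill(i, j):
    """Lowest of the eight neighbours, or the cell itself at a local minimum."""
    best = (i, j)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            ii, jj = i + di, j + dj
            if 0 <= ii < n and 0 <= jj < n and Z[ii, jj] < Z[best]:
                best = (ii, jj)
    return best

sink_of = {}
def sink(cell):
    """Follow steepest descent until reaching a cell that is its own downhill neighbour."""
    path = []
    while cell not in sink_of:
        nxt = downhill(*cell)
        if nxt == cell:
            sink_of[cell] = cell
            break
        path.append(cell)
        cell = nxt
    for c in path:
        sink_of[c] = sink_of[cell]
    return sink_of[cell]

drains_west = np.zeros((n, n), dtype=bool)   # True where the drop ends near (-1, 0)
for i in range(n):
    for j in range(n):
        if np.isfinite(Z[i, j]):
            _, sj = sink((i, j))
            drains_west[i, j] = xs[sj] < 0.0
# Cells where drains_west changes value trace a pixelated approximation of the divide.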
{"url":"http://mathoverflow.net/questions/48716/characterize-a-continental-divide","timestamp":"2014-04-17T18:53:15Z","content_type":null,"content_length":"63008","record_id":"<urn:uuid:774850cf-c00e-4460-b543-1d2862414b4e>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
Rational homotopy automorphisms of E_2-operads and the Grothendieck-Teichmüller group

by Benoit Fresse

Result announcement. The singular chain complex of the little 2-cubes operad defines an operad in simplicial cocommutative coalgebras. This operad has a dual structure, formed by a cooperad in cosimplicial commutative algebras, which defines a model of the prounipotent completion of the topological operad of little 2-cubes. The group of homotopy automorphisms of an object in a model category is formed by the homotopy classes of self homotopy equivalences of a cofibrant-fibrant replacement of this object. We prove by using the model of cooperads in cosimplicial commutative algebras that the group of homotopy automorphisms of the rational prounipotent completion of the little 2-cubes operad is the Grothendieck-Teichmüller group over Q. The notes available on this page give a detailed account of our result. The proof will be integrated in a monograph in preparation on homotopy automorphisms of operads.
{"url":"http://math.univ-lille1.fr/~fresse/E2RationalAutomorphisms.html","timestamp":"2014-04-21T02:00:27Z","content_type":null,"content_length":"1834","record_id":"<urn:uuid:9e3d1587-8686-4f6b-9a67-bce70071838b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
The Prisoner's Dilemma By Freeman Dyson The Evolution of Cooperation is the title of a book by Robert Axelrod. It was published by Basic Books in 1984, and became an instant classic. It set the style in which modern scientists think about biological evolution, reducing the complicated and messy drama of the real world to a simple mathematical model that can be run on a computer. The model that Axelrod chose to describe evolution is called “The Prisoner’s Dilemma.” It is a game for two players, Alice and Bob. They are supposed to be interrogated separately by the police after they have committed a crime together. Each independently has the choice, either to remain silent or to say the other did it. The dilemma consists in the fact that each individually does better by testifying against the other, but they would collectively do better if they could both remain silent. When the game is played repeatedly by the same two players, it is called Iterated Prisoner’s Dilemma. In the iterated game, each player does better in the short run by talking, but does better in the long run by remaining silent. The switch from short-term selfishness to long-term altruism is supposed to be a model for the evolution of cooperation in social animals such as ants and humans. Mathematics is always full of surprises. The Prisoner’s Dilemma appears to be an absurdly simple game, but Axelrod collected an amazing variety of strategies for playing it. He organized a tournament in which each of the strategies plays the iterated game against each of the others. The results of the tournament show that this game has a deep and subtle mathematical structure. There is no optimum strategy. No matter what Bob does, Alice can do better if she has a “Theory of Mind,” reconstructing Bob’s mental processes from her observation of his behavior. William Press, Professor at the University of Texas at Austin, is the author of Numerical Recipes, the cookbook for people who do serious scientific computing. He is the Julia Child of numerical cuisine. He recently invented a new class of Prisoner’s Dilemma strategies and tried them out numerically to see how they performed. He found that they behaved weirdly. They had a bad effect on his computer program, causing it to crash. He sent me an email asking whether I could understand what was going on. He is a modern calculator who works with numerical programs, while I am an ancient calculator who works with equations. So I wrote down the equations and did the math the old-fashioned way. I found a simple equation that told us when the behavior would be weird. I started a new career as a self-proclaimed expert in the theory of games. Press and I published a paper, “Iterated Prisoner’s Dilemma Contains Strategies that Dominate Any Evolutionary Opponent,” in the Proceedings of the National Academy of Sciences, May 22, 2012. This created quite a stir in the world of theoretical biology. As usual when you discover something new, the response comes in three waves. First, this is nonsense. Second, this is trivial. Third, this is important, and we did it before you did. The most interesting of Press’s new strategies are the ones that he calls extortion strategies. As usual in the mathematical discussion of games, he uses a numerical payoff scheme to represent the value to Alice and Bob of talking to the police or remaining silent. If Alice uses an extortion strategy, she can arrange things so that, no matter what Bob does and no matter how much payoff he gets, she will get three times as much. 
The only way for Bob to get even is to accept zero payoff, in which case Alice also gets zero. If Bob acts so as to maximize his own payoff, Alice’s payoff is automatically maximized three times more generously. In a commentary published on the Edge website, William Poundstone, author of a book on the Prisoner’s Dilemma, summarized our work as follows: “Robert Axelrod’s 1980 tournaments of iterated prisoner’s dilemma strategies have been condensed into the slogan, Don’t be too clever, don’t be unfair. Press and Dyson have shown that cleverness and unfairness triumph after all.” I am interested in a bigger question, the relative importance of individual selection and group selection in the evolution of cooperation. Individual selection is caused by the death of individuals who make bad choices. Group selection is caused by the extinction of tribes or species that make bad choices. The fashionable dogma among biologists says that individual selection is the driving force of evolution and group selection is negligible. Richard Dawkins is especially vehement in his denial of group selection. The Prisoner’s Dilemma is a model of evolution by individual selection only. That is why believers in the fashionable dogma take the model seriously. I do not believe the fashionable dogma. Here is my argument to show that group selection is important. Imagine Alice and Bob to be two dodoes on the island of Mauritius before the arrival of human predators. Alice has superior individual fitness and has produced many grandchildren. Bob is individually unfit and unfertile. Then the predators arrive with their guns and massacre the progeny indiscriminately. The fitness of Alice and Bob is reduced to zero because their species made a bad choice long ago, putting on weight and forgetting how to fly. I do not take the Prisoner’s Dilemma seriously as a model of evolution of cooperation, because I consider it likely that groups lacking cooperation are like dodoes, losing the battle for survival collectively rather than individually. Another reason why I believe in group selection is that I have vivid memories of childhood in England. For a child in England, there are two special days in the year, Christmas and Guy Fawkes. Christmas is the festival of love and forgiveness. Guy Fawkes is the festival of hate and punishment. Guy Fawkes was the notorious traitor who tried to blow up the King and Parliament with gunpowder in 1605. He was gruesomely tortured before he was burnt. Children celebrate his demise with big bonfires and fireworks. They look forward to Guy Fawkes more than to Christmas. Christmas is boring but Guy Fawkes is fun. Humans are born with genes that reward us with intense pleasure when we punish traitors. Punishing traitors is the group’s way of enforcing cooperation. We evolved cooperation by evolving a congenital delight in punishing sinners. The Prisoner’s Dilemma did not have much to do with it. Freeman Dyson, Professor Emeritus in the School of Natural Sciences, first came to the Institute as a Member in 1948 and was appointed a Professor in 1953. His work on quantum electrodynamics marked an epoch in physics. The techniques he used form the foundation for most modern theoretical work in elementary particle physics and the quantum many-body problem. He has made highly original and important contributions to an astonishing range of topics, from number theory to adaptive optics.
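As a footnote to the game described above, here is a minimal simulation sketch in Python. The payoff numbers (5, 3, 1, 0) are the conventional textbook values rather than anything specified in the article, the two strategies are only the simplest possible illustrations of short-run versus long-run play, and none of this is the Press-Dyson extortion construction itself.

# Iterated Prisoner's Dilemma with conventional payoffs (illustrative, not from the article):
# mutual silence ("cooperate") -> 3 each; mutual talking ("defect") -> 1 each;
# a lone defector gets 5 while the silent partner gets 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=200):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

always_defect = lambda mine, theirs: "D"
tit_for_tat = lambda mine, theirs: "C" if not theirs else theirs[-1]

print(play(always_defect, tit_for_tat))   # (204, 199): defection wins the first round only
print(play(tit_for_tat, tit_for_tat))     # (600, 600): sustained cooperation pays far more

Defection beats cooperation in any single round, but over two hundred rounds the defector collects 204 points against Tit-for-Tat while a pair of cooperators collect 600 each, which is exactly the switch from short-term selfishness to long-term altruism that the model is meant to capture.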
{"url":"https://www.ias.edu/print/about/publications/ias-letter/articles/2012-fall/dyson-dilemma","timestamp":"2014-04-16T15:59:35Z","content_type":null,"content_length":"13699","record_id":"<urn:uuid:a047b4b6-de0b-41c8-866a-d222ad2cb0e4>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
Complex numbers
July 3rd 2007, 07:04 AM #1

I am trying to solve this set of problems, any ideas: Let $1, \omega, \omega^2, \ldots, \omega^{n-1}$ be the nth roots of unity.
(a) Show the conjugate of any nth root of unity is another root of unity, by expressing $\bar{\omega}^j$ in the form $\omega^k$ for appropriate k.
(b) Find the product of the nth roots of unity.
(c) Find the sum $\sum_{k=0}^{n-1} \omega^k$ of the nth roots of unity (this is a geometric series).

July 3rd 2007, 08:46 AM #2

Let $\zeta$ be the generator of the $n$ roots of unity. That means $\{1,\zeta,\zeta^2,...,\zeta^{n-1}\}$ contains the roots of unity.

(a) Show the conjugate of any nth root of unity is another root of unity, by expressing $\bar{\omega}^j$ in the form $\omega^k$ for appropriate k.
Choose $j=n-k$. $\cos \left( \frac{2\pi (n-k)}{n} \right) + i \sin \left( \frac{2\pi (n-k)}{n} \right) = \cos \left( \frac{2\pi k}{n}\right) - i \sin \left( \frac{2\pi k}{n} \right)$

(b) Find the product of the nth roots of unity
We have (since $\zeta \neq 1$) $1+\zeta + \zeta^2 + \zeta^3 + ... + \zeta^{n-1} = \frac{1-\zeta^n}{1-\zeta} = 0$ because $\zeta^n - 1=0$.

(c) Find the sum of the nth roots of unity (this is a geometric series).
How is that different from (b)?

July 3rd 2007, 08:58 AM #3

Hello, PacManisAlive! Here's some help . . .

Let $1,\:\omega,\:\omega^2,\:\omega^3,\:\cdots,\:\omega^{n-1}$ be the $n^{th}$ roots of unity.

(a) Show the conjugate of any $n^{th}$ root of unity is another root of unity.

Let a root be: $a + bi \;=\;\cos\theta + i\sin\theta$
Then: $(a + bi)^n \;=\;(\cos\theta + i\sin\theta)^n \;=\;\cos(n\theta) + i\sin(n\theta) \;=\;1$
Hence: $\begin{array}{ccc}\cos(n\theta) & = & 1 \\ \sin(n\theta) & = & 0 \end{array}$

The conjugate is: $a - bi \;=\;\cos(\text{-}\theta) + i\sin(\text{-}\theta) \;=\;\cos\theta - i\sin\theta$
$\text{Then: }\;(a - bi)^n \;=\;(\cos\theta - i\sin\theta)^n \;=\;\underbrace{\cos(n\theta)}_{\text{this is 1}} - i\underbrace{\sin(n\theta)}_{\text{this is 0}} \;=\;1$

Therefore, the conjugate is also an $n^{th}$ root of unity.
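For anyone who wants to sanity-check the three parts numerically, here is a small Python snippet (mine, not from the thread); the choice n = 7 is arbitrary.

import cmath

n = 7
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

# (a) the conjugate of omega^j equals omega^(n-j), another nth root of unity
j = 3
print(abs(roots[j].conjugate() - roots[n - j]) < 1e-12)   # True

# (b) the product of all nth roots of unity
prod = 1
for r in roots:
    prod *= r
print(prod)   # approximately 1 for n = 7

# (c) the sum of all nth roots of unity (geometric series)
print(abs(sum(roots)) < 1e-12)   # True

Since the roots are exactly the zeros of z^n - 1, their product is (-1)^(n+1), that is +1 for odd n and -1 for even n, and their sum is 0 for every n > 1.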
{"url":"http://mathhelpforum.com/advanced-algebra/16478-complex-numbers.html","timestamp":"2014-04-16T10:13:19Z","content_type":null,"content_length":"42349","record_id":"<urn:uuid:5371dd4b-2e12-4613-aebb-3d9cd0a8e4b2>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
January 2 Hey, another kind of "over-under"!! Just some food for thought to put the number one billion and history in perspective... A billion seconds ago it was about 1976. A billion minutes ago Jesus was alive. A billion hours ago our ancestors were living in the Stone Age. A billion days ago no creature walked the earth on two feet. And a billion dollars lasts 8 hours and 20 minutes at the rate our Government spends it. There are many references for this on the web and I'm sure you've seen it before. You can check the accuracy (perhaps the last one is a little off!). I still like sharing this with students as it not only puts the concept of a billion in perspective but it does offer a wonderful application of 1-significant digit estimates, scientific notation and orders of magnitude. Can you imagine asking students to estimate that a billion seconds is roughly 32 years without a calculator!! Where are these kinds of estimates currently in our math curriculum? More likely occurring in a science class? Do they belong somewhere in our classes or are they just amusing curiosities? You can guess where my thoughts lie! The deadline for registering for the First MathNotations Math Contest on Tue Feb 3rd is drawing to a close but you still have an opportunity to register! Look here for details and email me if you want a team of your students to participate! As I've reported for some time (look here), here in NJ the Commissioner of Education has been promoting higher standards and more ambitious graduation requirements, choosing Algebra 2 as the cornerstone. I've had misgivings about this as a requirement for all students for several reasons although I'm a strong supporter of the American Diploma Project's Algebra 2 benchmarks and the End of Course Test for all students who choose to take the course because of their educational goals. A recent article (1-27-09) on the website pressofAtlanticCity.com gives an excellent account of the debate raging over this topic at the State Legislative level. I will reprint a good portion of the article and then reprint the comment I posted on the site. I strongly encourage my readers to read the entire article and all of the comments posted thus far. It is a microcosm of much of the current debate in math education. Several of the commenters provided a commonsense view of these issues and gave me food for thought. Education Commissioner Lucille Davy and a panel of education and business professionals appeared Monday before the Assembly Edu-cation Committee to discuss the Department of Education's High School Reform project. The requirement that all students take algebra II has been controversial, and on Monday it dominated a discussion that attempted to identify just what students need to know to succeed and compete in the 21st century. Davy insisted the algebra II requirement would not be so rigorous that it would lead to high rates of failure or students dropping out. She said it would be a continuation of algebra I, but schools could offer more rigorous honors courses to those who would need them. But Rutgers math professor Joseph Rosenstein, of the New Jersey Math and Science Coalition, wondered if the proposed courses might then get so watered down they would no longer really be algebra "Most of our students don't need algebra II," said Rosenstein, who supports requiring more practical applied math courses. 
Rosenstein said if courses were tailored just to meet state requirements, students who should take a true algebra II course might not get the higher level of work they need. The algebra II issue has also frustrated vocational high school officials, who worry that too many requirements will make it impossible for students to complete programs in high school. "These are students who benefit from applied learning," said Thomas Bistocchi, superintendent of the Union County Vocational School, adding that their goal is to have students graduate as industry-credentialed professionals. "We just want students who want to become plumbers have the time to do it," he said. Davy said there will be flexibility in how the coursework is offered, so that it could be integrated into vocational coursework, but opponents wonder if that could be done and still teach what would be tested. Stan Karp, of the Education Law Center, said reform is needed, but the state needs better education, not just more requirements. He said teachers and students will need better preparation to meet the new requirements. "Less than half of the high schools now require those courses," he said. "What is it going to take to get there?" Asked about the cost of reforms, Davy said the state already spends the most of any state and should not need more money, just a better reallocation of existing funds. Business representatives said they just need students who can perform modern jobs. Dennis Bone, president of Verizon, said students need the foundation of skills to be able to adapt to new and changing technology. "We are being revolutionized by technology," he said. "Billboards now are electronic, run by someone sitting at a computer, not climbing a ladder." "So what does algebra II have to do with that?" Education Committee Chairman Joseph Cryan, D-Union, asked. Dana Egresky, of the New Jersey Chamber of Commerce, said that if taking algebra II can help a carpenter solve more problems on the job, then that is the carpenter who would get the job. Assemblyman Joseph Malone, R-Ocean, Monmouth, Burlington, suggested asking professionals ranging from carpenters to doctors how they actually use algebra skills. "We need to do a better job at finding out what people actually need to know, not what we think they should know," he said. Here was my comment: I thoroughly agree with Prof. Rosenstein that not all students will need the skills/concepts of a more advanced algebra class. While I admire Commissioner Davy's desire to significantly raise the bar for NJ students there are some underlying issues that must be addressed first. How many of you believe that the majority of NJ students have demonstrated proficiency in the foundational arithmetic and prealgebra skills needed to be successful in a legitimate Algebra 1 course, never mind Algebra 2? As a retired math supervisor, believe me, that question was rhetorical! However, we must clearly distinguish between the issue of a graduation requirement for all and the need for consistent, clearly stated and rigorous standards for a 2nd year Algebra course. Despite opinions to the contrary, I believe the latter is necessary for most college-intending students. The American Diploma Project (NJ is a member of this consortium) has developed precisely those kinds of world-class standards and the result is the new End of Course Test in Algebra 2. 
This test, which many NJ students have already taken, requires a deeper conceptual understanding of topics such as mathematical modeling which separates the Algebra 2 of the 21st century from the Algebra 2 course many of us remember. And, yes, there are still some mechanical skills which students need to master away from the calculator! I strongly advocate that NJ adopt these higher standards for those students who will go on to take more advanced math courses. Clearly, it isn't for everyone and therefore we should reexamine it as a grad requirement for all. Dave Marain I felt it was important to make a clear distinction between Algebra 2 as a high school graduation requirement and the need for a high-quality curriculum which should be uniform for all students who need to take the course. Many commenter ranted about the evils of testing, the "who really needs algebra anyway" argument, allowing politicians to make educational decisions they know little about (imagine acknowledging that it should be left to math education professionals!) , the skills needed for the 21st century, etc. Fascinating stuff... Is this same discussion happening in your district or state? Your thoughts are important to me. Do you take strong exception to my comments? Do you agree with the NJ Commissioner of Education or has she gone too far? What do you say to the many adults who argue that, in their occupation, they haven't ever used any of the 'stuff 'they learned in Algebra 2? There's still time to register for MathNotation's First Math contest for Grades 7-12 to be held on Tue Feb 3rd. I've decided to extend the registration to Thu Jan 29th. We've had interest expressed from high schools, middle schools, homeschooling teams, even a chapter of an honorary math fraternity! I'd like to see 2-3 more teams compete but I understand that many students and teachers are overextended at this time of year and this was on short notice. Look here for how to register. So what's the paradox in the title? To someone with a firm grasp of probability there won't be one, but the following series of questions may lead to a surprise for some students. Overview of Problem We have two scenarios in this investigation: A set of five 4-choice multiple-choice questions and a set of five 5-choice multiple-choice questions. Of course the latter is typical of most standardized tests like SATs so this discussion may have relevance to many juniors right now! Instructional Suggestion For the following questions, ask students to first make educated guesses before attempting any calculations. The idea is to get them to trust their intuition which often is more accurate than their mathematical procedures! We know that the probability of correctly guessing, at random, the answer to a 4-choice question is 1/4 which is greater than the chance (1/5) of correctly guessing, at random, the answer to a 5-choice question. That was easy, right? When we ask questions about more than one question the situation becomes more complicated and a deeper understanding of probability concepts is needed: Multiplication of probabilities of independent events, binomial probabilities, etc... The Investigation (a) Which of the following is more likely? Randomly guessing all 5 wrong on a 5-choice multiple choice quiz or randomly guessing all 5 wrong on a 4-choice multiple choice quiz? 
By intuition (no calculation, respond in 10 sec or less): _________________
Explanation of Intuitive Guess (this may be worthy of class discussion):
Now compute each probability and compare the result to your intuitive answer.

(b) Which is more likely? Randomly guessing at least one right out of five on a 5-choice multiple-choice quiz or on a 4-choice multiple-choice quiz?
By intuition: ______________
Explanation of intuitive guess:
By calculating:

(c) How's your intuition doing so far? Let's try this one: Which is more likely: Randomly guessing exactly one right out of 5 on a 5-choice quiz or on a 4-choice quiz?
By intuition:
By calculating:
Any surprises? In case your results don't agree with mine, I will tell you roughly what I got (actual probabilities below). The probability of guessing exactly one right out of five on a 5-choice quiz is slightly more than the probability of guessing exactly one right on a 4-choice quiz! A paradox? An anomaly of the arithmetic involved? Logical? Can you explain it? Try!

(d) Back to normalcy? Compute the probabilities of getting exactly two right out of five on a 5-choice quiz and on a 4-choice quiz. Has the order of the universe been restored!

Selected Answers (not the norm for this blog):
(b) Approx 67.2% on a 5-choice quiz; 76.3% on a 4-choice
(c) Approx 40.96% on a 5-choice quiz; 39.55% on a 4-choice
(d) 20.48% on a 5-choice quiz; approx 26.37% on a 4-choice
Pls check these results for accuracy!!

What are the fundamental concepts in this investigation? What are the learning benefits of this series of questions? Please understand that my intent on this blog is to suggest instructional methods, never to impose. You may find far more effective ways to convey the essential concepts here but, from my experience, there's only one sure way to perfect our craft. Keep experimenting and asking!

With MathNotations' First Math Contest less than two weeks away (look here for details), I wanted to provide another sample contest question (multi-part). By the way, we now have several middle schools, high schools and even homeschool teams registered from all over the country! It takes only a few minutes to register and there's still time!

For President Obama, the number four has special significance. The most obvious is that he's the 44th president. You can ask your students to think of several other connections between our new president and the number four. But for now, we will focus on 44...

(a) Since 44 = 2^5 + 2^3 + 2^2, 44 equals 101100 in base 2 (binary representation). Let S be the set of all base-10 numbers (positive integers) whose binary representation consists of six digits, exactly three of which are 1's. Find the sum of these base-10 numbers to reveal part of the mystery behind the title of this post! Note that the leftmost binary digit must be "1".
Comment: This is a fairly straightforward 'counting' problem accessible to middle schoolers as well as older students. One could simply make a list of the numbers and add them. However, there's a more systematic way to count the 'combinations' and a "different" way of adding here that may help you solve the next problem. Can you find it?

(b) Consider the set of all base-10 numbers (positive integers) whose binary representation consists of ten digits, exactly three of which are 1's. Show that the sum of these base-10 numbers can be written 44(2^9) - 4 - 4. "Fours are wild!"
Note: This seems like a tedious generalization of Part I, but, again, if you find the right way to count and add it won't take long!
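Before moving on, here is a quick computational check of the Selected Answers from the guessing-probability investigation above. This is just an illustrative Python sketch (the helper name p_exactly is my own convenience, not anything from the original post); each value is the usual binomial probability C(n,k) p^k (1-p)^(n-k).

    from math import comb

    def p_exactly(k, n, p):
        # Probability of exactly k correct answers out of n independent
        # random guesses, each correct with probability p.
        return comb(n, k) * p**k * (1 - p)**(n - k)

    for label, p in [("5-choice", 1/5), ("4-choice", 1/4)]:
        all_wrong = p_exactly(0, 5, p)
        print(label,
              round(1 - all_wrong, 4),       # (b) at least one right
              round(p_exactly(1, 5, p), 4),  # (c) exactly one right
              round(p_exactly(2, 5, p), 4))  # (d) exactly two right

    # Output (rounded): 5-choice 0.6723 0.4096 0.2048
    #                   4-choice 0.7627 0.3955 0.2637
    # which matches the percentages listed in the Selected Answers.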
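The "44" problem in parts (a) and (b) just above can also be checked by brute force. The sketch below (again, an illustration that is not part of the original post) enumerates every positive integer whose binary representation has the stated number of digits and exactly three 1's, then sums them; the six-digit case gives 444 ("Fours are wild!") and the ten-digit case agrees with 44(2^9) - 4 - 4.

    def special_sum(num_digits):
        # Sum of all positive integers whose binary representation has
        # exactly num_digits digits (so the leading digit is 1) and
        # exactly three 1's in total.
        return sum(n for n in range(2**(num_digits - 1), 2**num_digits)
                   if bin(n).count("1") == 3)

    print(special_sum(6))                      # 444
    print(special_sum(10), 44 * 2**9 - 4 - 4)  # 22520 22520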
BTW, if you're wondering how I came to find all these 4's, well, it might have been serendipity. After all, serendipity has 11 letters and 11 is a factor of 44 and... (Twilight Zone music playing in the background...). Also, if you're wondering what my outside sources for these kinds of problems are, do you really think there's anyone else out there whose mind could be this warped!

Don't miss registering for MathNotations' First Math Contest. Registration is as simple as emailing me (dmarain "at" "gmail dot com") to request a form and the Rules. The contest is team-based (up to 6 students), is designed for both middle and high school students and should take 45 minutes or less (extra time is provided for students to enter their answers/solutions on the official answer form in Word). Look here for further info.

I would also like to thank the following blogs and/or webmasters for their graciousness in spreading the word about our first math contest:
Let's Play Math!
Wild About Math
Note: Take a look at jd2718 to see the latest Carnival of Mathematics. Another excellent job by Jonathan!
Homeschool Math Blog

While we're waiting for the Inauguration on 1-20-09 (12,009 = 3 x 4003 of course), today is Dr. King's birthday, 1-19-09, and 11,909 is prime as it should be! How appropriate it is that we should be honoring today the man who paved the way for our new President... The title of this post reminds me of an old Johnny Carson routine: Which one doesn't belong with the others! In fact, we can probably make connections among all of these if you're willing to play with words...

In case you thought that the Math Contest would lead to a hiatus in publishing investigations and instructional strategy articles, fear not! Today we will once again examine the raison d'etre of this blog.

Part I
Consider the equation [displayed as an image in the original post]. To reinforce multiple representations (Rule of Four) we can ask students to explain or show why this equation has no real solutions
(a) Graphically
(b) Numerically (TABLE)
(c) Algebraically
At this point I am including some screenshots from the TI-84. The bold graph is Y1:

Part II - The Extension!
Consider the equation [also displayed as an image in the original post].
(a) For what value(s) of k will the above equation have one real solution? In this case, also determine an expression for that solution in terms of k. Show method clearly.
(b) For what value(s) of k will the above equation have no real solutions? Show method clearly.
(c) Demonstrate your results in (a) and (b) by choosing specific values of k for each case. Use both a graph and a TABLE to support your argument. [Use of the graphing calculator makes sense here.]

Which do you think is more helpful to students -- the graph or the TABLE? From my experience I find that both are important for comprehension and concept. They not only complement each other but each contributes something by itself. The graph not only suggests (not proves!) that the two graphs in Part I do not intersect but it leads to natural questions like: Why is the graph of y = √x + 2 above the graph of Y1? What do the graphs suggest about the domain of each function? Explain the ERR messages!

Note: I used the word "suggest" because we want our students to understand that graphs do not prove mathematical truth. When is it appropriate to use this approach? After you've taught the algebraic procedures of solving radical equations? Of course, part (c) of the activity asks for the algebraic explanation, but I've often used the graphical and numerical approach BEFORE teaching the procedure.
I believe that it developed meaning for the traditional procedure but, in no way, did it replace the need for carefully explained instruction with a variety of examples! (The "balanced" approach!). Further, the common reaction I've heard to this kind of instruction is that it is too time-consuming and appropriate only for the honors students. I couldn't disagree more. Developing meaning does take time and is absolutely worth it. It's all part of the "less is more" philosophy and, that, if the foundation is properly put into place, students can develop both the skills of solving radical equations and an understanding of the underlying mathematics. Enough preaching to the choir... I hope you find this useful when building your next exploration in mathematics! Let me know... Don't forget to email me if you want your students to participate in the first MathNotations online math contest on Tue Feb 3rd. There is still time! Look here for info. There may not be a probability question on the first contest but the following gives you a flavor of the type of multi-part question I'm talking about -- an investigation in more depth. You will find many variations of the following problem in texts. From experience we know that the student needs to have numerous experiences with these. How do many students do on this topic when the exam question is slightly different from the ones reviewed in class! Five cards are numbered 1 through 5 (different number on each card). Typical scenario, right? George chooses cards randomly one at a time. After he selects a card, he marks a dot on the card, then puts it back (replacement!) in the pile of 5 cards, reshuffles them and draws the next card and so on. The game continues until he selects one of the "marked" cards. Before a technical analysis of this experiment (sample space, random variable, specific probabilities, expected value), I would typically ask students a broad intuitive question or ask them to suggest questions one might ask about this "game". Intuitively, I might ask: In the long run, how many draws would you "expect" it to take for the game to end? With five cards, what do you think most students would guess? Draw three? Draw four? I think asking this initial question is crucial. In most cases, we want the mathematical result to be reasonable and to roughly agree with our intuition (not always of course, there are paradoxes in math which are counterintuitive!). Part I What is the probability that George chooses a "marked" card on his second draw for the first time? On the 3rd draw for the first time? 4th draw? 5th draw? 6th draw? Another way to ask these are: What is the probability that the game "ends" after 2 draws? 3 draws, etc. Part II "On average", how many cards would George need to draw to get one of the marked cards for the first time? Note: In more technical language we are asking for the expected number of draws before the game ends? Normally, I don't publish answers to these questions but, in this case I will give partial results. Please check for accuracy. The probability the game ends after 3 draws is 8/25 or 32%. The expected value for the number of draws for the game to end is approximately 3.51. What does this mean! Important Updates: • Several schools have requested registration up to this point so the contest will probably run on Tue Feb 3rd as planned. • All you need to do to sign up initially is to email me! I will email you the Reg. Form and Rules/Procedures within 24 hours. 
Complete the form (about 5 minutes) and email it back and you're officially registered! (dmarain "at geemaill dot com") • A team of students should be able to complete most of the problems in 45 minutes or less. It is not necessary to keep students for the full 90 minutes! The extra time was provided for students to enter their answers/solutions electronically. • Scanned student solutions will be accepted if format is followed. • Return registration form ASAP even if you have not yet identified the 6 participants. The team can have fewer than 6 members (but at least 2). • The contest questions are copyrighted, therefore I will probably not publish all of them on this blog although I will provide some samples of questions and student responses for discussion purposes on this blog. • After the contest is over, participating schools will receive results, answers, suggested solutions and certificates via email. At that time, if anyone else is interested in receiving a copy of the questions, email me. • If you like the idea of this kind of contest and would be interested in signing up for the next one (probably in March), let me know via email or comments. • I will send a template for Certificates of Participation for your school and individual participants. Top-scoring schools will receive a Certificate of Merit. After getting several helpful comments and suggestions, I have now made an "official" decision (always subject to last minute changes of course!) regarding the date and details of our first contest. I chose this date to accommodate schools' exam weeks. The date is also a week before AMC-10 and -12. I will run this event if I get at least 6 schools participating. Pls spread the word to your friends in other schools. I understand there is not much time to consider this but the registration process and administration of the test should not be too burdensome. DATE OF CONTEST: TUE FEB 3rd 2009 • INTERESTED SPONSORS SHOULD EMAIL IMMEDIATELY (see address below) TO RECEIVE REGISTRATION FORM AND RULES/PROCEDURES! • DEADLINE FOR REGISTRATION: TUE JAN 27th • CONTEST WILL BE EMAILED TO SPONSORS BY JAN 30TH • SUITABLE GRADE LEVELS: 7-12 (Some questions can be handled by Middle School students) • 90 MIN TIME LIMIT - FLEXIBLE RANGE OF TIMES FOR ADMINISTRATION! TEAMS WILL BE ABLE TO PERFORM WELL EVEN IF ONLY 45 MIN ARE AVAILABLE! • CONTENT: Up to and including precalculus; emphasis on Algebra II • CALCULATORS ALLOWED • FEE: NONE! What makes this contest different? • Team event - Up to 6 participants may work together! • All answers/solutions must be submitted electronically • Some multistep and open-ended questions • FREE! (At least this first one is!) • Separate acknowledgments on MathNotations given to Middle and High School teams • All students will receive a Certificate of Participation and top-scoring schools and students will receive a Certificate of Merit via email. All interested sponsors should email me at dmarain "at" "geemail dot com" ASAP for the official registration form and detailed rules and procedures. After receiving your registration form, the contest problems and official answer form will be emailed before or on Jan 30th. Don't hesitate to contact me if you have any questions (the rules/procedures form should answer most questions). Many thoughts are running through my mind right now... Projects I'd like to move forward, some changes in this blog, perhaps a different website altogether... Working on the MathAnagram for the first quarter of 2009... 
I've already selected the mathematician. Writing an anagram that has embedded clues is labor-intensive...

My reactions to a Commentary in this week's Education Week authored by the esteemed Steven Leinwand who is calling for a fascinating new concept heretofore unheard of -- a national K-12 Math Curriculum. I wish I had thought of that!

Thoughts about an online math competition for high schools... Yes, that's right, I've already written the rough draft of the first six questions. I need to get the word out to high schools who might want to pilot this a few weeks from now. No cost for this first contest, but I would like to have at least a dozen schools express some interest in this before I formally announce it. If any of you reading this might be interested or know the math supervisor in your district, please spread the word. High schools can field one team of up to 6 students and will have a short window of time (from the moment I publish the questions) to submit their answers/solutions electronically. The contest will differ from others in that some questions may be multi-part and some parts will require explanation. Not just short answer! In other words, while there will be traditional contest problems, there will also be questions that reflect the investigations on this blog. Calculators will be permitted and a faculty sponsor would be needed to proctor the contest. Clearly there are major logistic problems (registering teams, different time zones, international participation, etc.). I need to work out many issues here. Questions will not at this time go past precalculus. I might need to have my head examined for considering this since I will have to read and evaluate every one of the responses! I may also need a separate website for all of this but I need to get a sense of interest out there before I jump in deeply. If this blog elicits few responses, I will probably have to disseminate this in some other way. I'd really appreciate suggestions/reactions to this both in the comments section and via email. As always you can email me at dmarain at geeeeemaillll dot com...

Have embarked on a collaboration with a university math professor who is developing problem-solving experiences for his students. Some of these problems will be based on investigations published on this blog.

So many wonderful math blogs out there not only from our regulars but new ones entering the math blogosphere every day. Exciting stuff...

When a calculator displays zero as a result, should students assume that is exact or only accurate to the precision the machine can store and/or display? The next time students ask you why we use the conjugate method to rationalize denominators, here's an example of why we sometimes use the method in "reverse". This happens more frequently in calculus but the following is an apparently trivial numerical computation your students can try on their graphing calculators. The results of this computation depend heavily on the specific technology used (e.g., expect different results between the TI-89 and the TI-84), but hopefully they will get the idea. This numerical issue came up as I was solving an applied problem which required finding the difference between two very large, nearly equal numbers (the difference between the distance from the center of the earth to a point slightly above its surface and the radius of the earth). This numerical issue has come up before on this blog. Look here if you want to see another application.
Here's the computation: Let R = 2.0916 x 10^7. We need to compute the following expression (denoted by **) [the expression appeared as an image in the original post; a Python illustration of the same cancellation appears below, following the New Year's roundup].

For the Student
(a) Do the calculation directly on your calculator. You will want to store this value of R as a variable for later use: 2.0916x10^7 STO> ALPHA R. Does your calculator display zero? If so, explain this "error." Note: This display depends on the calculator being used. I experimented with the -84 and -83. Let me know how the display appears on other machines. Of course, one would expect a very different outcome if using Mathematica!
(b) Rewrite the above expression ** by multiplying the numerator and denominator by the conjugate of the expression. (Hint: Put the original expression "over 1").
(c) Recalculate the value of ** using the modified but equivalent form from part (b). What result do you see this time? Can you explain what may be going on?
(d) Find other numerical expressions that produce an incorrectly displayed result on your calculator! Post these in the comments section pls!

The end of 2008 and the beginning of a new year have ushered in many excellent posts from some of the top math bloggers out there.
1. I plead guilty to an error of omission -- not contributing to the 46th Carnival of Mathematics over at Mike's Walking Randomly. Then again I've missed the last several so I need to make another New Year's "Re-Solution"! Mike did an excellent job of putting together the last Carnival of the year. In particular, he featured his own choices for articles for each month of the year, introducing readers to some excellent sites. Great job, Mike.
2. Naturally, the new calendar year has sparked a flurry of posts about the number 2009:
(i) The least "interesting" such post, "Get Ready for Happy 41*7^2" was probably mine.
(ii) This was followed in quick order by Mike again with his "What is interesting about the number 2009?" post over at Walking Randomly. Mike suggested representing 2009 as sums of powers leading to extensions from some excellent commenters, Sol in particular.
(iii) 360's "The number 2009" post adds a different perspective to this game -- some clever identities involving sums of fractions, not to mention some demographic info about the 2009th largest city and using extrapolation to surmise the 2009th richest person in the world!
(iv) Denise has followed her "annual" tradition of a number game with the "2009 Mathematics Game," challenging her readers and students everywhere to represent as many as possible of the integers from 1 to 100 using only the digits 2, 0, 0 and 9 and standard arithmetic operations (which she clearly defines) and grouping symbols. I particularly like her use of the convention 0^0 = 1, a controversial definition to say the least. This game is addictive and will keep Denise's readers busy for the next 12 months or so!
(v) Other 2009 posts and curiosities I've omitted?? Let me know in the comments...
3. Speaking of challenges, I asked my readers to solve a silly little riddle: "What do you call solving an equation twice on Jan 1st?" We had three "first responders" so I will close the contest down now and announce our winners in the order in which I rec'd their email solutions. By the way the answer can be found "hidden" near the top of this post! And the winners are... SEAN HENDERSON (and his wife!)
4. Finally, I discovered by accident that MathNotations and several of the math blogs I enjoy reading are featured in a new aggregator of sorts, Alltop, All the top Math News.
The developers liken it to a virtual magazine rack, in which the titles of the latest 5 posts or articles from the selected web sites are listed (you can see more detail by rolling over the titles). I am finding it useful for getting current information from some sites I had not seen before. We'll see where this goes. Ars Mathematica, Sol's Wild About Math and MathNotations are all ranked in the Top Ten whatever the significance of that ranking may be...
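Two quick computational footnotes to earlier posts. First, returning to the five-card game described a few posts above (George drawing with replacement until he hits a marked card): the sketch below is an illustrative Python check of the stated partial results, computing the exact distribution of the draw on which the game ends.

    from fractions import Fraction

    # After k distinct cards have been marked, the next draw ends the game
    # with probability k/5 and continues with probability (5 - k)/5.
    p_end = {}
    p_continue = Fraction(1)   # probability the game is still in progress
    marked = 1                 # exactly one card is marked after draw 1
    for draw in range(2, 7):
        p_end[draw] = p_continue * Fraction(marked, 5)
        p_continue *= Fraction(5 - marked, 5)
        marked += 1

    print(p_end[3])                                  # 8/25, i.e. 32%
    expected = sum(d * p for d, p in p_end.items())
    print(expected, float(expected))                 # 2194/625 = 3.5104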
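Second, the "calculator displays zero" computation with R = 2.0916 x 10^7: since the exact expression appeared as an image in the original post, the sketch below uses sqrt(R^2 + 1) - R purely as a stand-in that exhibits the same loss of significance, and then rewrites it with the conjugate as the post suggests.

    from math import sqrt

    R = 2.0916e7

    direct = sqrt(R**2 + 1) - R           # subtracting nearly equal numbers
    conjugate = 1 / (sqrt(R**2 + 1) + R)  # algebraically identical form

    print(direct)     # only a few significant bits survive the cancellation;
                      # a 10-14 digit graphing calculator typically shows 0
    print(conjugate)  # about 2.3905e-08, accurate to full precision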
{"url":"http://mathnotations.blogspot.com/2009_01_01_archive.html","timestamp":"2014-04-16T11:00:43Z","content_type":null,"content_length":"277459","record_id":"<urn:uuid:6ac4c84d-0869-4d76-9d34-72e41f98ff41>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: Write the sum or difference in the standard form a + bi. (4 – 2i) + (9 + 6i)
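No answer was posted on the original page; as a quick worked check, combine the real parts and the imaginary parts separately:

    (4 - 2i) + (9 + 6i) = (4 + 9) + (-2 + 6)i = 13 + 4i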
{"url":"http://openstudy.com/updates/505368bae4b02986d3703c3c","timestamp":"2014-04-17T00:55:15Z","content_type":null,"content_length":"46382","record_id":"<urn:uuid:1fb62eb4-0836-4555-9dd8-15eb426a8aa3>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US7317752 - Method and system for locating a GPS correlated peak signal

1. Technical Field
The present invention relates generally to a global positioning system (GPS), in particular, a receiver for use in a GPS and a method thereof.

2. Discussion of Related Art
A Global Positioning System (GPS) receiver determines its position by computing the distance from and relative times of arrival of signals transmitted simultaneously from a number of GPS satellites. These satellites transmit, as part of their message, both satellite positioning data including pseudo-random codes as well as data on clock timing. Using the received pseudo-random codes, the GPS receiver determines pseudoranges to the various GPS satellites, and computes the position of the receiver using these pseudoranges and satellite timing and data on clock timing. The pseudoranges are time delay values measured between the received signals from each satellite and a local clock signal. Usually GPS signals from four or more satellites are received. The satellite data on clock timing and signature data is extracted from the GPS signals once a satellite is acquired and tracked. Acquiring GPS signals can take up to several seconds and must be accomplished with a sufficiently strong received signal to achieve low error rates.

GPS signals contain high rate repetitive signals called pseudorandom (PN) codes. The codes available for civilian applications are called C/A (coarse/acquisition) codes, and have a binary phase-reversal rate, or "chipping" rate, of 1.023 MHz and a repetition period of 1023 chips for a code period of 1 millisecond. The code sequences belong to a family known as Gold codes, and each GPS satellite broadcasts a signal with a unique Gold code.

Most GPS receivers use correlation methods to compute pseudoranges. A correlator multiplies the received signal by a stored replica of the appropriate Gold code contained within its local memory, and then integrates the product to obtain a correlation or sampling value, which is used as an indication of the presence of the satellite signal. By sequentially adjusting the relative timing of this stored replica relative to the received signal, and observing the correlation output, the receiver can determine the time delay between the received signal and a local clock. The initial determination of the presence of such an output is termed "acquisition." Once acquisition occurs, the process enters the "tracking" phase in which the timing of the local reference is adjusted in small amounts to maintain a high correlation output.

Global Position Satellite Systems utilize a multiplicity of satellites (constellation) to simultaneously transmit signals to a receiver to permit position location of the receiver by measurement of time-differences of arrival between these multiple signals. In general, the signals from the different satellites do not significantly interfere with one another, since they utilize different pseudorandom spreading codes that are nearly orthogonal to one another. This low interference condition depends upon the power levels (amplitudes) of the received signals being similar to one another.

To reduce acquisition time, a GPS receiver uses several channels to handle signals that may come from several satellites. Each channel includes multi-correlation taps for use in the correlation operations. Typically, the data received at each correlation tap is stored in a memory. The stored data is processed and correlated.
The size of the memory is proportional to the number of channels and taps. To reduce acquisition time, memory having sufficient capacity and speed is needed. However, as the memory component ratio in the GPS receiver increases, it becomes more difficult to miniaturize the GPS receiver.

FIG. 1 shows a block diagram of a conventional GPS receiver having an antenna 1, a down converter 2, a local oscillator 3, an A/D converter 4, receiver channels 5, a receiver processor 6, a navigation processor 7, and a user interface 8. In operation, the antenna 1 receives signals through the air transmitted from a constellation of satellites. The down converter 2 converts the high frequency signal received at the antenna 1 to a lower intermediate frequency (IF) signal by mixing the signals with a local oscillation signal generated by the local oscillator 3. The A/D converter 4 converts the analog IF signals to digital signals for processing by the receiver channels 5. The IF signals received at the receiver channels 5 are processed by the receiver channels 5, the receiver processor 6 and the navigation processor 7. The receiver channels 5 have N channels and the N channels can be set by a manufacturer. The primary functions of the receiver processor 6 include generating a plurality of pseudoranges for each satellite and performing the correlation operation with the in-phase (I) and quadrature-phase (Q) data of each channel. The navigation processor 7 sets a position value using different pseudoranges for the different satellites. The user interface 8 is used to display the position data.

FIG. 2 shows a block diagram of one of the N channels in receiver channels 5 of FIG. 1. The digital IF signals received from the A/D converter 4 of FIG. 1 are fed to in-phase/quadrature-phase multipliers 10 wherein the IF signals are multiplied with signals generated by in-phase sine map 11 and quadrature-phase cosine map 12, or quadrature-phase sine map 11 and in-phase cosine map 12, each of which is in turn generated by a Numerical Code Oscillator (NCO) 19. The output of the in-phase/quadrature-phase multipliers 10 are in-phase IF signals corresponding to the phase of sine map 11 and quadrature-phase IF signals corresponding to the phase of cosine map 12, or the output of the in-phase/quadrature-phase multipliers 10 are quadrature-phase IF signals corresponding to the phase of sine map 11 and in-phase IF signals corresponding to the phase of cosine map 12. The receiver processor 6 generates the numerical code for controlling the NCO 19 for generating a Doppler frequency. The receiver processor 6 also generates a clock control signal input to code NCO 18 for interlocking the PN code generator 16. Pseudo-random codes associated with the satellites are generated by the PN code generator 16. The PN codes are shifted by code shifters 17 and output to a plurality of correlators 13. Correlation is performed using the correlators 13 by comparing the phase shifted PN codes to the I and Q data received from the in-phase/quadrature-phase multipliers 10. The correlated I and Q data from the correlators 13 are output to an integrator 14 wherein the correlated I and Q values are integrated. The integrated values, also known as sampling values, are stored in memory 15. Typically, each channel of the N channels of receiver channels 5 stores in memory 15 all sampling values sampled by the integrator 14 for a given duration, such as 1 millisecond for each tap.
Upon collection of a predetermined number of samples, the sampled values are forwarded to an FFT unit 20 wherein fast Fourier transform is performed to determine if a peak (correlation) exists for this tap. If a peak is found, the receiver processor 6 extracts the frequency and code value information from the tap to calculate pseudoranges for acquisition. If it is determined that a peak does not exist in the sampled tap, the sampling, correlation, and FFT processing is repeated for each tap until the peak tap is located. It can be seen from this process that a large amount of data need to be stored in the receiver memory 15. Thus, memory having sufficient capacity is needed. Further, because of the need to access memory data for processing, memory access time is an important factor affecting acquisition speed and thus performance of the receiver. A global positioning system (GPS) receiver is provided, comprising a converter for converting received GPS signals to in-phase (I) and quadrature-phase (Q) digital signals; a correlator for generating expected codes and correlating the I and Q digital signals with the expected codes to output sampled I values and sampled Q values for a tap; a filter for filtering the sampled I values and sampled Q values to modified I values and modified Q values, and for summing the modified I values and modified Q values to output variation data; a memory for storing the variation data; a domain transformer for performing domain transform on the variation data to output a transformed value; and a comparator for comparing the transformed value to a threshold value for determining the presence of a peak at the tap. The sampled I values and sampled Q values are modified by assigning a positive value to the sampled I value or sampled Q value when a present sample I value or Q value has a different sign from the immediately prior sample I value or sample Q value. Preferably, the modified I values and modified Q values are fractional reductions of respective sampled I values and sampled Q values, the fractional reduction being the same for both the sampled I values and the sampled Q values, wherein the fractional reduction is one half. According to an aspect of the invention, the filter includes a pair of delay elements and a pair of single bit comparators, wherein the delay elements delay a sign bit of the sampled I value and the sampled Q value to output a prior sign value, and the single bit comparators compare a sign of the present sampled Q value with the prior sign value to provide a positive output if the present and the prior sign values are different, wherein the filter further includes an adder for performing the summing operation on the modified I value with the modified Q value, including the sign bits. Preferably, the domain transformer is a Fast Fourier Transformer. The memory further stores the sampled I and Q values of the tap identified as having a peak, wherein the memory is one of a SRAM and a DRAM. 
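To make the correlation-based acquisition described in this background concrete, here is a small, self-contained Python illustration. It is not the patented method and not real GPS signal processing, just a toy model of searching code phases for a correlation peak, with made-up parameters (a random +/-1 code standing in for a C/A code, and a noise level chosen for clarity).

    import numpy as np

    rng = np.random.default_rng(0)

    code = rng.choice([-1.0, 1.0], size=1023)   # stand-in for a 1023-chip code
    true_delay = 417                            # shift unknown to the receiver
    received = np.roll(code, true_delay) + 0.5 * rng.standard_normal(1023)

    # Sweep every candidate code phase and keep the correlation values;
    # the peak identifies the delay (and hence the pseudorange).
    correlations = [float(np.dot(received, np.roll(code, d))) for d in range(1023)]
    print(int(np.argmax(correlations)))         # recovers the true delay (417)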
According to another aspect of the invention, a global positioning system (GPS) receiver is provided, comprising a converter for converting received GPS signals of a tap to in-phase (I) and quadrature-phase (Q) digital signals; a correlator for correlating the I and Q digital signals with expected codes to output sampled I values and sampled Q values, each of the sampled I values and sampled Q values having a sign bit for signifying direction; a filter for filtering at least the sign bits of the sampled I values and sampled Q values and determining whether a potential peak exists at the tap based on the number of change in directions in the sign bits of the sampled I values and sampled Q values; a domain transformer for performing domain transform on data derived from the sampled I values and sampled Q values of the tap determined to have a potential peak and outputting a transformed value; and a comparator for comparing the transformed value to a threshold value for determining the presence of a peak at the tap. A memory is provided for storing the data derived from the sampled I values and sampled Q values of the tap determined to have a potential peak, wherein the memory is one of a SRAM and a DRAM. Preferably, the data is derived from the sampled I values and sampled Q values by adding sign-modified I values to sign-modified Q values, wherein the sampled I values and sampled Q values are filtered by assigning a positive value to the sampled I value or sampled Q value when a present sample I value or Q value has a different sign from the immediately prior sample I value or sample Q value. The filtered I values and filtered Q values can be fractional reductions of respective sampled I values and sampled Q values, the fractional reduction being the same for both the sampled I values and the sampled Q values. The filter includes a pair of delay elements and a pair of single bit comparators, wherein the delay elements delay a sign bit of the sampled I value and the sampled Q value to output a prior sign value, and the single bit comparators compare a sign of the present sampled Q value with the prior sign value to provide a positive output if the present and the prior sign values are different. A method for processing global positioning system (GPS) signals is also provided for determining position, comprising of a receiving GPS signals from one or more satellites; converting the received GPS signals to in-phase (I) and quadrature-phase (Q) digital signals of a tap; generating expected codes and correlating the I and Q digital signals with the expected codes to output sampled I values and sampled Q values; filtering the sampled I values and sampled Q values to modified I values and modified Q values, and summing the modified I values and modified Q values to output variation data; storing in memory the variation data; performing domain transform on the variation data to output a transformed value; and comparing the transformed value to a threshold value for determining the presence of a peak at the tap. The sampled I values and sampled Q values can be modified by assigning a negative value to the sampled I value or sampled Q value when a present sampled I value or Q value has a different sign from the immediately prior sampled I value or sampled Q value. 
The modified I values and modified Q values can be fractional reductions of respective sampled I values and sampled Q values, the fractional reduction being the same for both the sampled I values and the sampled Q values, wherein the fractional reduction is one half. The step of filtering can include delaying a sign bit of the sampled I value and the sampled Q value to output a prior sign value, and comparing a sign of the present sampled Q value with the prior sign value to provide a negative output if the present and the prior sign values are different, and further includes summing the modified I value with the modified Q value, including the sign bits. The method further includes storing in the memory the sampled I and Q values of the tap identified as having a peak and discarding sampled I and Q values of other taps. Another method for processing global positioning system (GPS) signals is provided, comprising of a converting received GPS signals of a tap to in-phase (I) and quadrature-phase (Q) digital signals; correlating the I and Q digital signals with expected codes to output sampled I values and sampled Q values, each of the sampled I values and sampled Q values having a sign bit for signifying direction; filtering at least the sign bits of the sampled I values and sampled Q values and determining whether a potential peak exists at the tap based on the number of change in directions in the sign bits of the sampled I values and sampled Q values; performing domain transform on data derived from the sampled I values and sampled Q values of the tap determined to have a potential peak and outputting a transformed value; and comparing the transformed value to a threshold value for determining the presence of a peak at the tap. The method preferably further includes storing in a memory the data derived from the sampled I values and sampled Q values of the tap determined to have a potential peak. According to still another aspect of the invention, a stored program device having stored codes executable by a processor to perform method steps for processing GPS signals is also provided, the method comprising of a correlating I and Q digital signals with expected codes to output sampled I values and sampled Q values, each of the sampled I values and sampled Q values having a sign bit for signifying direction; filtering at least the sign bits of the sampled I values and sampled Q values and determining whether a potential peak exists at the tap based on the number of change in directions in the sign bits of the sampled I values and sampled Q values; performing domain transform on data derived from the sampled I values and sampled Q values of the tap determined to have a potential peak and outputting a transformed value; and comparing the transformed value to a threshold value for determining the presence of a peak at the tap. The embodiments of the present invention will become more apparent when detail description of embodiments are read with reference to the accompanying drawings in which: FIG. 1 shows a block diagram of a conventional GPS receiver; FIG. 2 shows a block diagram of one of the N channels in receiver channels 5 of FIG. 1; FIG. 3 shows a block diagram of a GPS receiver according to an embodiment of the present invention; FIG. 4 shows an exemplary implementation of filter 30 of FIG. 3; FIG. 5 shows another exemplary implementation of filter 30 of FIG. 3; FIG. 6 shows still another exemplary implementation of the filter 30 of FIG. 3; FIG. 
7 shows a plot of the 16 sets of I and Q sampled values listed in Table I. FIG. 8 shows a plot of 16 sets of I and Q sampled values listed in Table II. FIG. 9 is a flow diagram of a method of processing GPS signals according to an embodiment of the present invention; FIG. 10 is a flow diagram of a method of processing GPS signals according to an embodiment of the present invention; FIG. 11 is a flow diagram of a method of processing GPS signals according to an embodiment of the present invention; and FIG. 12 is a graph showing plots of the fractional variation values extracted from Tables I to III for a non-peak tap (Table 1) and a peak tap (Table III). Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. It should be noted that like reference numerals are used for designation of like or equivalent parts or portion for simplicity of illustration and explanation. FIG. 3 shows a block diagram of a GPS receiver according to an embodiment of the present invention. The components of the receiver shown in FIG. 3, other than a filter 30, perform the functions described above for the components of FIG. 2. The filter 30 is configured to receive the sampling I and Q values output from the integrator 14. According to at least one embodiment of the present invention, the filter 30 modifies the sampled I and Q values so that a data set that is reduced from the sampled values are chosen for storage in memory 15. According to another embodiment of the present invention, the filter 30 extracts the correlation characteristics of the sampled I and Q values and selectively stores the I and Q values or modified I and Q values based on a screening process. The I and Q sampling values of a tap determined to not have a peak are discarded and they are not stored in memory. Processing of the stored data by the FFT unit 20 to determine the existence of a peak tap is more efficient because of the reduced data set and the capacity requirement of the memory 15 is also reduced, thereby reducing power consumption and the physical size of memory 15. FIG. 4 shows an exemplary implementation of the filter 30 of FIG. 3. The sampled I and Q values output from integrator 14 is input to a pair of delay elements 23, 24 and sign bit comparators 25, 26. For purposes of illustrating embodiments of the present invention, the sampled I and Q values are selected to be 16 bits, the duration of sampling is selected to be 1 millisecond, and each sample frame is chosen to be 16 samples. It is to be appreciated that different bit numbers, sampling durations, and sampling frame can be used without departing from the present invention. As shown in FIG. 4, each of 16 bits of data representing sampled I and Q values plus one sign bit is input to one of n-taps of the filter 30. The circuit of tap 0 is shown in FIG. 4. The sign bit is input to the delay element 23 which delays for one clock period the sign bit before it is entered into the sign bit comparator 25. The sign bit comparator 25 facilitates the comparison of the sign values of the prior sampled I data with the present sampled I data. The sign bit comparator 25 outputs a logic 0, signifying a positive number, if the present sign bit is different from the prior sign bit. The delay element 24 and the sign bit comparator 26 perform the above described functions for the sampled Q data. 
Thus, the sampled I and Q data are modified in their sign (or direction) depending on the direction of the sampling data with respect to time. The modified I and Q data are input to an accumulator 27 wherein the modified I and Q data including their sign bits are added. The accumulated data is ‘variation data’. According to the present embodiment, the 16 variation data accumulated from the modified I and Q values are output for storage in memory 15. The stored data is then used by the FFT unit 20 to perform a Fourier transform for determining whether an actual peak exist in this tap. The sign bit comparators 25 and 26 are preferably implemented by using an exclusive nor (xnor) logic. It is to be appreciated that the sign bit comparators can also be implemented using an exclusive or (xor) logic and in such embodiment the comparison will result in a negative value (logic 1). When the present sample value and the prior sample value have same signs with using the xnor logic or when the present sample value and the prior sample value have different signs with using the xor logic, the counter 28 counts the number of logic 1. If a peak is not formed for this tap, the above described process is repeated for the next tap. FIG. 5 shows another exemplary implementation of the filter 30 of FIG. 3 according to an alternative embodiment of the present invention. Referring to FIG. 5, whenever a negative value result from the accumulation of the modified I and Q values in the accumulator 27, a logic one signal is output to a counter 28 to increment a count for this tap. The counter 28 is reset to zero at the beginning of data sampling for each tap. Upon completion of a sampled frame, e.g. 16 samples, the final count is compared against a preset threshold in the logic circuit 29. If the count value exceeds the preset threshold, for example, 12 out of 16, data from tap 0 is considered as a potential peak. In such case, the sampled I and Q values of the tap in question is stored in memory 15. The stored data is then processed by the FFT unit 20 and the receiver processor 6 to determine if a peak exists in this tap. If the count value for any particular tap does not exceed the preset threshold, the sampled I and Q values, the modified I and Q data, and the variation data are not stored in the memory 15. These data can be discarded. FIG. 6 shows still another exemplary implementation of the filter 30 of FIG. 3 according to an alternative embodiment of the present invention. Referring to FIG. 6, upon indication of a count value exceeding the present threshold, as determined by the logic 29, the variation data output from the accumulator 27 is output to be stored in memory 15 instead of the sampled I and Q values. According to this embodiment, the variation data of a potential peak tap is stored and processed by the FFT unit 20 and the receiver processor 6. Thus, the dataset stored in memory 15 is a further reduction from the dataset of the sampled I and Q values output from the integrator 14. Table 1 lists exemplary data received from a tap and the processing of the data by the filter 30 according to an embodiment of the present invention. TABLE 1 The variation value generation table in the case a peak tap does not exist. 
Sample | I (same phase sample value) | Q (quadrature phase sample value) | I + Q | I′ (modified value) | Q′ (modified value) | Variation Value (I′ + Q′) | Count value (sign bit of I′ + Q′)
1 | 174 | −6 | 168 | −174 | −6 | −180 | 1
2 | −214 | 280 | 66 | 214 | 280 | 494 | 0
3 | 360 | −88 | 272 | 360 | 88 | 448 | 0
4 | −297 | 154 | −143 | 297 | 154 | 451 | 0
5 | 353 | 43 | 396 | 353 | −43 | 310 | 0
6 | −84 | 289 | 205 | 84 | −289 | −205 | 1
7 | −95 | −255 | −350 | −95 | 255 | 160 | 0
8 | −4 | −172 | −176 | −4 | −172 | −176 | 1
10 | −11 | −267 | −278 | 11 | 267 | 278 | 0
11 | −267 | −19 | −286 | −267 | −19 | −286 | 1
12 | −44 | −152 | −196 | −44 | −152 | −196 | 1
14 | −346 | 21 | −325 | 346 | −21 | 325 | 0
15 | −167 | −24 | −188 | −167 | 24 | −143 | 1
16 | 20 | −276 | −256 | 20 | −276 | −256 | 1
Number of 1's: 7
(Rows 9 and 13 are missing from the source text.)

In Table 1, 16 samples of sampled I and Q values output from the integrator 14 and received at the filter 30 are shown in columns I and Q. The modified I and Q values are shown in columns I′ and Q′, respectively. As shown therein, each sample value is assigned a positive sign when there is a change in sign between the present sample value and the prior sample value of I and Q. The I and Q values are modified by the delay element 23 and the sign comparator 25 for the sampled I values, and by the delay element 24 and sign comparator 26 for the Q sample values. The modified I and Q values, I′ and Q′, are added by accumulator 27 to output the variation value. This sum is shown in Table 1 in the column labeled Variation Value (I′+Q′). The sum operation in accumulator 27 adds the magnitudes of I′ and Q′, taking into account the sign of both I′ and Q′ values. For each occurrence of a negative value output from the accumulator 27, a transition is sent to the counter 28 to increment the counter value. As shown in the count value column of Table 1, the count value for the data from this tap is 7 out of a frame of 16 samples. This signifies that there were 7 negative values from the summation of the modified I′ and Q′ sample values.

Table 1 shows data for a tap which does not have a peak. One of ordinary skill in the art recognizes that for the existence of a peak in a tap, the sampled I and Q values will appear as two clusters, one for the sampled I values and one for the sampled Q values. FIG. 7 shows both sampled I and Q values swinging in different directions around the zero axis. One viewing the plot shown in FIG. 7 would recognize that the tap under question does not have a peak. According to embodiments of the present invention and as shown in Table 1, the count value is a measure of the number of changes in direction of the sampled I and Q values across the zero axis. Thus, a count of seven (7) out of a sixteen (16) sample dataset can be construed as a dataset having data points which swing in different directions around the zero axis and are not clustered away from the zero axis. From the count value of 7 out of 16, the tap can be dismissed as one not having a peak.
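The sign-change filtering just described is easy to model in software. The sketch below is only an illustration of the rule in the text (it is not the FIG. 4 circuit); run on the first eight samples of Table 1, it reproduces the variation values shown above.

    def sign_filter(samples):
        # A sample keeps its magnitude; its sign is made positive when the
        # sample's sign differs from the previous sample's sign, and negative
        # otherwise (the first sample is treated as "no change").
        out, prev_nonneg = [], None
        for x in samples:
            nonneg = x >= 0
            changed = prev_nonneg is not None and nonneg != prev_nonneg
            out.append(abs(x) if changed else -abs(x))
            prev_nonneg = nonneg
        return out

    # First eight samples of Table 1 (a tap without a peak):
    I = [174, -214, 360, -297, 353, -84, -95, -4]
    Q = [-6, 280, -88, 154, 43, 289, -255, -172]

    variation = [a + b for a, b in zip(sign_filter(I), sign_filter(Q))]
    print(variation)                      # [-180, 494, 448, 451, 310, -205, 160, -176]
    print(sum(v < 0 for v in variation))  # 3 negative values among these eight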
TABLE II
Sample | I (same phase sample value) | Q (quadrature phase sample value) | I + Q | I′ (modified value) | Q′ (modified value) | Variation Value (I′ + Q′) | Count value (sign bit of I′ + Q′)
1 | 11000 | 389 | 11389 | −11000 | −389 | −11389 | 1
2 | 11100 | −363 | 10737 | −11100 | 363 | −10737 | 1
3 | 11300 | −717 | 10583 | −11300 | −717 | −12017 | 1
4 | 10900 | −1670 | 9230 | −10900 | −1670 | −12570 | 1
5 | 10800 | −2620 | 8180 | −10800 | −2620 | −13420 | 1
6 | 10700 | −2990 | 7710 | −10700 | −2990 | −13690 | 1
7 | 10500 | −3440 | 7060 | −10500 | −3440 | −13940 | 1
8 | 10200 | −4440 | 5760 | −10200 | −4440 | −14640 | 1
9 | 9920 | −5300 | 4620 | −9920 | −5300 | −15220 | 1
10 | 9790 | −5220 | 4570 | −9790 | −5220 | −15010 | 1
11 | 9240 | −6000 | 3240 | −9240 | −6000 | −15240 | 1
12 | 8670 | −6910 | 1760 | −8670 | −6910 | −15580 | 1
13 | 8200 | −7410 | 740 | −8200 | −7410 | −15610 | 1
14 | 8070 | −7460 | 610 | −8070 | −7460 | −15530 | 1
15 | 7240 | −8120 | −880 | −7240 | −8120 | −15360 | 1
16 | 6590 | −8870 | −2280 | −6590 | −8870 | −15460 | 1
Number of 1's: 16

Table II shows sampled I and Q values from a tap having a peak. As can be seen from Table II, the sampled I and Q values are clustered largely in the same direction throughout the 16 samples. It can also be seen that the modified I and Q sampled values (I′ and Q′), as modified by the filter shown in FIG. 4, have a negative sign essentially throughout the 16 samples for both I′ and Q′, signaling little or no change in direction of the I and Q sample values. Therefore, the output of the accumulator 27 (I′+Q′) results in a larger negative value that is clustered well away from the zero axis. Since each variation value (I′+Q′) is negative, the counter is incremented 16 times to result in a count value of 16. This is recognized as a tap having a peak. FIG. 8 shows a plot of the sampled I and Q values listed in Table II. It can be seen that there are two clusters of data, one for the I sampled values and the other for the Q sampled values.

According to an embodiment of the present invention, to determine whether a peak exists at a particular tap, the variation values (I′+Q′) are stored in memory 15, and the stored variation values are processed by the FFT unit 20 to search for the existence of a peak. Existence of a peak can be determined by comparing the variation values transformed by the FFT unit 20 against a value predetermined to define the existence of a peak. When a peak is determined to be present at a tap, the frequency, code values, and phase offsets are extracted from the sampled I and Q values and pseudoranges are calculated. Alternatively, to further reduce the dataset, the sampled I and Q values are reduced by a fractional multiplier, such as by ½, ¼, etc., prior to their filtering and processing by the filter 30 and the FFT unit 20. The multiplier (not shown) can be part of the filter 30 or disposed between the integrator 14 and the filter 30.

FIG. 9 illustrates the process flow of data received from a tap to determine the existence of a peak according to an embodiment of the present invention. As shown, the receiver according to this embodiment of the present invention receives I and Q values at a tap at step 71. N samples of integrated correlation values (sampled values) are output to the filter 30 at step 72. According to the illustrative embodiment, N is equal to 16 and the duration of integration is 1 millisecond. The sampled I and Q values are received at the filter 30 at step 73. The sampled I and Q values are modified to have a positive value when there is a change in the sign from the prior sample value to the present sample value at step 74. The modified I and Q values are added by the accumulator 27 at step 75.
Upon reaching N samples of I and Q value pairs, at step 76, the accumulated modified I and Q values (variation values) are stored in memory 15 (step 77). The stored data is processed by the FFT unit 20 at step 78 and the FFT transformed value is compared against a given threshold to determine whether the maximum value is a peak value (step 79). Then, the I and Q values are stored when the value is maximum for phase offset of the code NCO 18 (step 80). Upon determination that a peak exists at step 81, the navigation processor 7 calculates pseudoranges, phase offset, etc. at step 83 . When a peak does not exist at step 81, the process returns to step 71 to determine a next searching frequency and code delay value (step 82). According to another embodiment of the present invention, the count value from the counter 28 of FIG. 5 and count values shown in Tables I and II are used to determine the existence of a peak at the corresponding tap. When a peak exists at a tap, a count value will be close to the number of samples and of the sampled I and Q values received. In this embodiment, the count value of a peak tap should approach 16. Thus, a threshold of, for example, 14 can be set and if the count value exceeds 14, decision is made that a peak exists at the present tap. According to this embodiment, sampled I and Q values are stored in memory 15 for processing. The sample I and Q values of taps found to not have a count value exceeding the threshold is determined to not have a peak and the corresponding I and Q sampled values are not stored in memory 15. These I and Q sampled values are not used for the acquisition operation and they are discarded. FIG. 10 shows the exemplary process flow according to this embodiment. As shown, the receiver receives I and Q values of a tap at step 91. N samples of integrated correlation values (sampled values) output to the filter 30. The sampled I and Q values are received at the filter 30 at step 93. The sampled I and Q values are modified to have a positive value when there is a change in the sign from the prior sample value to the present sample value at step 94. The modified I and Q values are added at the accumulator 27 at step 95. Upon reaching N samples of I and Q value pairs, at step 96, the counter value is compared against a preset threshold in logic 29 at step 97. If the count value equals or exceeds the preset threshold, the tap in question is considered a tap which potentially has a peak. In such instance, the sampled I and Q values are stored in memory 15 (step 92). The stored data is processed by the FFT unit 20 at step 78 and the FFT transformed value is compared against a given peak threshold to determine whether a peak exists at the tap (step 99). Upon determination that a peak exists at step 110, post processing to calculate pseudoranges, phase offset, etc. is performed at step 120. When a peak does not exist at step 110, the process is returned to step 91 to determine a next searching frequency and a code delay value (step 130). FIG. 11 shows the exemplary process flow according to this embodiment. As shown, the receiver according this embodiment of the present invention receives I and Q values at a tap at step 211. N samples of integrated correlation values (sampled values) are output to the filter 15 at step 212. According to the illustrative embodiment, N is equal to 16 and the duration of integration is 1 millisecond. The sampled I and Q values are received at the filter 30 at step 213. 
The sampled I and Q values are modified to have a positive value when there is a change in the sign from the prior sample value to the present sample value at step 214. The modified I and Q values are added by the accumulator 27 at step 215. Upon reaching N samples of I and Q value pairs, at step 216, the counter value is compared against a preset threshold in logic 29 at step 217. If the count value equals or exceeds the preset threshold, the tap in question is considered a tap which potentially has a peak. In such instance, the accumulated I and Q values are stored in memory 15 (step 218). The stored data is processed by the FFT unit 20 at step 219 and the FFT transformed value is compared against a given peak threshold to determine whether a peak exists at the tap (step 220). Then, the I and Q values are stored when the value is maximum for the phase offset of the code NCO 18 (step 221). Upon determination that a peak exists at step 222, post processing to calculate pseudoranges, phase offset, etc. is performed at step 223. When a peak does not exist at step 222, a next searching frequency and a code delay value are determined (step 224), and the process returns to step 211.

According to an alternative embodiment of the present invention, using the filter 30 and the counter 28 as described above, the count value is used to determine the existence of a potential peak at the tap in question. Upon determination that the tap is a potential peak, instead of storing the sampled I and Q values as in the prior embodiment, the variation values (I′+Q′) are stored in the memory 15. The stored data is then processed by the FFT unit 20 to determine whether a peak exists at the tap. According to this embodiment, the sampled I and Q values and variation values (I′+Q′) of taps determined as not having a potential peak are not stored in memory 15 and these data are not processed. The memory 15 is a semiconductor memory, preferably one of a SRAM and DRAM.

To further reduce the dataset to be stored in memory 15, the sampled I and Q values can be reduced by multiplying with a fraction, such as ½, ¼, etc., prior to their processing by the filter 30. The multiplier/shifter (not shown) can be implemented before the values are entered into the accumulator 27 of FIG. 4. Table III shows the sampled I and Q values, the fractional modified values I′/2 and Q′/2, the variation values (I′/2 + Q′/2), and the count values of a tap having a peak.

TABLE III
Sample | I (same phase sample value) | Q (quadrature phase sample value) | I′/2 (comparison value) | Q′/2 (comparison value) | Variation value (I′/2 + Q′/2) | Count value (sign bit of I′/2 + Q′/2)
1 | 11000 | 389 | −5500 | −194.5 | −5694.5 | 1
2 | 11100 | −363 | −5550 | 181.5 | −5368.5 | 1
3 | 11300 | −717 | −5650 | −358.5 | −6008.5 | 1
4 | 10900 | −1670 | −5450 | −835 | −6285 | 1
5 | 10800 | −2620 | −5400 | −1310 | −6710 | 1
6 | 10700 | −2990 | −5350 | −1495 | −6845 | 1
7 | 10500 | −3440 | −5250 | −1720 | −6970 | 1
8 | 10200 | −4440 | −5100 | −2220 | −7320 | 1
9 | 9920 | −5300 | −4960 | −2650 | −7610 | 1
10 | 9790 | −5220 | −4895 | −2610 | −7505 | 1
11 | 9240 | −6000 | −4620 | −3000 | −7620 | 1
12 | 8670 | −6910 | −4335 | −3455 | −7790 | 1
13 | 8200 | −7410 | −4100 | −3705 | −7805 | 1
14 | 8070 | −7460 | −4035 | −3730 | −7765 | 1
15 | 7240 | −8120 | −3620 | −4060 | −7680 | 1
16 | 6590 | −8870 | −3295 | −4435 | −7730 | 1
Number of 1's: 16

FIG. 12 is a graph showing plots of the fractional variation values extracted from Tables I to III for a non-peak tap (Table 1) and a peak tap (Table III). It can be seen that the peak tap trend is a cluster well away from the zero axis and the non-peak tap trend has values which swing in different directions around the zero axis.
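As a rough software model of the screening-then-FFT flow just described (FIGS. 10 and 11), the sketch below strings the pieces together. It is only an illustration of the logic in the text, not the patented implementation: the count threshold of 12 out of 16 is the example value mentioned earlier, the peak threshold is an arbitrary placeholder, and numpy's FFT stands in for the FFT unit 20.

    import numpy as np

    def sign_filter(samples):
        # Same rule as the earlier sketch: keep the magnitude, make the sign
        # positive only when it differs from the previous sample's sign.
        out, prev = [], None
        for x in samples:
            nonneg = x >= 0
            out.append(abs(x) if (prev is not None and nonneg != prev) else -abs(x))
            prev = nonneg
        return out

    def screen_and_check(I, Q, count_threshold=12, peak_threshold=1.0e5):
        # Count the negative variation values for one tap; only a tap that
        # passes this screen is stored and handed to the FFT-based peak check.
        variation = np.array([a + b for a, b in zip(sign_filter(I), sign_filter(Q))],
                             dtype=float)
        count = int(np.sum(variation < 0))
        if count < count_threshold:
            return "discarded", count        # not stored and not transformed
        peak = np.abs(np.fft.fft(variation)).max()
        return ("peak" if peak >= peak_threshold else "no peak"), count

    # Table II data (a peak tap): every variation value is negative, so the
    # count screen passes with count = 16 and the FFT check is performed.
    I = [11000, 11100, 11300, 10900, 10800, 10700, 10500, 10200,
         9920, 9790, 9240, 8670, 8200, 8070, 7240, 6590]
    Q = [389, -363, -717, -1670, -2620, -2990, -3440, -4440,
         -5300, -5220, -6000, -6910, -7410, -7460, -8120, -8870]
    print(screen_and_check(I, Q))            # ('peak', 16) with these placeholders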
It is readily appreciated by one of ordinary skill in the art that although the embodiments of the filter of the present invention are shown and described with circuit components, the filter can be implemented by software or by use of a storage device having stored codes executable by a processor, and upon execution of the codes, the filtering functions described above are implemented. The storage device is preferably one of a flash memory and a ROM. Although illustrative embodiments of the present invention have been described herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various other changes and modifications may be effected therein by one skilled in the art without departing from the scope or spirit of the invention.
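Since the filter can equally be realized in software, the following is a minimal sketch of the sign-change modification, accumulation and count-threshold test described above. It is only an illustration of the idea: the function names, the sign-change rule applied independently to I and Q, and the way the count is formed are assumptions made for this sketch, not the patent's specified implementation of filter 30, counter 28 and logic 29.

```python
def sign_bit(x):
    """Return 1 for a negative value, 0 otherwise (a software stand-in for a sign bit)."""
    return 1 if x < 0 else 0

def filter_tap(samples, count_threshold=14):
    """
    samples: N pairs (I, Q) of integrated correlation values for one tap,
             e.g. N = 16 one-millisecond integrations.
    Returns (is_potential_peak, count, variation_values).
    The sign-change rule and the counting scheme are illustrative assumptions.
    """
    variation_values = []
    count = 0
    prev_i, prev_q = samples[0]
    prev_sign = None
    for i, q in samples:
        # Modify a component when its sign differs from the prior sample's sign.
        i_mod = -i if sign_bit(i) != sign_bit(prev_i) else i
        q_mod = -q if sign_bit(q) != sign_bit(prev_q) else q
        variation = i_mod + q_mod                 # accumulated (accumulator 27)
        variation_values.append(variation)
        # Counter: count samples whose variation value keeps the same sign.
        if prev_sign is None or sign_bit(variation) == prev_sign:
            count += 1
        prev_sign = sign_bit(variation)
        prev_i, prev_q = i, q
    # Threshold logic: the tap is a potential peak when the count reaches the threshold.
    return count >= count_threshold, count, variation_values
```

With 16 samples whose variation values stay on one side of zero, the count reaches 16 and the tap is flagged as a potential peak; a noisy tap whose variation values keep changing sign stays below a threshold of, say, 14, and its samples can be discarded rather than stored in memory.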
{"url":"http://www.google.com/patents/US7317752?dq=patent:4807115","timestamp":"2014-04-16T10:58:45Z","content_type":null,"content_length":"145511","record_id":"<urn:uuid:ca83307a-e38e-45b9-8357-021e745558f6>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
pecialization of Results 1 - 10 of 18 - Artificial Intelligence , 2001 "... Since the beginning of AI, mind games have been studied as relevant application fields. Nowadays, some programs are better than human players in most classical games. Their results highlight the efficiency of AI methods that are now quite standard. Such methods are very useful to Go programs, bu ..." Cited by 78 (17 self) Add to MetaCart Since the beginning of AI, mind games have been studied as relevant application fields. Nowadays, some programs are better than human players in most classical games. Their results highlight the efficiency of AI methods that are now quite standard. Such methods are very useful to Go programs, but they do not enable a strong Go program to be built. The problems related to Computer Go require new AI problem solving methods. Given the great number of problems and the diversity of possible solutions, Computer Go is an attractive research domain for AI. Prospective methods of programming the game of Go will probably be of interest in other domains as well. The goal of this paper is to present Computer Go by showing the links between existing studies on Computer Go and different AI related domains: evaluation function, heuristic search, machine learning, automatic knowledge generation, mathematical morphology and cognitive science. In addition, this paper describes both the practical aspects of Go programming, such as program optimization, and various theoretical aspects such as combinatorial game theory, mathematical morphology, and MonteCarlo methods. B. Bouzy T. Cazenave page 2 08/06/01 1. - ACM Computing Surveys , 1996 "... We present an overview of the program transformation methodology, focusing our attention on the so-called `rules + strategies' approach in the case of functional and logic programs. The paper is intended to offer an introduction to the subject. The various techniques we present are illustrated via s ..." Cited by 76 (4 self) Add to MetaCart We present an overview of the program transformation methodology, focusing our attention on the so-called `rules + strategies' approach in the case of functional and logic programs. The paper is intended to offer an introduction to the subject. The various techniques we present are illustrated via simple examples. A preliminary version of this report has been published in: Moller, B., Partsch, H., and Schuman, S. (eds.): Formal Program Development. Lecture Notes in Computer Science 755, Springer Verlag (1993) 263--304. Also published in: ACM Computing Surveys, Vol 28, No. 2, June 1996. 3 1 Introduction The program transformation approach to the development of programs has first been advocated by [Burstall-Darlington 77], although the basic ideas were already presented in previous papers by the same authors [Darlington 72, Burstall-Darlington 75]. In that approach the task of writing a correct and efficient program is realized in two phases: the first phase consists in writing an in... - In Danvy et al , 1996 "... . A partial evaluator, given a program and a known "static" part of its input data, outputs a specialised or residual program in which computations depending only on the static data have been performed in advance. Ideally the partial evaluator would be a "black box" able to extract nontrivial stati ..." Cited by 29 (3 self) Add to MetaCart . 
A partial evaluator, given a program and a known "static" part of its input data, outputs a specialised or residual program in which computations depending only on the static data have been performed in advance. Ideally the partial evaluator would be a "black box" able to extract nontrivial static computations whenever possible; which never fails to terminate; and which always produces residual programs of reasonable size and maximal efficiency, so all possible static computations have been done. Practically speaking, partial evaluators often fall short of this goal; they sometimes loop, sometimes pessimise, and can explode code size. A partial evaluator is analogous to a spirited horse: while impressive results can be obtained when used well, the user must know what he/she is doing. Our thesis is that this knowledge can be communicated to new users of these tools. This paper presents a series of examples, concentrating on a quite broad and on the whole quite successful application ... , 1997 "... Program specialization is a collection of program transformation techniques for improving program efficiency by exploiting some information available at compiletime about the input data. We show that current techniques for program specialization based on partial evaluation do not perform well on non ..." Cited by 25 (14 self) Add to MetaCart Program specialization is a collection of program transformation techniques for improving program efficiency by exploiting some information available at compiletime about the input data. We show that current techniques for program specialization based on partial evaluation do not perform well on nondeterministic logic programs. We then consider a set of transformation rules which extend the ones used for partial evaluation, and we propose a strategy to direct the application of these extended rules so to derive very efficient specialized programs. The efficiency improvements which may even be exponential, are achieved because the derived programs are semi-deterministic and the operations which are performed by the initial programs in different branches of the computation trees, are performed in the specialized programs within single branches. We also make use of mode information to guide the unfolding process and to reduce nondeterminism. To exemplify our technique, we show that we can... - Partial Evaluation, Int'l Seminar, Dagstuhl , 1996 "... . We revisit the main techniques of program transformation which are used in partial evaluation, mixed computation, supercompilation, generalized partial computation, rule-based program derivation, program specialization, compiling control, and the like. We present a methodology which underlines the ..." Cited by 23 (0 self) Add to MetaCart . We revisit the main techniques of program transformation which are used in partial evaluation, mixed computation, supercompilation, generalized partial computation, rule-based program derivation, program specialization, compiling control, and the like. We present a methodology which underlines these techniques as a `common pattern of reasoning' and explains the various correspondences which can be established among them. This methodology consists of three steps: i) symbolic computation, ii) search for regularities, and iii) program extraction. We also discuss some control issues which occur when performing these steps. 
1 Introduction During the past years researchers working in various areas of program transformation, such as partial evaluation, mixed computation, supercompilation, generalized partial computation, rule-based program derivation, program specialization, and compiling control, have been using very similar techniques for the development and derivation of programs. Unfor... - Proc. LoPSTr '96 , 1996 "... We show that sometimes partial deduction produces poor program specializations because of its limited ability in (i) dealing with conjunctions of recursively defined predicates, (ii) combining partial evaluations of alternative computations, and (iii) taking into account unification failures. We pro ..." Cited by 7 (4 self) Add to MetaCart We show that sometimes partial deduction produces poor program specializations because of its limited ability in (i) dealing with conjunctions of recursively defined predicates, (ii) combining partial evaluations of alternative computations, and (iii) taking into account unification failures. We propose to extend the standard partial deduction technique by using versions of the definition rule and the folding rule which allow us to specialize predicates defined by disjunctions of conjunctions of goals. We also consider a case split rule to take into account unification failures. Moreover, in order to perform program specialization via partial deduction in an automatic way, we propose a transformation strategy which takes as parameters suitable substrategies for directing the application of every transformation rule. Finally, we show through two examples that our partial deduction technique is superior to standard partial deduction. The first example refers to the automatic derivation... - Proceedings of the 13th European Conference on Artificial Intelligence (ECAI'98 , 1998 "... . Knowledge about forced moves enables to select a small number of moves from the set of possible moves. It is very important in complex domains where search trees have a large branching factor. Knowing forced moves drastically cuts the search trees. We propose a language and a metaprogram to create ..." Cited by 7 (4 self) Add to MetaCart . Knowledge about forced moves enables to select a small number of moves from the set of possible moves. It is very important in complex domains where search trees have a large branching factor. Knowing forced moves drastically cuts the search trees. We propose a language and a metaprogram to create automatically the knowledge about interesting and forced moves, only given the rules about the direct effects of the moves. We describe the successful application of this metaprogram to the game of Go. It creates rules that give complete sets of forced moves. 1 INTRODUCTION Knowledge about forced moves enables to select a small number of moves from the set of possible moves. It is very important in complex domains where search trees have a large branching factor. Knowing forced moves drastically cuts the search trees. We propose a language and a metaprogram to create automatically the knowledge about interesting and forced moves, only given the rules about the direct effects of the moves. ... - In Proceedings of the 2nd ACM SIGPLAN Workshop on Continuations , 1997 "... We consider the use of continuations for deriving efficient logic programs. It is known that both in the case of functional and logic programming, the introduction of continuations is a valuable technique for transforming old programs into new, more efficient ones. 
However, in general, in order to d ..." Cited by 5 (3 self) Add to MetaCart We consider the use of continuations for deriving efficient logic programs. It is known that both in the case of functional and logic programming, the introduction of continuations is a valuable technique for transforming old programs into new, more efficient ones. However, in general, in order to derive programs with high levels of efficiency, one should introduce continuations according to suitable strategies. In particular, we show that it is preferable to introduce continuations in a flexible way, that is, during the process of program transformation itself, rather than at its beginning or at its end. We extend logic programs by allowing variables to range over goals, and we propose a set of transformation rules for this extended language. We propose a generalization strategy for the introduction of goal variables which may be viewed as continuations and they allow for the derivation of very efficient programs. 1 Introduction Continuation-based program transformations [22] have be... - Computational Logic: Logic Programming and Beyond (Essays in honour of Bob Kowalski, Part I), Lecture Notes in Computer Science 2407 , 2001 "... In a seminal paper [38] Prof. Robert Kowalski advocated the paradigm Algorithm = Logic + Control which was intended to characterize program executions. Here we want to illustrate the corresponding paradigm Program Derivation = Rules + Strategies which is intended to characterize program derivations, ..." Cited by 4 (2 self) Add to MetaCart In a seminal paper [38] Prof. Robert Kowalski advocated the paradigm Algorithm = Logic + Control which was intended to characterize program executions. Here we want to illustrate the corresponding paradigm Program Derivation = Rules + Strategies which is intended to characterize program derivations, rather than executions. During program execution, the Logic component guarantees that the computed results are correct, that is, they are true facts in the intended model of the given program, while the Control component ensures that those facts are derived in an efficient way. Likewise, during program derivation, the Rules component guarantees that the derived programs are correct and the Strategies component ensures that the derived programs are efficient. - Electronic Notes in Theoretical Computer Science 30(2 , 2000 "... We address the problem of specializing a constraint logic program w.r.t. a constrained atom which specifies the context of use of the program. We follow an approach based on transformation rules and strategies. We introduce a novel transformation rule, called contextual constraint replacement, to be ..." Cited by 4 (2 self) Add to MetaCart We address the problem of specializing a constraint logic program w.r.t. a constrained atom which specifies the context of use of the program. We follow an approach based on transformation rules and strategies. We introduce a novel transformation rule, called contextual constraint replacement, to be combined with variants of the traditional unfolding and folding rules. We present a general Partial Evaluation Strategy for automating the application of these rules, and two additional strategies: the Context Propagation Strategy which is instrumental for the application of our contextual constraint replacement rule, and the Invariant Promotion Strategy for taking advantage of invariance properties of the computation. 
We show through some examples the power of our method and we compare it with existing methods for partial deduction of constraint logic programs based on extensions of Lloyd and Shepherdson's approach.
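To make the notion of partial evaluation discussed in the abstracts above concrete, here is a small, generic illustration; it is not taken from any of the cited papers, and the function names are made up for the example. A two-argument power function is specialised with respect to a statically known exponent, so the recursion over the exponent is performed in advance and only the computations depending on the dynamic base remain in the residual program.

```python
def power(x, n):
    """General program: both x and n are dynamic."""
    return 1 if n == 0 else x * power(x, n - 1)

def specialise_power(n):
    """
    Toy partial evaluation of `power` with static exponent n: the recursion
    on n is unfolded at specialisation time, leaving a residual program that
    only multiplies the dynamic argument.
    """
    def residual(x):
        result = 1
        for _ in range(n):      # loop bound fixed at specialisation time
            result *= x
        return result
    return residual

power_3 = specialise_power(3)       # residual program for n = 3
assert power_3(5) == power(5, 3) == 125
```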
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1643348","timestamp":"2014-04-17T16:40:58Z","content_type":null,"content_length":"41334","record_id":"<urn:uuid:99e5dbcf-5cc6-479f-a260-64f57fa8771f>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
Clear Language, Clear Mind I was thinking and doing som reserch about one of my kurent projects: Making an English languaj that is syntakialy limited such that it makes posible automatik translation into lojikal formalism. I stumbled akros som prety interesting artikles listed below: One of them has a website wher one kan find an introduktion to the system. It is aktualy very good and worth reeding. It is a 80 paj powerpoint presentation turned into PDF. The 1944 paper kritikal of Basic English: How Basic Is Basic English? The benefits of using plain languaj ar rather obivus and konkreet. Using non-plain languaj makes komunikation take longer and proseed les optimal. This is mostly just waste of time but somtimes it is a mater of life and deth. My (it is shared) projekt has som on-going diskusion in my forum. However, the languaj i hav in mind is mor similar to formalism than ACE is (the one linked to erlyr). I think that it is too problematik to handle nested konditionals with quantifyrs like: F1. (∀y)(∀xFxy→Gxy)→Fy in sylogistik languaj, i.e., as in sentenses like: S1. “All men are human.” Rather, one needs sentenses that ar harder to understand and les like ordinary English but beter for formalization like: S2. “For any X, if X is a man, then X is a human.” In simple kases, such as the example sentenses with the form: F2. ∀xMx→Hx ther is no need for mor advansed sentense syntax, but in the kase of the formalization F1 ther is need for such sentenses. Let E refer to the proposition expressed by the sentence “Everyone wants beer”, where “everyone” refers to the three people on the right. Let’s call them “a”, “b”, “c” from left to right. Let “Wx” =df “x wants beer”. We could try to show this but let’s just take it intuitively that the following holds: Everyone wants beer ↔ a wants beer and b wants beer and c wants beer. Formalizing we get: E ↔ (Wa∧Wb∧Wc). Now, assume that to begin with, a, b and c do not know whether the two others want beer or not. This is technically ‘left open’ in the comic, but it is not irrelevant. Now, assume every person knows if he wants beer or not, or rather, he either knows that he wants beer, or he knows that he does not want beer. Without this, it doesn’t work either. Introducing “Kx(P)” to mean “x knows that P” and formalizing: Ka(Wa)∨Ka(¬Wa), Kb(Wb)∨Kb(¬Wb), Kc(Wc)∨Kc(¬Wc). Now, interpret “logicians” to be a group of people who are perfect at making inferences in at least this case. Such people are sometimes called “ideally rational” or similar. They never make a wrong inference and never miss an inference. Let’s think about a’s position as he is first to answer the question. If he knows that he wants beer, then he does not know the truth of E. But if he knows that he does not want beer, he can know that E is false. Because: he is part of everyone, so if he does not want beer, it is not the case that everyone wants beer. If a is being truthful etc., he has to answer either “I don’t know” or “No”. Now b is in almost the same position as a. Obviously, if a has already answered “No”, there is no reason to respond, or at least he should respond the same. If a’s answer is “I don’t know”, b still lacks information about whether or not c wants beer [Wc or ¬Wc^1]. Likewise, b knows his own state, so he knows either that he wants beer or that he doesn’t want beer. If he knows that he doesn’t, he can infer that E is false, similarly to a. If he knows that he does, then he can’t infer anything about E, and has to answer “I don’t know”. Now, c has all the information he needs to answer either “Yes” or “No”.
Obviously, if any of the previous answers are “No”, then he should also answer “No” or simply not answer. If both earlier answers are “I don’t know”, he can infer that both a and b want beer. He also knows his own state. If he does not want beer, he will answer “No”. If he does, he can infer that E is true, and thus answer “Yes”. Pretty similar comic which does need set theory to properly formalize: mrburkemath.blogspot.com/2011/05/coffee-logic.html 1Which could also be interpret as: To go to the bathroom or not go to the bathroom? :D I am taking an advanced logic class this semester. Som of the reading material has been posted in our internal system. I’ll post it here so that others may get good use of it as well. The text in question is John Nolt’s Logics chp. 11-12. I remade the pdfs so that they ar smaller and most of the text is copyable making for easyer quoting. Enjoy Edit – A comment to the stuff (danish) Dær står i Nolt kap. 12, at: ”We have said so far that the accessibility relation for all forms of alethic possibility is reflexive. For physical possibility, I have argued that it is transitive as well. And for logical possibility it seems also to be symmetric. Thus the accessibil- ity relation for logical possibility is apparently reflexive, transitive, and symmetric. It can be proved, though we shall not do so here, that these three characteristics together define the logic S5, which is characterized by Leibnizian semantic… That is, making the accessibility relation reflexive, transitive, and symmetric has the same effect on the logic as making each world possible relative to each.” p. 343 mæn min entuisjon sagde maj strakes, at dette var forkert, altså, at de er forkert at R1. For any world, that world relates to itself. (Reflexsive) R2. For any world w1 and for any world w2, if w1 relates to w2, then w2 relates to w1. (Symmetry) R3. For any world w1, for any world w2, and for any world w3, if w1 relates to w2, and w2 relates to w3, then w1 relates to w3. (Transitive) er ækvivalænt mæd R4. For any world w1 and for any world w2, w1 relates to w2. (Omni-relevans) Mæn de er ganske rægtigt, at ves man prøver at køre diverse beviser igænnem, så kan man godt bevise fx (◊A→☐◊A) vha. en modæl som er reflexsive, symmetrical og transitive (jaj valgte 1r1, 2r2). Mæn stadig er dær någet galt. Di forskællige verdener er helt isolerede, modsat vad di er i givet R4. Jaj googlede de, og andre har osse bemærket de: ”Requiring the accessibility relation to be reflexive, transitive and symmetric is to require that it be an equivalence relation. This isn’t the same as saying that every world is accessible from every other. But it is to say that the class of worlds is split up into classes within which every world is accessible from every other; and there is no access between these classes. S5, the system that results, is in many ways the most intuitive of the modal systems, and is the closest to the naive ideas with which we started.” I’m reading Irving Copi’s Introduction to Logic, very boring and not that well written, any suggestions? Just a college level book that gives a comprehensive intro to the topic. Thanks. “Boring” is not a particularly apt criticism of a text. There are jazzier logic books out there, with bunches of cartoon and jokes (and maybe even recipes for peanut butter and jelly sandwiches) but Copi is a good, solid elementary logic text which has been through more reprints than I can count, and has set the standard for logic texts in English. 
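The quoted point is easy to check by brute force. Here is a small script (my own illustration, not from Nolt) with three worlds split into the classes {1, 2} and {3}: the accessibility relation is reflexive, symmetric and transitive, yet world 3 is not accessible from world 1, so an equivalence relation is not the same as every world being accessible from every other.

```python
worlds = {1, 2, 3}
# Two isolated equivalence classes: {1, 2} and {3}
R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 1)}

reflexive  = all((w, w) in R for w in worlds)
symmetric  = all((b, a) in R for (a, b) in R)
transitive = all((a, c) in R
                 for (a, b) in R for (b2, c) in R if b == b2)
universal  = all((a, b) in R for a in worlds for b in worlds)

print(reflexive, symmetric, transitive, universal)  # True True True False
```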
If you seriously want to learn, you really have to give up the entertainment addiction. Notice how applicable this is to anything. Simply substitute logic for any serious discipline such as physics, chemistry and sociology. This will not involve many science facts as the discussion is wholly philosophical in nature. This is an epistemological, not scientific essay, it just happens to use some facts of science. I want to show the value of thinking of things as inconsistent sets of propositions (or whatever truth carrier you like, but I like propositions) or at least implausible sets of propositions (when at least one inference is inductive (or non-deductive, if you like that term better)). Consider this set of propositions: 1. Newton’s physics is correct. 2. Things are in a such and such way at time t1. 3. That Newton’s physics is correct and things are in a such and such way at time t1, implies that things will happen in such and such way at time t2. 4. A study found that such and such did not happen at time t2. 5. The study is correct. 6. That the study is correct implies that such and such did not happen at time t2. This set is plainly inconsistent; it cannot be true. At least one proposition in this set is false. Suppose we are around the time when Einstein introduced his relativity theories. At that time physicists had pretty good reason to believe (1) (among others: good explanatory power, lots of empiric confirmation), and I’m sure some people called physicists who did not stop believing in Newton’s physics even when some studies found results that are contrary to the predictions of Newton’s physics given some antecedent state of affairs dogmatic. I’m fairly sure such a claim of dogmatism is often thrown around in similar cases. My point is that it is unwise to claim someone is being dogmatic quickly. For there are many things other than some widely accepted theory that could be wrong (in this case (1)). We could be wrong about the antecedent states of affairs (in this case (2)), or wrong about what the theory predicts (in this case (3)) given that state of affairs; perhaps the scientist that make the prediction from the theory made a calculation error. Something similar applies the the study that ‘challenges’ the accepted theory (in this case (5)). So there are many things that could be wrong without the accepted theory being false. It is wise to consider that before calling people that are being epistemically conservative for dogmatic. The method of putting the relevant propositions in an inconsistent set forces us to be made aware of some perhaps not normally discussed propositions without which the set would be consistent (or not-implausible). Usually in a moderately complex case such as the one with Newton’s physics, a set of propositions that form as inconsistent set (or implausible) will contain 5-10 propositions. In more complex cases, the sets can be much longer (such as very complex cases involving the impossibility of an infinite past which involves temporal and modal logic). In general, the more propositions we can find that together forms an inconsistent set (or implausible), the better overview, and the easier to is to make a justified decision about which proposition(s) to stop believing in in the case that one actually believes all of them. If we are to avoid inconsistent beliefs (=inconsistent objects of beliefs), then we should think of the many potentially epistemically justified ways there are to deal with a such inconsistent set. 
In the above case, rejecting (4) would probably not be a wise decision, neither would it be to reject (6). If there is only one study and it is not exceptionally well done, then rejecting (5) is probably not a bad decision to begin with. If more studies (by competent scientists) confirm the first study, then sooner or later we should begin wondering if not our beliefs would have better coherence were we to reject the theory (1). But before we do that we should consider other alternatives such as (2) and (3). It would not be good if we rejected some theory and later found it that we had no grounds to do that because we were wrong about what the antecedent state of affair was (2). This way of solving problems (which usually involve an inconsistency of we add together the relevant propositions to a set), is applicable to every topic that I have thought of. It is especially useful to very complex situations where it is hard to get an overview and it seems hard to settle on a specific solution (that is, hard to find out which proposition is the epistemically most justified to deny). Using the formalization system I wrote of earlier, let’s take a look at this famous question. First we should note that this is a yes/no question which is different from the questions that I have earlier formalized. The earlier questions sought to identity a certain individual, but yes/no questions do not. Instead they ask whether something is the case or not. So this time I cannot use the (x=?) question phrase from earlier, since there is no individual to identify similarly to the earlier cases. One idea is to simply add a question mark at the end. Like this: F1. (∃x)(∃y)(Wxy∧Bxy∧Cxy∧x=a)? Wxy ≡ x’s wife is identical with y Bxy ≡ x beats y Cxy ≡ x used to beat y^1 a ≡ you But this fails to capture that it is not all of the things that are being asked whether they are the case or not. It is only the Bxy part that is being asked. The rest is stipulated as true. We can change the formalization to capture this, like this: F1*. (∃x)(∃y)(Wxy∧Cxy∧x=a)∧(Bxy)? The question mark is now understood as a predicate that works on whatever is before/to the left of it. (In parentheses for clarity.) Not to the right like with the other monadic predicates and propositional connectors (¬, ◊, □, etc.). In this case the question mark only functions on (Bxy) and not the rest of the formula. Translated into LE: There exists an x and there exists an y such that x’s wife is y and x used to beat y and x is identical with you and is it the case that x beats y? Answering yes/no questions When answered in the positive, the answered version simply removes the question mark. Producing: F2. (∃x)(∃y)(Wxy∧Cxy∧x=a)∧(Bxy) Answering in the negative removes the question mark and adds a negation sign to the part of the formula that the question predicate is working on. Producing this: F3. (∃x)(∃y)(Wxy∧Cxy∧x=a)∧¬(Bxy) If the produced formula is true, then the question has been answered correctly. However since this question is loaded. Both of the produced formulas are false, that is, it is both false that: F2. (∃x)(∃y)(Wxy∧Cxy∧x=a)∧(Bxy) There exists an x and there exists an y such that x’s wife is y and x used to beat y and x is identical with you and x beats y. and that: F3. (∃x)(∃y)(Wxy∧Cxy∧x=a)∧¬(Bxy) There exists an x and there exists an y such that x’s wife is y and x used to beat y and x is identical with you and it is not the case that x beats y. 
Since they both imply the falsehood: There exists an x and there exists an y such that x’s wife is y and x used to beat y and x is identical with you. 1Alternatively one could deepen to formalization to formalize the temporal aspect of this predicate. Though it doesn’t seem important here so I will leave it out. I earlier wrote of the logical interpretation of subjects.^1 There I suggested, following Russell, that the subject of a descriptive, active, meaningful (DAM) sentence should be interpreted as an existential quantifier (∃x) but I now believe that that this seems to depend on who made the utterance and in which situation. Suppose for instance that a positive atheist^2 makes this utterance: E1. God is omnipotent. Do we really want to interpret this as: E1′. (∃x)(Gx∧Ox)^3 If we did, then the atheist would have contradicted himself since from (E1′) the existence of God follows. From this I conclude that this interpretation is implausible. The utterer and the situation One idea is to let (E1) represent a conditional when uttered by an atheist: E1”. (∀x)(Gx→Ox) Such a conditional is consistent with an atheistic position; It is not possible to deduce that God exists from (E1”). How should we think of sentences that are like (E1)? Should they always be interpreted as existential claims, should they always be interpreted as conditional claims or should the interpretation depend on the utterer and the circumstances in which it was uttered? The first option has already been dealt with and found implausible. Let’s consider the second option. Consider this everyday sentence: E2. The door is open. If I said this to my roommate while we were both out in the garden, I think that he would think that I was silly or talking about some door far away. He would never interpret this sentence as a conditional which in that case is true. Is it true not because there is a door and it is open but it is true because there is no door at all in the garden. I imagine that it is like this in many other everyday situations. Suppose that is true, that is, everyday sentences like (E2) are most often best interpret as existential claims. We may allow that sentences involving non-everyday terms like “God” are often best interpret as conditionals. These considerations indicate that the same sentence form Subject – sentence verb – subject predicate may yield different logical forms depending on which words are used. So, there is a disconnection between language form and logic form. This is undesirable. Return to the first example. Suppose that a theist said the same sentence. Should it be interpreted as an existential claim or a conditional? I suppose that it is best to interpret it as an existential claim. But for the theist it would not make much of a difference since he also believes that God exists, and from that God exists, and the conditional, it follows that God is omnipotent.^ But even an atheist’s utterance of (E1) may best be interpreted as an existential claim. Suppose that the current american president is a closet atheist, that he is making a public speech and that the public believes that he is a theist. In that case it would be best for the public to interpret his words as an existential claim and not a conditional. 2One who believes that there is no God or no gods. 3Where “Gx” means x is God, and “Ox” means x is omnipotent. 4In symbols: From (∃x)(Gx) and (∀x)(Gx→Ox), (∃x)(Gx∧Ox) follows. 
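For what it is worth, the difference between the two readings can be checked in a toy model. In the sketch below the domain and the predicate extensions are made up purely for illustration: in a model containing no gods, the conditional reading (E1″) comes out (vacuously) true while the existential reading (E1′) comes out false, which is exactly why only the former is open to the atheist.

```python
domain = {"alice", "bob"}      # a toy model with no gods in it
gods = set()                   # extension of "Gx"
omnipotent = set()             # extension of "Ox"

E1_existential = any(x in gods and x in omnipotent for x in domain)     # (∃x)(Gx∧Ox)
E1_conditional = all(x not in gods or x in omnipotent for x in domain)  # (∀x)(Gx→Ox)

print(E1_existential, E1_conditional)   # False True
```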
I invent and explore a terminology about degrees of sentences, I explore how to negate sentences and sentence parts in english, I distinguish between verbs that can be used in sup-sentence parts and verbs that cannot, I discuss some problems with the verb “ought”, and lastly I explore the relationship between this terminology and predicate logic. Negating sentences in english (PDF, 15 pages)
{"url":"http://emilkirkegaard.dk/en/?tag=logic","timestamp":"2014-04-16T05:21:43Z","content_type":null,"content_length":"68878","record_id":"<urn:uuid:0c1af1c5-6108-4bc1-bc18-abbd4288f9ad>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
Measuring the Long-Run Profitability of the Firm Counting since 15.10.2005 Timo Salmi and Ilkka Virtanen Measuring the Long-Run Profitability of the Firm; A Simulation Evaluation of the Financial Statement Based IRR Estimation Methods • 1.1. Background • 1.2. Overview of Research Problem and Methodology • 1.3. Problem Statement • 2.1. Generation of the Capital Investment Time Series • 2.2. Cash Inflows Produced by the Capital Investments • 2.3. Contribution Distribution • 2.4. Depreciation Methods • 2.5. Accountant's vs. Economist's Profits and Annuity Depreciation • 3.1. Kay's Method □ 3.1.1. Presentation of the Method □ 3.1.2. Discussion of the Method • 3.2. Ijiri-Salamon Method • 3.3. Ruuhela's Method • 3.4. Discussion of the Model-Oriented IRR Estimation Methods • 3.5. Averaged Accountant's Rate of Return Method • 3.6. Discussion of Market-Based Methods APPENDIX 1. List of Symbols APPENDIX 2. Annuity Depreciation under Anton Contribution Distribution Please use the following reference to this publication: Salmi, Timo and Ilkka Virtanen (1997). Measuring the Long-Run Profitability of the Firm; A Simulation Evaluation of the Financial Statement Based IRR Estimation Methods. Acta Wasaensia, No. 54, University of Vaasa, Finland. Also available from World Wide Web: <URL:http://www.uwasa.fi/~ts/smuc/smuc.html>. Timo Salmi Professor of Accounting and Business Finance Ilkka Virtanen Professor of Operations Research and Management Science Measuring the Long-Run Profitability of the Firm: A Simulation Evaluation of the Financial Statement Based IRR Estimation Methods Four methods for estimating the firm's long-term profitability as the internal rate of return (IRR) of the firm's capital investments are revisited and evaluated using simulated financial statements. The methods of Kay, Ijiri-Salamon, Ruuhela and the averaged accountant's rate of return (ARR) are analyzed. Our findings indicate that the methods are disrupted by large deviations between the firm's growth and profitability, but in most cases they are insensitive to cyclical fluctuations and to major capital investment shocks. Kay's method fares marginally best in numerical performance, and it is theoretically very well founded. The average ARR method comes a close second. The Ijiri-Salamon method fares reasonably well numerically, but its error is unpredictable. Theoretically, it is the most ad-hoc of the methods. Ruuhela's method has a strong theoretical background, but when its strict assumption of steady state growth is violated, numerically it fares the worst. In the literature's long-standing dispute about the validity of the ARR as a proxy for the IRR, our simulation results strongly support the school of thought siding with the validity. Our research conclusion is to recommend applying the average ARR method in the practice of financial analysis. It is in a class of its own as regards pragmatic applicability because the ARR is based on well-established accounting practice. Key words: Long-term profitability, accountant's rate of return, internal rate of return, return on investment, IRR estimation methods, Kay, Ijiri, Salamon, Ruuhela, simulation. JEL: M40 G31 C88 C15 Acknowledgments: Our thanks are due to Professor Erkki K. Laitinen and Professor Reijo Ruuhela for their useful comments. We gratefully acknowledge the financial support for our research project from the Jenny ja Antti Wihurin Rahasto foundation. Published as Salmi, Timo and Ilkka Virtanen (1997). 
Measuring the Long-Run Profitability of the Firm; A Simulation Evaluation of the Financial Statement Based IRR Estimation Methods. Acta Wasaensia, No. 54, University of Vaasa, Finland. Also published on the Word Wide Web: <URL:http://www.uwasa.fi/~ts/smuc/smuc.html> 1.1. Background The firm's ability to find and implement successful capital investment opportunities decides its long-run profitability and financial position. There is no doubt that the questions of profitability measurement and the valuation of the firm's financial assets are the most important questions in financial accounting research. The question of a theoretically sound and pragmatic profitability measurement is of crucial importance not only to the firm but also to an economy's overall welfare. The allocation of resources in an economy is directly affected by the validity and reliability of the decision makers' measures of the firms' performance (profitability) and financial position. For example, in loan and credit decisions the creditors are not only interested in the applicant's short-term situation but in the firm's long-term ability to generate income. The firm's profitability is crucially reflected in the financial statements of the firm. The stakeholders of the firm need the profitability information for their decision making both for the short and for the long run. In the economics literature the internal rate of return (IRR) is the widely used theoretical long-run profitability concept. A recent survey by Pike (1996) in the area capital budgeting confirms that IRR is a well-established measure also among practitioners. Furthermore, the investment theory of finance recognizes IRR as a profitability measure, albeit under restrictive Strictly speaking the theory of finance states that, for example, under capital rationing only the net present value method is uniquely consistent with maximizing the value of the stockholders' wealth. See any good text-book of finance such as Copeland and Weston (1979), Levy and Sarnat (1986), Brealey and Myers (1991) for a discussion. However, under ordinary practical conditions of investment opportunities in the same size categories and conventional cash-flow patterns the internal rate of return method can in most cases be expected to give conforming evaluation for the capital investment evaluation. In this paper we accept IRR as the valid long-run profitability measure for the firm. The focus of the paper is on the theoretical consistency and numerical accuracy of the methods presented in earlier literature for estimating the IRR from financial statements. The accountant traditionally measures profitability as the ratio between the firm's annual income and the book value of its assets. This ratio is often called the accountant's rate of return (ARR) in literature. Other common terms for it are the return on the capital invested (ROI) and the book yield. This measure looks at profitability after the fact. The economist has a different definition of income. It is based on the changes in the market value of the firm defined as its discounted future cash flows. The economist's definition is based on expectations about the future. The internal rate of return (IRR) is consistent with the economist's concept of income. The internal rate of return also is prominent in the capital investment theory. One traditional way of looking at the firm is to regard it as a series of capital investments. 
As discussed, the IRR of the capital investments making up the firm is the well-accepted, theoretically valid measure of the firm's profitability. The problem with this theoretical notion is, however, that the IRR of the firms is not readily measurable in actual business and financial analysis practice, while the annual values of the ARR are calculated routinely for business firms. There is a considerable body of literature that discusses the possibility of analytically deriving or empirically estimating the firm's IRR. Since the mid 1960's there is a long-standing controversy, both conceptual and technical, whether it is possible successfully to estimate the firm's IRR. The discussion is too extensive to review in the presentation at hand. For the references see the review article by Salmi and Martikainen (1994), Butler, Holland and Tippet (1994) and Stark (1994). The approaches in literature to the IRR estimation can be classified into several, partly overlapping categories. The first approach is trying to establish a link from ARR to IRR. This approach is exemplified by Kay (1976) and later by Peasnell (1982a, 1982b). Kay's method has been evaluated for example by Whittington (1979), Salmi and Luoma (1981), Brief and Lawson (1992) and Salmi and Virtanen (1995). A second approach is to derive the IRR by utilizing an auxiliary estimate such as CRR (the cash recovery rate). This approach has been suggested by Ijiri (1979 and 1980), extended and tested by Salamon (1982) and Gordon and Hamer (1988). The Ijiri-Salamon method has been further tested by Shinnar, Dressler, Feng and Avidan (1989) and Stark, Thomas and Watson (1992). A third approach seeks to establish the IRR directly from the published financial statements. This category is represented by Ruuhela (1972) and its mathematically streamlined rederivation in Salmi (1982). The assumptions of Ruuhela's model and the consequences of relaxing them have theoretically been considered by Tamminen (1976). Another direct IRR estimation method has been presented in Finland by Laitinen (1980). Furthermore, Kay (1976: 455) presented how his IRR estimate could be improved if the ratio of the accountant's valuation of the firms assets and the economist's valuation of the firms assets were available. Steele (1986) suggested the use of market values from the stock market to represent the economist's valuation of the firm's assets needed in Kay's correction. Lawson (1980) presented an approach based on cash flows and market values. Artto (1980) advocates a cash-flow-based profitability estimation. Which of the various methods put forward in literature should one select? For the business practitioner, as well as for an academic researcher, facing the number of the various long-run profitability estimation methods, and the theoretical controversy of their correctness, the question becomes the following. What methods are reliable and applicable for evaluating the long-run profitability of a business firm? In other words which method or methods work both in practice and in theory? In particular, might it be, after all, that the practice of calculating a straight-forward average of the annual ARRs would be at par with the more theoretical IRR estimation methods? 1.2. Overview of Research Problem and Methodology As pointed out, the outcome of the discussion in literature on the possibility of estimating the firm's IRR has been inconclusive. There is no unique consensus on the merits of the different methods presented. 
The controversy has concerned both the generality of the theoretical derivations and the empirical applicability. Under the circumstances, it is our view that the various methods are best evaluated in their empirical context. Empirical investigation is not unproblematic, either. The following difficulty arises. The empirical estimates of the IRR given by the various methods have been compared in the earlier literature. However, the earlier, empirical approach does not resolve the absolute reliability of the methods compared. The reason is the following. The true IRRs of the firms under observation are needed as benchmarks for the reliability evaluation. But the true IRRs are not available when actual financial statement data are used. This dilemma can be solved by using a simulation approach. A simulation approach with a known IRR facilitates evaluating the ability of the various methods to estimate the firm's true IRR. Our paper evaluates the three financial-statement-based methods by Kay, Ijiri-Salamon and Ruuhela. In addition, we compare these IRR estimation methods to the simple practice of using the average of the annual accountant's rate of returns as the estimate of the firm's IRR. The market-value-based methods of Lawson and Steele are excluded in the present paper, since their evaluation is not readily amenable to our simulation approach. An attempt at a consistent simulation of the stock market values of the firms is beyond the present scope. The IRR estimation methods of Kay, Ijiri-Salamon and Ruuhela all are mathematically non-trivial. They are not straight-forward to apply in practice on actual financial data. The practitioner's obvious alternative would be to use the averaged accountant's rate of return as a surrogate of the IRR estimate. However, in earlier literature there are reservations on using the average ARR as the estimate. The reservations can be traced as far back as to Vatter (1966). Later e.g. Fisher and McGovan (1983: 82) stated that "accounting rates of return provide almost no information about economic rates of return". On the other hand, as pointed out by Pike (1996: 83-84) in connection with capital budgeting, the technically simple methods such as the payback period and the average ARR has been condoned by several authors starting from Weingartner (1969). We intend to revisit the question of the usefulness of the average ARR as an ex-post long-term profitability measure, since it has not been unequivocally demonstrated that the average ARR method would necessarily be markedly inferior to the more complicated IRR estimation methods presented in literature. Given the obvious fact that the business firms continuously use accounting measurement we would find it rather surprising if ARR would not be a useful concept also for the firm's long-run profitability (IRR) measurement. Hence we will consider the average ARR method together with the more complicated methods in this paper. In the literature on IRR estimation some general assumptions have become conventions. We use these same conventions. An important, established convention in the long-run profitability research is to consider the firm as a series of repetitive capital investments. Stating this research convention in Salamon's (1982: 294) words "... the firm is a collection of projects that have the same useful life, same cash-flow pattern, and same IRR". See, however, the critique of this standard assumption by Kelly and Tippet (1991). 
The assumption of the constant cash-flow pattern has usually been presented as a necessary, technical simplification of the business reality. However, this restriction is not an unrealistic, technical assumption. It can be posed that the assumption is in line with observing often long periods of stable business culture in individual firms. The business culture of the firm is above all created by its CEO-level management and their ability to generate and utilize capital investment opportunities. Another strong convention is the firm's access to the financial markets freely to obtain the funding for the capital investments. In other words the implied capital markets in this area of research conventionally are perfect and complete. There is no capital rationing. Therefore, the question of financing of the simulated capital investments need not be considered in this paper. 1.3. Problem Statement In the current paper we are interested, in general terms, in evaluating the accuracy of the selected long-term profitability estimation methods under different economic circumstances, under different capital investment payback profiles and under different accounting decisions on depreciation. More specifically, the following research questions will be considered. In the earlier research a constant growth approach to the capital investments has been fairly common. This restriction has meant the absence of business cycles and noise. A priori one would expect that the cycles can have a drastic effect on the ability of the methods to estimate the correct IRR. We relax the steady-state restriction. Therefore, our first research question is: 1. Are the methods sensitive to business cycles in the capital investment activities? Are the methods sensitive to ordinary irregularities in the capital investments? Second, an outside stakeholder has to base the profitability estimates on the financial data provided by the firm. In the financial statement data the capital investments and their cash flows are totally mixed. It is not possible to know the contribution pattern of the capital investments based on the external data. The question of the effect of the different contribution patterns arises as in Salamon (1982) and Gordon and Hamer (1988). Hence, our second research question is: 2. Are the methods sensitive to the underlying, alternative cash contribution patterns and life-span of the firm's capital investments? Third, it has been put forward in the earlier literature that there are some particular instances where the profitability estimates given by the accountant's rate of return theoretically become close or equivalent to the underlying, true profitability of the capital investments making up the firm. These include the case where growth equals profitability as presented by Solomon (1966: 115) and the case where the theoretical annuity method of depreciation is postulated as presented in e.g. Salmi and Luoma (1981: 28) and Peasnell (1982a: 364). The annuity depreciation is the economist's depreciation in defining the concept of economic income discussed e.g. in Bromwich (1992: 31-51). Hence, our third research question is: 3. Are the methods sensitive to disparities between the firms growth and profitability? Fourth, in accounting practice the choice between the depreciation methods such as the prevalent straight-line and the declining- balance methods affects the reported annual income figure. Our fourth question is: 4. 
Are the methods sensitive to the depreciation choice that the firm has used in producing its financial statements? Fifth, the IRR estimation methods are largely based on the idea of regular development uninterrupted by structural changes or other major one-time events causing exceptional capital investment peaks. Our fifth question relates to this aspect: 5. Are the methods sensitive to major capital investment shocks? An economic time series is made up by several constituents. These are the growth trend, the business cycle, the seasonal variation and the noise. Furthermore, there can be regular or irregular shocks. The growth trend and the business cycle are relevant in this paper. Seasonal variations are intra-year. Thus they do not arise in our research questions. It is true that the economic activities of the firm are continuous in nature. However, the financial data used for the profitability estimation in the methods under observation use discontinuous observations from the annual statements. 2. SIMULATION EVALUATION APPROACH This chapter presents our simulation model. First, we present the simulation engine that generates the capital investment time series. Second, we present the generation of the cash inflows from the capital investments in terms of alternative contribution distributions. Third, we present the alternative depreciation methods the simulated firm may apply. 2.1. Generation of the Capital Investment Time Series As discussed in the introduction the firm can be considered a series of cash outflows to capital investments and the cash inflows generated by these capital investments. The earlier discussion of the methods has been based on the implicit assumption of constant, exponential growth of the capital investments that make up the firm. In a previous simulation approach to analyze Kay's method Salmi and Luoma (1981) also assumed capital investments obeying constant, exponential growth. Their engine to generate the capital investments was the standard exponential growth model (1) g[t] = g[0](1+k)^t, g[0] = initial level of capital investments, g[t] = capital investments in year t, k = growth rate. Assuming a constant growth is a major simplification of the reality of capital investment decisions in business firms. To evaluate the reliability of the IRR estimation methods under observation it is of paramount importance to know whether the methods are sensitive to business cycles, noise and disruptive irregularities in the capital investment activities. To tackle these questions we extend the Salmi-Luoma simulation engine to generate the capital investments with the possibility of business cycles, noise and shocks. We use For the indexing of the years t in the simulation engine we denote T = length of the simulation period, n = length of the observation period (number of years under observation for the profitability estimation). In the simulation the index t must run all the way from year 1 to year T. The simulated firm is founded at the beginning of year 1. The transient, initial period from year 1 to year T-n represents the stage needed to reach a going-concern phase. The evaluation of the selected long-run profitability estimation methods is best conducted only after the going-concern phase has been reached. Thus the actual observation period for the evaluation of the profitability estimation methods is from T-n+1 to T. For brevity, the indexing is not presented in the formulas. 
The first part in Formula (2) of the simulation engine is equivalent to the constant exponential growth Formula (1) used earlier in the simulation by Salmi and Luoma (1981). We have for the trend component the same g[0], g[t] and k definitions as in Formula (1). In our extension we first incorporate a sinusoidal business-cycle component to the engine. For this augmented cyclical fluctuation component we denote A = amplitude of the cycle, C = length of the cycle, In the above, the term Seasonal variations do not arise. This is because our simulation engine is discrete with one-year intervals. In the terms of real-life business practice this is tantamount to using annual financial statements instead of the potential quarterly reports. Next, we incorporate a random component. We use white noise as the random component and denote z = random variable following the (0,1)-normal distribution. For the shock (disruption) component we have S = capital investment shock coefficient, All the new components, which are augmented into Formula (1) to arrive at the generalized capital investments generation engine presented in Formula (2), are multiplicative. In other words, the components are defined relative to the trend-level of the capital investments. This means, for example, that in the terms of statistics the standard deviation of the random fluctuation in the capital expenditures is heteroscedastic in nature. Likewise, the relative amplitude of the business cycles stays constant while the absolute magnitude of the business cycles increases over time. Compared with the constant-growth approach, the inclusion of the business cycles and noise components make the simulation engine realistic. This is attested by the fact that in simulation the extended engine produces financial time series which resemble the time series profiles observed on actual business firms. See e.g. the sample of the time series drawn in Salmi et al. (1984: 46-48). 2.2. Cash Inflows Produced by the Capital Investments The capital investments g[t] induce later, corresponding cash inflows. The relationship between the initial outlay of a capital investment and its cash inflows can be expressed in terms of a contribution distribution. Denote by b[i] an individual, relative cash-inflow contribution from a capital investment that has been made i years back. This term is called the contribution coefficient. The contribution distribution is naturally made up by the individual contribution coefficients for the life-span of the capital investment. The mathematical formulation below is based on e.g. Ruuhela (1972). The cash flow profiles in Ijiri (1979), Salamon (1982) and Gordon and Hamer (1988) represent the same idea of contributions induced by the capital investments of the firm. First consider the contributions (the cash inflows) from a single capital investment made at time point t = 0, illustrated by Figure 1. In a more general denotation, a contribution (i.e. the cash inflow) in year t from a capital investment made in year t-i is given by (3) f[ti] = b[i] g[t-1], i = 1,...,min(N,t), [Errata: (3) f[ti] = b[i] g[t-i], i = 1,...,min(N,t)], f[ti] = absolute contribution in year t from capital investment i years back, b[i] = relative contribution from capital investment i years back, N = life-span of a capital investment project (the same for all capital investments). Under the regular going-concern phase, which is to be used for evaluating the profitability estimation methods, the index i runs from 1 to N. 
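For illustration, the generation of the capital investments and the accumulation of the induced cash inflows can be sketched in a few lines of Python. The sinusoidal cycle follows the component description above; the precise (1 + ...) scaling of the noise and shock factors, the parameter noise_sd, and the example parameter values are illustrative assumptions rather than the exact form of Formula (2). The min(N, t) limit used in the second function is discussed next.

```python
import math
import random

def capital_investments(T, g0, k, A, C, noise_sd=0.05, shocks=None):
    """Capital investment series for years 1..T: the exponential trend of
    Formula (1) multiplied by a sinusoidal cycle, a white-noise factor and
    a shock coefficient (multiplicative, i.e. relative to the trend)."""
    shocks = shocks or {}                          # {year: shock coefficient S}
    series = []
    for t in range(1, T + 1):
        trend = g0 * (1 + k) ** t                  # Formula (1)
        cycle = 1 + A * math.sin(2 * math.pi * t / C)
        noise = 1 + noise_sd * random.gauss(0, 1)  # z ~ N(0,1), relative (heteroscedastic)
        shock = shocks.get(t, 1.0)
        series.append(trend * cycle * noise * shock)
    return series

def cash_inflows(g, b):
    """Yearly cash inflows: f_t = sum of b_i * g_{t-i} for i = 1..min(N, t),
    in the spirit of Formula (3); since g starts at year 1, contributions can
    only come from investments made in years 1..t-1."""
    N, T = len(b), len(g)
    return [sum(b[i - 1] * g[t - i - 1] for i in range(1, min(N, t - 1) + 1))
            for t in range(1, T + 1)]

g = capital_investments(T=30, g0=100.0, k=0.08, A=0.2, C=6)
f = cash_inflows(g, b=[0.25] * 6)                  # a 6-year uniform pattern, for example
```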
However, during the transient initial period before the year N, i.e. t < N, there can be contributions only from t years back. Hence the term min(N,t). The total contribution in any year t (i.e. all the cash inflows in that year) is cumulated from the contributions from the capital investments made in the earlier years. Hence we have Formula (4), which defines f[t] = cash inflow in year t. The accumulation of the contributions (the cash inflows) in year t from all the capital investments in the previous years is illustrated by Figure 2. A comment on the mathematically discontinuous nature of the simulation model is in order at this stage. As is familiar from the capital investment literature, the capital investment model involves a discretization of what basically are partly continuous events. An initial outlay made at time t = 0 is assumed to produce its corresponding contributions at times t = 1,...,N. Likewise, the depreciations for a capital expenditure made at time t = 0 will take place at t = 1,...,N. The same pattern is repeated for all capital investments for the simulation period. Our simulation model considers all the events as discrete, as is common in capital investment models. In line with the standard treatment in the literature, the contribution distribution b[i] in our simulation engine is the same for all the capital investments. In other words, the profitability of the capital investments remains the same over the period under observation. Furthermore, constant returns to scale on the capital investments are assumed, as is the custom in growth models. When the firm grows, there are no economies of scale. See e.g. the standard reference Levhari and Shrinivasan (1969: 153). A contribution distribution b[i] fixes the internal rate of return. As was noted, the contribution distribution is assumed unchanged for all the consecutive capital investments even though the level of the capital investment outlays varies over the business cycles as defined by Formula (2). Hence the internal rate of return, i.e. the profitability of the simulated firm, is defined by the cash flows of any individual, simulated capital investment. The internal rate of return corresponding to a given contribution distribution is defined by equating the initial outlay with the sum of the future, discounted cash inflows in Formula (5), which is readily reduced to Formula (6). The r given by Formula (6) is the true internal rate of return of the simulated firm that is the benchmark in evaluating the various IRR estimation methods. It should be noted that Formula (6) is not suggested to be used as another estimation method for the long-run profitability of the firm from actual business data. Such a direct estimation would not be practical, nor maybe even possible, because the literature does not currently have adequate means readily to identify the contribution distribution of the capital investments making up the firm. In the simulation evaluation the internal rate of return r which corresponds to a chosen contribution distribution b[i] can be readily assessed from Formula (6) using numerical analysis methods such as the bisection method. For the bisection method see any standard textbook of numerical analysis such as Conte (1965: 39-43). The discussion of the specification of the alternative contribution distributions will be postponed until the next section. It is a well-known fact that under non-conventional cash flows (more than one sign alteration) there can be multiple or no real roots for the internal rate of return r in Equation (6). See e.g.
Teichroew, Robichek and Montalbano (1965). This problem does not arise in our simulations. A conventional cash-flow contribution pattern will be used. In our simulation, profitability, defined as the IRR, is assessed from the contributions of the capital investments only. The financing issue does not come to the fore. This separation of capital investments from financing is in line with the classic results of Modigliani and Miller. For a discussion on this issue, see for example Yli-Olli (1980). This separation is also in line with the standard usage of IRR in connection with the capital investment decision. In making the decision, the decision maker compares the IRR of the capital investment project prior to interest with the cost of capital. Including the interest (i.e. the cost of financing) in the cash flow estimates for the project would be double counting, as pointed out by any good textbook on capital investments. The question of financing and its costs does not arise in our simulations as long as it can be safely assumed that the firm remains sufficiently profitable to be able to obtain new capital as the need arises. Hence chronically declining activities (divestments) or infeasible combinations of growth and profitability will not be considered in our research, since in actual business practice this would in the long run cause restrictions or even a cessation of the availability of capital to the firm. For a discussion of feasible growth/profitability combinations see Suvas (1994). 2.3. Contribution Distribution As discussed in the previous section, the true internal rate of return r determined by Formula (6) is a function of the contribution distribution b[i] introduced in Formula (4). The true form of the contribution distribution is not generally known for real-life business firms. In order to assess the effect of the different potential contribution patterns of the firm's capital investments, we perform our simulations with three different contribution patterns for the capital investments. Figure 3 illustrates three different types of potential contribution distributions: a neutral case, a typical growth-maturity-decline life-cycle pattern and a steadily declining case. The three distributions we choose are the uniform contribution distribution for the neutral case, the negative binomial contribution distribution for the growth-maturity-decline case and a linearly declining distribution (Anton distribution) for the steady decline. The uniform contribution distribution is defined by the annuity factor in Formula (7). The uniform contribution distribution over the life-span of the investments is an obviously neutral choice. It produces the same level of contribution each year throughout the entire life-span of the capital investment. In the simulation the numerical values of the contribution coefficients which lead to the preselected true profitability r are given directly by substituting the numerical value r into Formula (7). The typical life-cycle of a product includes an early growth phase, maturity, and decline. The negative binomial distribution corresponds to this cycle. For our simulation purposes it has the further advantage of being different from the uniform contribution distribution in two important respects. It is not constant and it is not symmetrical. The general definition of the negative binomial distribution is given by Formula (8), where q is a shape parameter and m is a location parameter. For our simulation we choose q = 0.15 and m = 2, which leads to a typical life-cycle profile.
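As a concrete illustration of how a contribution distribution fixes the true profitability, the following sketch solves Formula (6) by bisection. It assumes that Formula (6) reduces to the condition sum_i b[i](1+r)^(-i) = 1 for a unit outlay, and that the uniform coefficient of Formula (7) is the standard annuity (capital recovery) factor r/(1-(1+r)^(-N)); with N = 20 this reproduces the coefficient values reported later in the simulation design section (e.g. 0.1339 for r = 12%). The function names are illustrative.

```python
def implied_irr(b, lo=1e-9, hi=1.0, tol=1e-10):
    """Solve Formula (6): find r with sum_i b[i]/(1+r)^i = 1, by bisection.

    Assumes a conventional contribution pattern (all b[i] >= 0), so the
    discounted sum is strictly decreasing in r and the root is unique.
    """
    def npv_minus_outlay(r):
        return sum(bi / (1.0 + r) ** i for i, bi in enumerate(b, start=1)) - 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv_minus_outlay(mid) > 0.0:
            lo = mid          # still worth more than the outlay -> raise the rate
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Uniform contribution distribution (Formula (7)): every b[i] equals the annuity factor.
N, r_true = 20, 0.12
b_uniform = [r_true / (1.0 - (1.0 + r_true) ** (-N))] * N   # = 0.1339 for r = 12%, N = 20
print(round(b_uniform[0], 4), round(implied_irr(b_uniform), 4))   # -> 0.1339 0.12
```

The bisection recovers the preselected 12% rate exactly, which is the role Formula (6) plays in fixing the benchmark profitability of the simulated firm.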
For the definition and the properties of the negative binomial distribution see Fisz (1967: 167). For our purposes, two technical adjustments to the generic negative binomial distribution are needed. First, the distribution is cut from the right at the life-span instead of letting it continue to infinity. Second, the distribution is shifted to the left to coincide with the capital investments' life-span. Hence we have as our negative binomial contribution coefficients (9) b[i] = s (i+1) q^2 (1-q)^i, i = 1,...,N where s is a scaling factor inducing the desired level of true profitability. In the simulation the numerical values of the contribution coefficients which lead to the preselected true profitability r are found by finding the value of s which fulfills Formula (6). It is given by The Anton distribution presented in Anton (1956) is defined as This is a linearly declining contribution distribution with convenient theoretical properties. It has been shown (see Solomon (1971: 168 footnote and Appendix 2 of the current paper)) that if the contributions from the firm's capital investments follow the Anton distribution the theoretical annuity depreciation (to be discussed in a later section) and the practical straight-line depreciation coincide and hence lead exactly to the same reported profit for the firm. (This is tantamount to the accountant's and the economist's concepts of income agreeing under these special circumstances.) 2.4. Depreciation Methods To complete the simulation model we need the formulas for alternative depreciation methods in order to have the annual profit and book value figures. First consider the accounting relationships between these concepts. The profit p[t] is defined by the cash inflow f[t] less depreciation d[t] as (12) p[t] = f[t] - d[t]. The book value v[t] of the firm at the end of year t is defined by book value at the beginning of the year plus the capital investments g[t] less the depreciation d[t] Hence (13) v[t] = v[t-1] + g[t] - d[t]. In our simulation model the book value of the firm involves only the capital expenditures and the depreciation. For simplicity cash, inventories and other assets are not modeled separately. Next consider depreciation. The firm's choice of the depreciation method is central to profit measurement and asset valuation both in accounting theory and practice. We build into our simulation model the possibility of three alternative depreciation methods to be employed by the simulated firm in its financial statements. The alternatives are the straight-line depreciation method, the double declining-balance method and the theoretical annuity depreciation method. An important feature of the current research approach is to be able to evaluate how well the different IRR estimation methods perform under realistic conditions. The first two of the alternative depreciation methods for the simulated firm are prevalent in actual accounting practice. The idea of straight-line depreciation method is that it allocates the costs evenly based on the passage of time over the expected life-span of the asset. Decreasing charge depreciation methods are based on the idea of equipment being more efficient in their early life. We choose double-declining-balance method as a representative of the decreasing charge methods because it is by definition (the doubled rate) related to the corresponding straight-line method. Annuity depreciation is included as one of the alternatives for quite a different reason. It is purely a theoretical concept. 
It is included to verify whether the simulation and the profitability estimation algorithms produce the correct internal rate of return under annuity depreciation as predicted by theory. Straight-line depreciation is calculated according to Formula (14). Double-declining-balance depreciation is a decreasing-charge depreciation used in U.S. practice. See Davidson and Weil (1977). The double-declining-balance depreciation method is given by Formula (15). The above formula for the double-declining-balance depreciation forms an infinite geometric series. However, in accounting the capital investment expenditure is exhausted at the end of the life-span. We use the historical cost convention. Hence, all the remaining book value of the relevant investment is depreciated in the last year of the life-span. The well-accepted definition of the annuity depreciation is that the profit (before interest and taxes) p[t] is assessed as the interest on the initial capital stock v[t-1] in year t. Thus (16) p[t] = r v[t-1] and hence from Formula (12) we get (17) d[t] = f[t] - r v[t-1]. Annuity depreciation is a theoretical construct. As is evident from Formula (17), circular reasoning is involved. It is necessary to know in advance the value of r (the internal rate of return) in order to be able to apply the annuity depreciation method. In other words, the profitability information is needed for estimating the profitability. In a simulation model, however, this is possible since the true internal rate r can be fixed in advance. 2.5. Accountant's vs. Economist's Profits and Annuity Depreciation The construction of our simulation model was concluded in the previous section. However, annuity depreciation has a pivotal role in the relationship between the accountant's and the economist's profit and valuation concepts. Hence the discussion started in the section on the problem statement is continued utilizing the notation introduced. The accountant and the economist have different concepts of income. The accountant's profit and the accountant's rate of return are based on historical data. The accountant needs depreciation in defining annual profits. The accountant's rate of return is given by Formula (18). The economist's income concept is independent of depreciation. It is based on future cash flows. The well-known economist's valuation of the firm is defined as the present value of the future net cash inflows in Formula (19). In accordance with the classic results, discussed in the section on the problem statement, IRR and ARR (appropriately weighted, if not constant) agree if the annuity method of depreciation is used for depreciating the book value of the firm's assets. This result is tantamount to proving that if the economist's valuation w[t] and the accountant's valuation v[t] of the firm's assets agree, then IRR and ARR agree. A second, relevant classic result is that if the steady-state growth of the firm is equal to its internal rate of return, then ARR and IRR agree. See Solomon (1966: 115). For a discussion and a presentation of the proofs see for example Salmi and Luoma (1981). The previous chapter presented our simulation engine. The current chapter presents four IRR estimation methods from earlier literature to be analyzed and evaluated with our simulation approach. The methods are Kay's, Ijiri-Salamon's, Ruuhela's and the accounting-practice compliant average ARR method. Before any IRR estimation method can be applied on the simulated (or actual financial) statements, the IRR estimation method must be made operational for the financial data available.
This fact is observed whenever necessary. We do not evaluate market-value-based methods in this paper. However, the market-value-based methods by Lawson and Steele are briefly discussed at the end of the chapter. 3.1. Kay's Method 3.1.1. Presentation of the Method Kay (1976) presented an iterative method for estimating the IRR. Kay's original presentation used continuous notation. From the accounting point of view, however, a discrete version of Kay's results is needed to make the method applicable on simulated (or real-life) financial statements of a business firm. For Kay's method we have from Kay (1976: 451), Salmi and Luoma (1981: 25) and Peasnell (1982a: 371) the discrete version for IRR estimation, Formula (20). As is recalled, the years in our data-generating simulation engine run from 1 to T while the actual observation period is from T-n+1 to T. For notational simplicity the indexing of the years of the observation period has been adjusted in Formula (20) to run from 1 to n. In this notation, the annual accountant's profit (operating income) p[t] and the book values of the firm's assets v[t] at the end of each year are now observed for years 1 to n. Therefore the first v[t-1] available is for year t = 2. This fact is duly reflected in the summation notation in Formula (20). Kay's iterative method is easily coded as a computer program to solve Kay's IRR estimate given the profit and book value observations from the financial statements. For the conditions of convergence of the IRR iteration procedure see Steele (1986: 2-5). The actual programs coded for this paper are Turbo Pascal 7.01 programs for an MS-DOS PC. (The programs are made publicly available on the World Wide Web at the following address <URL:http://lipas.uwasa.fi/~ts/smuc/prog/smucprog.html>.) 3.1.2. Discussion of the Method Rewriting Formula (20) as Formula (21), it is immediately obvious that Kay's IRR estimate is a weighted average of the accountant's rates of return over the observation period, which would have been given by Formula (22). The following question naturally arises and will be tackled in our simulation evaluation. Is the more complicated IRR estimation Formula (20) decidedly better than the straightforward Formula (22), which furthermore is based on well-established accounting concepts? The link between the accountant's and the economist's valuation concepts is also very evident in Kay's (1976: 455) derivation. Kay presents in Formula (23) a relationship between the IRR estimate and the accountant's and the economist's valuations of the firm. In that relationship the economist's value of the firm w is the present value of the future net cash flows (cf. Formula (19)). The accountant's book value is based on the historical accounting data of the capital investments and depreciation (cf. Formula (13)). If the two valuations agree, then also the accountant's and the economist's rates of return agree. The first corollary of this fact is that if the (theoretical) annuity depreciation could be used, then Kay's method is expected to give the exactly correct IRR estimate; see Salmi and Luoma (1981: Appendix III) for the proof. The second corollary, in line with Solomon (1966: 115), is that if the growth and the profitability agree (r = k) then, again, Kay's method is expected to give the exactly correct IRR estimate. 3.2. Ijiri-Salamon Method As was seen in the previous section, Kay's method can be interpreted as a method that seeks the link between the IRR and the ARR. Another route is taken in the Ijiri-Salamon method.
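Before turning to that route, the iteration behind Kay's estimate is easy to sketch. The fixed-point form used below, a = sum p[t](1+a)^(-t) / sum v[t-1](1+a)^(-t) with the sums running from t = 2 to n, is an assumption consistent with the weighted-average-of-ARRs interpretation of Formula (21) rather than a restatement of Formula (20); convergence is not guaranteed in general (cf. Steele 1986), and the starting value and function names are illustrative rather than the original Turbo Pascal implementation.

```python
def kay_irr(p, v, a0=0.1, max_iter=200, tol=1e-10):
    """Fixed-point sketch of Kay's estimate (cf. Formulas (20)-(21)).

    p[t] = accounting profit of year t, v[t] = end-of-year book value,
    both observed for years 1..n; the sums start at t = 2, where the
    opening book value v[t-1] first becomes available.
    """
    n = len(p)                       # p[0] and v[0] correspond to year 1
    a = a0
    for _ in range(max_iter):
        num = sum(p[t - 1] / (1.0 + a) ** t for t in range(2, n + 1))
        den = sum(v[t - 2] / (1.0 + a) ** t for t in range(2, n + 1))
        a_new = num / den            # a weighted average of the annual ARRs p[t]/v[t-1]
        if abs(a_new - a) < tol:
            return a_new
        a = a_new
    return a                         # convergence is assumed, not guaranteed
```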
Ijiri (1979) presented what Salamon (1982) interpreted and expanded as an IRR estimation method based on the concept of the cash recovery rate, CRR. Ijiri (1979: 259) derived the following relationship between CRR and IRR When the CRR is known, the corresponding value of IRR can be readily solved by numerical iteration from Formula (24) using e.g. the bisection method. The IRR estimation problem thus becomes a CRR estimation problem. The central idea of the Ijiri-Salamon method is using this surrogate because CRR is easier than IRR to estimate from the financial statements. The cash recovery rate CRR can be defined as the ratio between the cash inflows from capital investments and the outstanding gross capital investments. Ijiri (1980: 55) presents the calculation of an annual CRR from published financial statements as (25) CRR[t] = Cash Recoveries / Gross Assets Cash Recoveries = (Funds from Operations) + (Proceeds from Disposal of Long-Term Assets) + (Decrease in Total Current Assets) + (Interest Expense) Gross Assets = (Total Assets) + (Accumulated Depreciation), averaged between beginning and ending balances. In our simulation evaluation the cash recoveries are simply equivalent to f[t]. The gross assets must be discussed in more detail. The total assets are given directly by the book value v[t-1]. First, when the total assets have been defined the accumulated depreciation must be assessed to get the gross assets. Second, the beginning instead of the average book values are used in our study. In financial statement analysis practice the accumulated depreciation is typically obtained by canceling backwards the depreciations for a suitable span of years. In analysis practice the choice of the backwards span tends to be somewhat arbitrary. However, it is mathematically obvious that given the average life-span of the capital investments and a constant level annual depreciations, the accumulated depreciation will be given by accumulating the depreciations from half the average life-span. While this result concerns the straight-line depreciation, the choice will be used as the best approximation for all the depreciation profiles. Furthermore, Ijiri's approach requires an estimate of the life-span N of the firm's capital investments. This means a potential source of further estimation errors in the method. In simulation the true life-span of the capital investments is known accurately. Hence the effect of the accuracy of estimating the life-span of the capital investments can be examined for Ijiri-Salamon method in our simulation approach. Note that this potential source of error is not present in Kay's method. Next consider the different conventions in calculating the book values in financial statement analysis. Instead of the often suggested averaging between the annual beginning and ending book values we use the beginning values v[t-1]. This leads to more accurate results when a discrete instead of a continuous approach is used. This choice is in line with the treatment of Kay's method in Salmi and Luoma (1981) and Peasnell (1982a). The estimates of the annual cash recovery rates CRR[t] are calculated from where V[t] denotes the gross assets at the end of year t calculated from where N is assumed an even integer for notational simplicity. The calculated CRR[t] values are averaged and the average is substituted as CRR into Formula (24) in line with Ijiri (1980). Ijiri's IRR estimate can then be iterated from Formula (24). 
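Operationally, the procedure just described can be sketched as follows. The closed form of Formula (24) is not restated here, so the CRR-to-IRR mapping is passed in as a function; the monotonicity assumed for the bisection, the use of beginning-of-year gross assets, and all names are illustrative assumptions in line with the description above.

```python
def ijiri_salamon_irr(f, v, accum_d, N, crr_given_irr, lo=1e-6, hi=1.0, tol=1e-10):
    """Sketch: average the annual cash recovery rates (Formula (26)) and invert
    the CRR-IRR relation of Formula (24) by bisection.

    f[t], v[t]    : cash inflow and end-of-year book value, years 1..n
    accum_d[t]    : accumulated depreciation at year-end t -- either the exact
                    figure (known in simulation) or the half-life-span
                    approximation in the spirit of Formula (27)
    crr_given_irr : callable implementing Formula (24), crr_given_irr(irr, N)
    """
    n = len(f)
    V = [v[t] + accum_d[t] for t in range(n)]          # gross assets, Formula (25)
    crrs = [f[t] / V[t - 1] for t in range(1, n)]      # beginning-of-year gross assets
    crr_avg = sum(crrs) / len(crrs)
    while hi - lo > tol:                               # assumes Formula (24) increases in IRR
        mid = 0.5 * (lo + hi)
        if crr_given_irr(mid, N) < crr_avg:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```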
Since we are using a simulation approach with a fully known engine to generate the observations, we also have the option to calculate the exact accumulated depreciation. This enables us to differentiate between the sources of the error in the IRR estimate. The components of the error are the error due to Ijiri-Salamon's method and the error due to the approximation of the accumulated depreciation. 3.3. Ruuhela's Method The third method to be included in our analysis is the IRR estimation component of Ruuhela's "Growth, Profitability and Financing" model. As we have seen in the above, Kay's method is based on a relationship between the ARR and the IRR, and the Ijiri-Salamon method on the relationship between the CRR and the IRR. Ruuhela's method can be considered to fall into a category of direct estimation of the IRR from the financial statements without the intermediate ARR or CRR concepts. The method was first presented in Ruuhela (1972) and mathematically streamlined by Salmi (1982). The method was restructured in Ruuhela et al. (1982). The explicit estimation of the firm's growth and the assumption of a stable business-culture period are characteristic of Ruuhela's approach. Ruuhela's IRR estimate is given by a closed-form expression in which k is the growth-rate trend of the capital expenditures, a[N,k] is the annuity factor and F is the capital investment ratio of the firm. Ruuhela's method assumes a constant, exponential growth of the capital-investment g[t] and the cash-inflow f[t] time series of the firm. The quotient F of the two time series thus is constant in the method. Ruuhela's method also assumes that the capital investments contribute in accordance with the Anton distribution. In applying Ruuhela's method an estimate of the common growth rate of the firm's time series is needed. Most often an OLS estimate of the growth-trend of the firm's funds from operations corresponding to f[t] is used as the estimate. Given the OLS estimate of the growth rate, Ruuhela's IRR estimate can then be calculated. 3.4. Discussion of the Model-Oriented IRR Estimation Methods Consider the conceptual backgrounds of the three methods presented so far. The IRR estimation formulas of Kay's and the Ijiri-Salamon methods draw on the relationship between an income statement variable (a flow variable) and a balance sheet variable (a stock variable). As is seen from Formula (20), Kay's method involves the accounting profit p[t] and the book value of the firm v[t]. Conceptually, Kay's presentation leans heavily on exploring the relationship between the economist's and the accountant's rate of profit. The Ijiri-Salamon method involves the cash inflows f[t], the gross assets V[t], i.e. the book value of the assets undepreciated, and the life-span N of the firm's capital investments. This is readily seen from Formulas (26) and (20). The concept of the cash recovery rate is central in the method. Ruuhela's method is directly based on the conventional internal rate of return model of capital investments. Ruuhela's method consequently directly involves the two relevant flow variables, the cash outflows to the capital investments g[t] and the annual cash inflows f[t], and the concept of discounting in the form of the annuity factor a[N,k]. No stock-concept variable is involved. The role of the growth variable k comes from the fact that the consecutive capital investments, which produce the corresponding lagged cash inflows, typically grow in a going concern. Ruuhela stresses that the profitability of the firm is a long-term concept based on the business culture of the firm, that is, on its ability to generate and utilize capital investment opportunities.
According to Ruuhela firms usually experience long phases of stable business culture when the long-run profitability stays on a rather fixed level. Profitability can be measured for such stable intervals. In corporate life a change or a discontinuity in business culture often coincides with a change of the top-level management. At such junctures the long-run profitability typically changes and should be estimated anew. Ruuhela prefers to call the profitability of such a stable period the profitability of the business culture rather than the profitability of the legal entity, the firm. In our simulation testing the business culture is taken as unchanged. 3.5. Averaged Accountant's Rate of Return Method The fourth and last method included into our analysis is based on straight-forward accounting practice. Much of the discussion, ever since Vatter (1966), in the ARR vs. IRR debate has centered around the question whether or not the ARR is a good approximation of the IRR. Instead of reentering the deductive debate we seek a resolution to this question by including the averaged ARR in our simulation and comparison. The inclusion of the average ARR method is prompted by the fact that accounting practitioners routinely use and are comfortable with the concept of annual profits and return on investment. Employing averaged ARR as the IRR estimate can be considered a direct extension of this business practice. The average ARR is calculated as the arithmetic average of the accountant's annual rate of return from Formula (22). Technically, an average can be calculated as an arithmetic average or a value-weighted average. We use the former for two reasons. First, the arithmetic average is in line with business practice. Second, an average with a large fairly stable denominator is very little affected by the choice of the averaging method. Only in the case of major shocks some differences might exist. The beginning book values are used in the denominator instead of the annual averages in line with our treatment of Kay's and Ijiri-Salamon methods. Our advance hypothesis is that the average ARR method will not be inferior to the other methods. Our hypothesis is based on the concept of economic Darwinism. Quoting Watts and Zimmerman (1986; 195) "Competition among firms implies that operating procedures ... that are used systematically by surviving organizations are efficient." 3.6. Discussion of Market-Based Methods The methods discussed so far use pure accounting data from the income statement and the balance sheet. The internal rate of return, however, is based on future cash flows in line with the economist's income concepts and valuation of assets. The question arises if IRR estimation methods based on market values rather than book values should be used. There are several papers putting forward implicit or explicit suggestions of an estimation of the IRR involving the market values of the firm's stock. Reconsider Kay's method. Formula (23) can be interpreted as a suggestion by Kay to adjust the accounting-based IRR estimate with the market value of the firm w[t] to arrive at the internal rate of return which would agree with the economist's rate of return. Lawson (1980) presented a method for estimating the equity, debt and entity rates of return for the firm. To estimate IRR, his cash-flow based method equates the discounted operating cash flow less net capital investment less tax payments less/plus liquidity change to the discounted sum of initial and the ending (market-based) value of the firm. 
Steele (1986: 8) suggested in his paper evaluating the derivations in Salmi (1982) and Peasnell (1982a, 1982b) an alternative version of Kay's Formula (20) that includes market values in the estimation of the firm's IRR. Theoretically the idea of basing the IRR estimates on stock prices is sound, because the prices reflect the economist's valuation of the firm's future income in line with the internal rate of return concept. However, there are some serious problems with the practicality of this theoretically well-founded approach. First, firms are not necessarily traded on stock exchanges, so genuine market values would not be readily available for a considerable number of firms. Second, it is a well-known fact that stock prices are more volatile than accounting earnings. This indicates a potential, temporal instability in what should be stable long-run profitability estimates. Third, it is not easy to assess whether or not the accounting function of business firms would agree on a measure of income based on market values instead of deep-rooted accounting conventions. Despite the practical reservations stated in the above, the evaluation of the market-based methods of IRR estimation would be highly interesting. However, we do not pursue this avenue in the paper at hand. The line of enquiry is not readily amenable to the present simulation model. In particular, the problem is establishing a reliable procedure that would give the market values of the simulated firm which would be exactly compatible with the true internal rate of return. Extrapolating the time series into infinity in the simulation is not a viable answer. The results would be too volatile for the simulation evaluation. Furthermore, an extrapolation to infinity would be unrealistic. The business culture of a firm is not preserved to infinity with an unchanged long-term profitability. However, it can be noted that there is some recent research information about the IRR estimates arrived at by the accounting-based vs. market-based methods. An unpublished master's thesis prepared under our supervision tentatively indicates that the IRR estimates for a sample of real-life business firms derived from the Ijiri-Salamon method are much more closely related to the estimates from the accounting-based Ruuhela's method than to the estimates from the market-based Lawson's method. 4.1. Simulation Design and Data Description To tackle the research questions posed we use the research design delineated by Figure 4. The financial data is generated for the different parameter combinations listed in Table 1. The IRR estimates are obtained for the chosen methods under these different parameter combinations. The obtained IRR estimates are then compared with the true internal rate of return for which the data was generated. Our first research question concerns the effect of the business cycles on the robustness of the four IRR estimation methods. For our simulation it is realistic to assume that the long-run average length of a business cycle is six years (C = 6 in Formula (2)). In the simulation the length of the observation period is set at 13 years covering two full business cycles. Three alternative amplitudes of the cycles are used in our simulations. For no cycles we set A = 0.00, for medium cycles we set A = 0.50 and for strong cycles A = 1.00. With an amplitude A = 0.00 there are no business cycles in the capital investments, only the trend and the noise. With A = 1.00 the capital expenditures double from the trend and fall to zero in six-year cycles.
The amplitude A = 0.50 is between the two. Where the results are found to be insensitive to the cycles, the amplitude is fixed at the average case in the exposition of the results. The IRR estimation results for the methods under observation will be presented for the different combinations of the essential parameters based on one instance of each combination. One of the components is the random fluctuation in the cyclical level of the capital investments, i.e. the noise term 1+ Our second research question concerns the effect of contribution patterns and the life-span estimates of the capital investments. As was discussed in an earlier section, the underlying contribution pattern of the capital investment process of a real-life business firm cannot be readily, if at all, unraveled from the firm's financial statements. Thus the generic contribution distribution of the firm is not known. Consequently, we simulate the effects of three potential contribution distributions (c.f. Figure 3). The "neutral" uniform contribution distribution, the "growth- maturity-decline" negative binomial contribution distribution and the "steady-decline" Anton contribution distribution are selected. The life-span of the capital investments in the simulation will be set at 20 years. The contribution coefficients for the uniform contribution distribution from Formula (7) for alternative profitabilities become 0.0735 for r = 4%, 0.1018 for 8%, 0.1339 for 12% and 0.1686 for 16%. The negative binomial contribution distribution coefficients from Formulas (9) and (10) are delineated by Figure 5 for an 12% example-level of profitability. Likewise, from Formula (11) the corresponding contribution coefficients for the Anton distribution for the 12% profitability level decline linearly from 0.170 to 0.056. The life-span of the capital investments affects the numerical values of the chosen contribution distribution and the annual depreciation figures. The life-span of the capital investments is known in the simulation (we have chosen a typical 20 years), but it cannot be accurately known in applications on real-life business firms. This is one of the potential sources of inaccuracy in the IRR estimation methods. The Ijiri-Salamon method and Ruuhela's method require an estimate of the life-span as part of the IRR estimation procedure while Kay's and the average ARR methods do not. The effect of misestimating the life-span in the two susceptible methods will be considered in the analysis section by comparing the IRR estimates with a 20-year life-span to the results with a 16-year and a 24-year life-span. Our third research question concerns the effect of a disparity between the firm's growth and profitability. As was discussed the earlier literature poses that a growth and profitability equality has a special meaning in the relationship between IRR and ARR. We fix a growth rate of k = 8% in the simulation. The simulated data is generated to produce true profitability figures of r = 4%, 8%, 12% and 16%. The true rates are at and on both sides of the growth rate. Here the relation between the profitability and growth is crucial rather than the absolute levels. Therefore, either growth or profitability could have been fixed for a meaningful simulation and the other varied. We have chosen to fix the growth rate and vary profitability to achieve the cases of low profitability (4%) compared to growth, equal rates (8%) and high profitabilities (12% and 16%). 
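Putting the pieces of the simulation model together, the following sketch shows how a statement realization of the kind displayed in Table 2 below can be produced: the cash inflows are the lagged contributions of Formula (4), the straight-line and double-declining-balance charges follow Formulas (14)-(15) with the final-year write-off, and profit and book value follow Formulas (12)-(13). The inline investment series mirrors the engine sketched earlier; the noise scale and all names are illustrative assumptions.

```python
import numpy as np

def cash_inflows(g, b):
    """Formula (4): cash inflow of each year as the sum of lagged contributions
    b[i]*g from the min(N, t) most recent prior years (array index 0 = year 1)."""
    T, N = len(g), len(b)
    f = np.zeros(T)
    for t in range(T):
        for i in range(1, min(N, t) + 1):
            f[t] += b[i - 1] * g[t - i]
    return f

def straight_line(g, N):
    """Formula (14) sketch: each outlay is written off evenly over the N years
    following the investment year."""
    T, d = len(g), np.zeros(len(g))
    for s in range(T):
        for j in range(1, N + 1):
            if s + j < T:
                d[s + j] += g[s] / N
    return d

def declining_balance(g, N):
    """Formula (15) sketch: double-declining-balance at rate 2/N, with the
    remaining book value written off in the last year of the life-span."""
    T, d = len(g), np.zeros(len(g))
    for s in range(T):
        book = g[s]
        for j in range(1, N + 1):
            charge = book if j == N else book * 2.0 / N
            if s + j < T:
                d[s + j] += charge
            book -= charge
    return d

# Investment series as in the engine sketched earlier (trend, cycle, noise)
rng = np.random.default_rng(1)
years = np.arange(1, 35)
g = 100.0 * 1.08 ** years * (1 + 0.5 * np.sin(2 * np.pi * years / 6)) \
    * (1 + 0.1 * rng.standard_normal(34))

b = [0.12 / (1.0 - 1.12 ** -20)] * 20        # uniform contributions, true IRR 12%
f, d = cash_inflows(g, b), straight_line(g, 20)
p = f - d                                    # Formula (12)
v = np.cumsum(g - d)                         # Formula (13), opening book value zero
```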
The selected combinations are intended to tally with common growth-profitability combinations of real-life business firms. Figures 6 and 7 present the growth vs. profitability combinations for a sample of 87 U.S. and 244 Finnish firms between 1969-88 and 1965-94 respectively. The data are based on unpublished master's theses written at the University of Vaasa using one of the methods, Ruuhela's method. Our fourth question concerns the sensitivity of the methods to the depreciation choice that the firm has used in preparing its financial statements. The simulated time series are produced for three different depreciation methods to evaluate their effect on the results. The first two methods are the straight-line depreciation and double-declining-balance depreciation based on the common accounting practice. The third method to be used in the analysis is the theoretical annuity depreciation. The assumed 20-year life-span of the simulated capital investments means that the annual rate of depreciation in generating the simulated data is 5% in the straight-line method and 10% in the double-declining-balance method. The figures for the theoretical annuity method of depreciation are a function of the true profitability as is seen in Formula (17). Our last question involves the effect of major irregularities in the level of the capital investments. The robustness of a profitability estimation method can be tested by including capital investment shocks in the model. In business terms such a shock is usually related to a major deviation from the level of capital investment pattern. Experiments are made with different magnitudes and timing of a one-time shock. The shock alternatives simulated are a five-fold shock and a seventeen-fold shock relative to the normal capital investment level in the third or in the ninth year. Table 2 gives an example of one realization of the time series from the simulated financial statements. The observation period in Table 2 is 13 years from the simulated year 22 to 34 (the lines not denoted by the *). The realization presented in Table 2 is for the case of the negative binomial contribution distribution with a true profitability of 12%, a growth trend of 8%, medium amplitude (A = 0.50) of business cycles, with noise ( The data are presented graphically in Figure 8. Because of their different scale the book values have not been included in Figure 8. The figure can be visually compared with the corresponding time series of actual business firms. Contrary to the more rigid, steadily growing series of earlier research, the series produced by our simulation model and parameters are realistic in terms of factual observations. This contention is readily corroborated by the empirical time series data gathered in the course of several research projects at University of Vaasa, such as Ruuhela et al. (1982). As in real-life business firms the simulated time series of the capital investments show a wide fluctuation while the derivative series are much smoother. This results from the fact that the capital investments produce the corresponding cash inflows over a long, lagged period and that similarly the depreciation is extended over the life-span of the capital investments. Furthermore, despite the fluctuations the underlying growth-trend for the firm is a constant in the simulation. Capital investment shocks are simulated to test the robustness of the IRR estimation methods. 
When shocks are included, the noise is excluded. The Ijiri-Salamon method needs an estimate of the gross book value of assets, as is seen in Formula (26). This figure is not routinely available on the balance sheet of a business firm. For obtaining the gross book value, an estimate of the cumulative depreciation is needed, as is seen in Formula (27). Table 3 displays the cumulative depreciation and the gross book value for the data in Table 2. The numbers are calculated for our error analysis in two different ways. The first two columns are calculated with the exact cumulative depreciation. In a simulation approach this is possible since the engine producing the financial data is known accurately. The two last columns are calculated in line with what could be done with actual data from business firms. 4.2. Evaluation of Kay's Method 4.2.1. Effect of Regular Business Cycles We begin the evaluations by assessing the effect of business cycles on Kay's method. The IRR estimates by Kay's method are presented in Table 4 for the three different levels of amplitudes in the business cycle. The results are presented in Table 4 for the negative binomial contribution distribution, which is the most general of the alternative distributions. To see the pure effect of the cyclical component we first omit the random noise term. The results are presented for the four different growth-profitability combinations and the three different depreciation methods: "Str" for straight-line depreciation, "Decl" for double-declining-balance depreciation and "Ann" for annuity depreciation. It is readily seen in the table that the effect of the business cycles is marginal for Kay's IRR estimation method. In the worst case with the strong cycles (A = 1.00) the difference between the IRR estimates 18.9% and 18.6% (16% true profitability and double-declining-balance depreciation) is only 0.3%. The presented result is for the negative binomial contribution distribution. The results for the other two contribution distributions, the uniform distribution and the Anton distribution, indicate a similar insensitivity. (The additional tables are not displayed for brevity.) Hence we can safely conclude that Kay's IRR estimation method is not affected by regular business cycles. This being the case, the rest of the analysis of Kay's method can be conducted without a loss of generality using the medium cycle strength (A = 0.50). 4.2.2. Overall Accuracy of Kay's IRR Estimates We can now analyze the total error in Kay's IRR estimates. Table 5 presents the results for Kay's IRR estimation method under medium business cycles. The noise component is now included. The results are condensed into a single table for the three contribution distributions. The general impression conveyed by Table 5 is that the level of Kay's IRR estimates is fairly well in line with the true profitability. In particular, when the firm's growth and profitability are near each other, Kay's method performs excellently. There are, however, situations where Kay's method performs poorly. The biggest absolute discrepancy in Table 5 between an estimate and the true internal rate of return is considerable. It is not easy to evaluate how serious the observed errors are from the point of view of decision making. It depends on whether the alternative methods give better estimates. Most importantly, the seriousness of a deviation would depend on what would be the consequences of the management of the firm having erroneous profitability information.
Predicting such consequences in quantitative terms is a very involved question and is outside the scope of our research. 4.2.3. Effect of Noise In Table 4 it was observed that the effect of the business cycles on the estimation error is marginal. To assess the effect of the noise component, Table 6 presents Kay's IRR estimates without the noise for a comparison with Table 5. While the noise term in the capital investment level seems to have more effect on Kay's IRR estimation than the regular cycles, the effect of noise is rather mild. At most the IRR estimate changes from 20.3% to 19.5% (in the case of the 16% true profitability, uniform contribution distribution and double-declining-balance depreciation). The magnitude of the difference is 0.8% compared to a total error of 4.3%. We conclude that noise is not a main source of the estimation errors. Hence the analysis can be founded on the results in Table 5. At this juncture a general word of caution is in order. It goes without saying that generalizations from conclusions based on simulation rather than analytical deduction should always be treated with a fair amount of caution. 4.2.4. Effect of Contribution Patterns, Growth-Profitability Relationship and Firm's Depreciation Choice Our second research question concerns the effect of the type of capital investments available to the firm. Consider Table 5 anew for the effect of the alternative contribution patterns. As pointed out earlier, the shape of the contribution distribution of the capital investments is not readily known for real-life firms. Therefore it is of interest to test whether the IRR estimation results are sensitive to this factor. It is seen that under capital investment opportunities that contribute in accordance with the negative binomial distribution, or the Anton distribution, the results are more accurate than under the non-declining uniform contribution distribution. Our third research question concerns the effect of the discrepancy between growth (k) and profitability (r). It is obvious from the results that a discrepancy between growth and profitability levels is the crucial source of error in Kay's IRR estimates. It is also noted that when r > k Kay's IRR systematically overestimates the true profitability (the special case of the straight-line depreciation under the Anton contribution distribution will be discussed in a later section). Thus it appears that Kay's method gives overly optimistic IRR estimates for firms with good profitability. For r < k the direction of the estimation error depends on the contribution distribution, the depreciation combination and the irregularities in the capital investments (the noise). Thus it would seem that it is not possible to make any predictions as to whether Kay's estimates for firms with low profitabilities are optimistic or pessimistic. Our fourth research question concerns the effect of the depreciation method choice that the firm makes. The effect of the firm's accounting choice appears highly important to the accuracy of the IRR estimates. The error in the estimates in Table 5 is about half or less when the firm applies the straight-line depreciation method instead of the double-declining-balance method. This observation raises interesting accounting issues about the depreciation method choice. 4.2.5. Effect of Major Capital Investment Shocks Our fifth research question concerns the effect of major capital investment shocks. Figure 9 delineates an example of the time-series data.
Table 7 gives Kay's IRR estimates under a third-year shock ("early shock"). Table 8 is for a ninth-year shock ("late shock"). To isolate the effect of the shocks, noise has been excluded. Kay's IRR estimation method seems to be reasonably robust to the capital investment shocks even if there is some disruption in the estimates. The effect of the shock seems to be to decrease the IRR estimates, the more so the bigger and the later the shock appears. The observed behavior is easy to explain. The one-time investment shock becomes dominant, and much of its effect falls outside the period under observation. For high profitabilities relative to growth the shock compensates for the error caused by the growth-profitability discrepancy. For low profitabilities the error from the growth-profitability discrepancy is even aggravated. It can be noted, however, that logically major capital investment shocks are not as likely to appear in corporations with profitability problems as in firms with good profitability prospects. Furthermore, as will be observed in the next section, the introduction of major capital investment shocks will cause deviations from the theoretically expected results. 4.2.6. Theoretical Considerations There are several theoretical assertions in earlier literature about the relationship between the internal rate of return and the accountant's rate of return under specific growth rates, depreciation methods and contribution distributions. Next we consider these assertions, under the more general conditions of business cycles and noise, utilizing our simulation results. Solomon (1966: 115) posited that when the growth rate and the true internal rate of return are equal, the accountant's rate of return also becomes the same. Consequently, it is theoretically to be expected that if the growth and profitability are exactly equal, Kay's method should give exactly the correct IRR estimate because it is built on the relationship between the IRR and the ARR. The equality would be expected to hold over all the contribution distributions and over all the depreciation methods. Formula (2) generates the capital investments. It adds several components to the constant growth model. Consider the presented theoretical contention with the added components. Table 6 confirms that the expected equality holds (within the used numerical precision) not only in the case of constant, exponential growth but also in the case with the business cycles added. However, when irregularities are introduced in terms of the noise (cf. Table 5), the expected theoretical result no longer fully holds. The deviation is not marked numerically, but theoretically the assertion breaks. As is natural, the disruptive effect of the one-time capital investment shocks is more marked than that of the noise. Analytically, the accountant's rate of return and the internal rate of return are equal when the annuity method of depreciation is used (see e.g. Salmi and Luoma, 1981: 28 and Peasnell, 1982a: 364). The simulation results for Kay's method are in agreement with this contention for all the observed combinations of growth vs. profitability and for all contribution distributions even with the irregularities introduced upon the growth-trend and the business cycles in terms of the noise and the capital investment shocks. See the columns marked "Ann" in Tables 5, 7 and 8. The theoretical results about the annuity depreciation are very strong.
They are in line with discussions and results in literature about accountant's and the economist's concepts of income. It is a well-known result that the theoretical annuity depreciation method and the business practice straight-line depreciation method yield the same depreciation if the contribution distribution for the capital investments is the Anton distribution. See Solomon (1971; 168 footnote) for references. Consequently, for the Anton contribution distribution the simulations should produce the same IRR estimate for the straight-line depreciation as it does for the annuity depreciation. Also this theoretical contention is corroborated by the simulation. Compare the columns marked "Ann" and "Str" below "Anton" in Table 5. This result holds even if major capital investment shocks are introduced. (The numerical tables for the Anton distribution with the major investment shocks are not displayed for brevity.) 4.2.7. Conclusions about Kay's Method The main findings about Kay's IRR estimation method are the following. Under ordinary circumstances Kay's method performs quite well. However, the deviation of Kay's IRR estimates from the true internal rate of return can be considerable if the growth rate of the firm and its profitability are not near each other. This is the main source of error in Kay's method. Kay's method seems to lead to systematically overoptimistic profitability estimates when the firm's true profitability exceeds the firm's growth considerably. If the true profitability is below the firm's growth the nature of Kay's IRR estimate is ambiguous. The magnitude of the error caused by a growth-profitability gap is jointly dependent on the contribution pattern of the capital investments, the firm's depreciation choice and the noise in the capital investment time series. Kay's method is not affected by regular business-cycle fluctuations in the capital investment time series, but it is mildly affected by noise. Kay's method is reasonably robust to major capital investments shocks. The irrelevance of business cycles and the mild effect of noise on the accuracy of the estimates are important advantages in Kay's method. Kay's method has a firm theoretical background in the theory of accountant's and economist's profit concepts. This fact is reflected in always getting exactly the expected IRR estimates under the theoretical annuity depreciation and getting fairly accurate IRR estimates under the equality of growth and profitability. Furthermore, if the capital investments contribute in accordance to the Anton distribution, the estimates under the firm applying a straight-line depreciation are accurate. 4.3. Evaluation of Ijiri-Salamon Method 4.3.1. Exposition of the IRR Estimates with Ijiri-Salamon The cash-recovery-rate-based Ijiri-Salamon IRR estimation method differs from Kay's method in two respects in the data that it needs. An estimate of the life-span of the firm's capital investments is needed. Furthermore, an estimate of the gross book value of the firm's assets is needed. (The gross assets V[t] are the net assets v[t] plus the accumulated depreciation. Cf. Formula (25).) This fact introduces two additional, potential sources of error to the method: a misestimation of the life-span of the capital investments and a misestimation of the gross book value. In evaluating Ijiri-Salamon method we can utilize the fact that in simulation the life-span (N = 20) and the true accumulated depreciation, and hence the gross book value of the firm's assets are known precisely. 
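The two ways of obtaining the accumulated depreciation can be sketched as follows. The exact version assumes that a vintage is retired from the gross figure once its life-span is over, and the approximation follows the half-life-span rule described for Formula (27); both forms are assumptions about the exact bookkeeping conventions rather than restatements of the original formulas, and the names are illustrative.

```python
def accum_depr_exact(g, v, t, N):
    """Exact accumulated depreciation at the end of year index t (0-based),
    available in simulation: gross cost of the N most recent vintages still
    on the books minus the net book value v[t].  Assumes fully written-off
    vintages are retired from the gross figure."""
    alive_cost = sum(g[max(0, t - N + 1): t + 1])
    return alive_cost - v[t]

def accum_depr_approx(d, t, N):
    """Half-life-span approximation in the spirit of Formula (27): cumulate
    the depreciation charges of the most recent N/2 years, as one would do
    with a limited run of published statements."""
    return sum(d[max(0, t - N // 2 + 1): t + 1])
```

The difference between the two figures is precisely the component of the Ijiri-Salamon estimation error attributed below to the approximation of the accumulated depreciation.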
An example of the accurate accumulated depreciation D[t] and the accurate gross book value V[t] was presented in Table 3 in describing the data of the simulation. Tables 9 to 11 present the IRR estimates with the Ijiri-Salamon method with noise included. These tables for the three different contribution distributions include the results for three alternative estimates of the capital investments' life-span E(N). The IRR estimates are presented assuming a correctly estimated life-span of 20 years, an underestimate of 16 years, and an overestimate of 24 years. In other words, the life-span estimates are off the mark by a fourth. Table 12 presents the IRR estimates for comparison without the noise. Table 13 presents the estimates in the case of an early, realistic shock. For brevity, only the cases with the negative binomial distribution are displayed by Tables 12 and 13. The full set of the tables can, however, be readily reproduced for verification since the relevant computer source codes have been made available to the interested reader from the World Wide Web: <URL: http://lipas.uwasa.fi/~ts/smuc/prog/smucprog.html>. The IRR estimation results are presented assuming that the firm either employs the straight-line depreciation ("Str") or the double-declining-balance depreciation ("Decl"). The accumulated depreciation must be estimated from the financial statements. In accounting practice, the accumulated depreciation figure usually is an approximation based on a time series of recent financial statements. We use the estimate given by Formula (27). An example of the gross book value figures can be seen in the last column of Table 3. The accumulated depreciation can also be calculated accurately in the simulation approach. Ijiri-Salamon's IRR estimates with the accurate accumulated depreciation are presented in the "Accu" column of the tables. This particular information facilitates a decomposition analysis of the error sources in the IRR estimates. 4.3.2. Effect of Various Factors on Ijiri-Salamon IRR Estimates As is recalled, the first of our research questions concerns the effect of the business cycles on the IRR profitability estimation methods. As for Kay's method, our simulations for the Ijiri-Salamon method indicate that the method is not sensitive to cycles. For brevity, the numerical IRR estimation results for the different cycle amplitudes are not displayed. Therefore, the cycle amplitude was fixed at A = 0.50 in the tables presented in the previous section. As for Kay's method, the effect of noise is rather mild, as can be seen by comparing the representative Tables 10 and 12. Our fifth research question concerns the effect of investment shocks on the profitability estimates given by the various methods. Our simulations indicate that, like Kay's method, the Ijiri-Salamon method is reasonably robust to capital investment shocks. Compare Tables 12 and 13 for an example effect of the capital investment shock. In fact, an investigation of the two tables shows that the effect of misestimating the life-span of the capital investments is mostly a more marked source of the IRR estimation error than the effect of the capital investment shocks. A comparison of Tables 12 and 13 with the pair of Tables 6 and 7 for Kay's method indicates that while the effect of the capital investment shocks is not destructive on the methods, its effect on the Ijiri-Salamon method is the more marked of the two. Overall, the Ijiri-Salamon method fares on the average in the simulations comparably to Kay's method.
The worst cases in the regular Tables 9 to 11 appear when the profitability is low compared to the growth. Ijiri-Salamon IRR estimate at worst is 50% off the mark in relative terms. However, in the Ijiri-Salamon method there is no clear pattern to the errors. Unlike in Kay's method there are no cases where the error would disappear. Furthermore, there is no clear pattern to the direction and the magnitude of the error. As has been discussed, the realization of the theoretical assertions concerning the growth-profitability equality conditions, the annuity depreciation and Anton distribution could be checked. However, these assertions do not cover the relationship between the cash recovery rate and the internal rate of return. This state of matters also is clearly reflected in the simulation results as a lack of similar theoretical regularities as were observed in the results for Kay's IRR estimation method. This can be considered a disadvantage. 4.3.3. Decomposition of the Ijiri-Salamon Method Estimation Error The simulation results for Ijiri-Salamon method seem at rough par with Kay's method. However, a decomposition of the sources of the overall error exposes a more critical picture of the potential quality of the IRR estimates by Ijiri-Salamon method. The total error in Ijiri-Salamon's IRR estimates is made up by several components, which individually can be larger in absolute terms than the total error, but the components of the error compensate each other in the presented simulations. Table 14 gives one example of the decomposition of the total error into three components. The error decomposed is for the IRR estimates listed in Table 10 for the columns of the double-declining-balance depreciation. The total error is made up of the following three components. If the user of Ijiri-Salamon method knew exactly the true life-span of the capital investments and were able to calculate the accumulated depreciation figures accurately, all the error would be attributable to the method's formal derivation. This error is listed in Table 14 in the column "Formula". However, the focus of interest is on deriving the estimates for real-life business firms. Hence the life-span of the capital investments cannot be readily known accurately. The column "Life span esti" displays how much of the total error is due to errors in estimating the life-span. Furthermore, obtaining the accumulated depreciation from a time series of published financial statements is not trivial and involves approximations in actual accounting practice. The column "Cumu depr calc" reflects the resultant error. The column "Tot. err" gives the total error, which is equivalent to the error in Table 10 between the estimated IRR and the true internal rate of return. 4.3.4. Conclusions about Ijiri-Salamon Method The main findings about Ijiri-Salamon IRR estimation method are the following. Like Kay's method the Ijiri-Salamon method performs quite well in estimating the long-run profitability of the firm. However, the error of the Ijiri-Salamon method is less predictable and thus more risky than in Kay's method, because of the many sources of the error. In Kay's method the main source of error is a discrepancy between growth and profitability. In the Ijiri-Salamon method it is not possible to pinpoint the main source of error because of their complicated interaction. Ijiri-Salamon method is unaffected by regular business-cycle fluctuations, but it is mildly affected by noise. 
The method is reasonably robust to one-time capital investments shocks. The other sources of errors dominate the shocks. Ijiri-Salamon method lacks similar theoretical results as are characteristic of Kay's method. The mathematical derivation of the method is sound. But the method is not based on the linkage between the income determination in accounting and economics. Hence, there are no theoretical expectations for the method's behavior under special circumstances. To sum up, the method fares comparatively well in practice but fares less well in the theoretical background. 4.4. Evaluation of Ruuhela's Method Also Ruuhela's IRR estimation method differs from Kay's in the financial statement data that it uses. Like Ijiri-Salamon method an estimate of the life-span of the capital investments is needed. Furthermore, Ruuhela's method needs the estimate of the growth rate of the firm. On the other hand, and very importantly, Ruuhela's method does not need the time series of depreciation. Ruuhela's IRR estimation method is independent of the depreciation method that the firm chooses. 4.4.1. Effect of Regular Business Cycles We begin the simulation evaluation of Ruuhela's IRR estimation method by considering our first research question which concerns the effect of business cycles. Tables 15 and 16 present the IRR estimates for the Anton contribution distribution. The derivation of Ruuhela's method assumes the Anton contribution distribution. The noise component is omitted in this section. We make these two choices in order to minimize the number of concurrent issues that need to be taken into account at this stage. The estimates in the tables are displayed for the different growth vs. profitability combinations, the alternative cycle amplitudes and the alternative estimates of the life-span of the investments. Ruuhela's method needs an estimate of the firm's growth rate. This growth rate is estimated in Ruuhela's method by OLS regression from the time series of the funds from operations corresponding to f [t], i.e. the simulated cash inflows. The OLS-estimated growth rates are given within the parentheses in Table 15. For comparison, Table 16 presents Ruuhela's IRR estimates with exactly the correct growth (k = 8%). It is readily seen that unlike in Kay's and Ijiri-Salamon methods Ruuhela's method is sensitive to the business cycles. It is also seen in Table 15 that when there are no cycles (A = 0.00), when the life-span estimate is equal to the true life-span (20 years) of the capital investments and when the capital investments contribute according to the Anton distribution that Ruuhela's method produces exactly the correct IRR estimates. Like Kay's method Ruuhela's method has under its own assumptions a direct linkage to the income (and depreciation) theory. Furthermore, it is obvious both from the formulas of Ruuhela's method (especially Formula (30)) and the empirical results presented (see Table 16, columns with cycles for the 20-year life-span estimate) that Ruuhela's constant-growth assumption is crucial for his method. The business cycles causes a deviation under even perfect growth estimates and correctly estimated life-spans of the capital investments. Given the methods assumptions of constant-growth and its observed sensitivity to business cycles it is not surprising that the worst cases in the tables appear with strong business cycles (A = 1.00) and with misestimated life-spans. 4.4.2. 
Effect of Noise To observe the effect of noise on Ruuhela's method we present the IRR estimates of Table 15 anew in Table 17 this time with the noise component included. Two observations can be made by comparing Tables 15 and 17. First the noise obviously affects the growth OLS estimates. On the other hand the profitability estimates do not change much. Hence the sensitivity of Ruuhela's method to noise is mild. This corroborates the importance of the effect of the cyclical component on Ruuhela's IRR estimation results. 4.4.3. Effect of Contribution Patterns and Growth-Profitability Relationship and Other Factors Our second research question concerns the effect of the cash contribution patterns of the capital investments available to the firm. Our third question concerns the effect of disparities between the firms growth rate and profitability. Tables 18 and 19 respectively give the IRR estimates for the different growth-profitability combinations under the uniform contribution distribution and the negative binomial distribution. Table 17 in the previous section contains the IRR estimates under the Anton contribution distribution for the firm's capital investments. The contribution pattern of the capital investments has an effect, but the effect is a joint effect with the other parameters of the IRR estimation situation. As discussed, in the case of Ruuhela's method the Anton distribution has a special role since it is used as an assumption in the derivation of the method. This is also seen in the tables. The best IRR estimates are gained under the Anton contribution distribution. The comparison of the tables for the case when growth equals profitability produces near-correct but not perfect estimates under no business cycles. A discrepancy between growth and profitability has a considerable effect on the quality of Ruuhela's IRR estimates. The effect of the fluctuations in the capital investments caused by business cycles is overriding in Ruuhela's method. With the increase of the cyclical fluctuations the growth vs. profitability equality loses its effect in Ruuhela's method. Our fourth question concerns the effect of the firm's choice of the depreciation method on the quality of the IRR estimates. In Ruuhela's method this question does not rise since the method is independent of the firm's depreciation choices. 4.4.4. Effect of Major Capital Investment Shocks Our last research question concerns the effect of major capital investment shocks on the reliability of the IRR estimation methods. Table 20 gives the OLS growth estimates It is readily seen from the table that with the introduction of major capital investment shocks the OLS growth estimation procedure is derailed. In conclusion, if there are major capital investment shocks, Ruuhela's method should not be applied on a the time period including such a structure-changing shock. (At the very least another method of growth estimation, like LAD estimation should be considers.) This observation is in line with Ruuhela's own observations about IRR estimation being valid only for periods of stable business culture. 4.4.5. Analysis of the Estimation Error in Ruuhela's Method Tables 21 and 22 decompose the IRR estimation error in Tables 19 and 17, respectively, into its components for Ruuhela's method. The results are presented for our benchmark contribution distributions, the negative binomial distribution, and for Anton distribution which features in the derivation of Ruuhela's method. 
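As a concrete illustration of the OLS growth estimation referred to above (and of the "Grwt esti" error component discussed next), the following sketch fits a constant growth rate to a cash-flow series by ordinary least squares on logarithms. The log-linear regression form ln f[t] = a + t ln(1 + k) and the example series are assumptions made for the illustration; they are not a quotation of Ruuhela's formulas.

import math

def ols_growth_rate(cash_flows):
    """Fit ln f[t] = a + b*t by least squares and return k = exp(b) - 1."""
    t = list(range(1, len(cash_flows) + 1))
    y = [math.log(f) for f in cash_flows]
    n = len(t)
    t_bar = sum(t) / n
    y_bar = sum(y) / n
    b = sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, y)) / \
        sum((ti - t_bar) ** 2 for ti in t)
    return math.exp(b) - 1.0

# A noiseless 8% growth series is recovered exactly; cycles and noise would bias it.
series = [100.0 * 1.08 ** t for t in range(1, 11)]
print(round(ols_growth_rate(series), 4))   # 0.08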
The components of the total error are attributable to deviation in the OLS growth estimate ("Grwt esti") and the error in the capital investments' life-span estimate ("Life span esti"). The third component is the remainder of the total error. The remainder is attributed to the IRR estimation formula ("Formula"). The errors can either strengthen or dampen each other. 4.4.6. Conclusions about Ruuhela's Method The main findings about Ruuhela's IRR estimation method are the following. Like Kay's method Ruuhela's method has a strong theoretical background in the linkage to the income determination theories of accounting and economics. The formal requirements of Ruuhela's method are more restrictive than Kay's. The constant-growth assumption is essential in Ruuhela's method. It explains the method's considerable sensitivity to business cycles and noise. Shocks should be excluded. They usually are involved with a change of business culture. An assumption of a unique IRR would be contested in applying Ruuhela's approach under the circumstances. A disparity between the firm's growth and profitability generally increased the deviation of Ruuhela's IRR estimate from the true IRR. This feature is common with Kay's method. The quality of the growth estimate affects Ruuhela's IRR estimate. The effect, however, is a joint effect with the other potential sources of error. Ruuhela's method is independent of the depreciation method that the firm uses. Thus the accounting choices of the firm with regard to depreciation policies do not affect Ruuhela's IRR estimation method unlike the other methods. 4.5. Averaged Accountant's Rate of Return Method The last method to be analyzed in this paper is the method of using the average ARR as the IRR estimate. The long-standing debate about the relevance of the averaged accountant's rate of return as a surrogate of the economist's theoretical profitability comes down to the question whether the average ARR is a good approximation of the firm's IRR, or whether the more complicated methods are the only avenue to a proper long-term profitability estimation (or if any are). The accountant's way of evaluating annual profits is dominant in business practice. Hence the soundness of extending the ARR concept to long-term profitability estimation is of paramount practical importance and interest. 4.5.1. Closeness of the Average ARR Method to Kay's Method As for Kay's method the effect of cycles is negligible for the average ARR method. Thus the results of the simulation analysis are not presented for all the cycle alternatives. Table 23 gives the IRR estimates using the average ARR method in the case of medium level of business cycles (A = 0.50) The IRR estimates produced by the average ARR method in Table 23 are strikingly similar to the simulation results with Kay's method in Table 5. The maximum difference in the estimates is only 0.1 per cent in absolute terms. This closeness is not an unexpected result, since Kay's method in the format in Formula (21) can be interpreted as an iterative weighted-average ARR method. Only if major investment shocks are introduced the average ARR method gives estimates that are markedly different from Kay's estimates. This can be seen by comparing Table 24 for the average ARR method and Table 7 for Kay's method for an early shock. A similar comparison be done for a late shock in Tables 8 and 25. The second entry in each cell gives the deviation between the IRR estimates from the average ARR method and Kay's method. 
The tables confirm that under ordinary cyclical conditions the average ARR method and Kay's method give virtually equivalent results. In practical long-run profit evaluation terms of the accountant there is no numerical difference between the two methods. Only with the excessive seventeen-fold capital investment shocks the picture of the equivalence between the two methods changes. The methods start deviating markedly for the disparate growth-profitability combinations. Neither method, Kay's nor the average ARR, consistently outperforms the other when shocks are present. For example, Kay's method fares better for an early seventeen-fold shock in the case high profitabilities, but the situation is reversed for the late shock or low profitabilities. 4.5.2. Theoretical Considerations and Conclusion Given the close kinship between Kay's method and the average ARR method it is interesting to observe which of the theoretical contentions still hold in the simulation for the average ARR method. The first theoretical contention discussed in connection with Kay's method was Solomon's position that when the growth rate and the true internal rate of return are equal, the accountant's rate of return also becomes the same. For Kay's method no numerical deviation from this equivalence is observed assuming perfectly regular cycles, no noise and no shocks (see Table 6). For the average ARR method the same observation is made when there are no cyclical fluctuations, no noise and no shocks. However, with the cyclical fluctuations, but no noise in Table 26 the relationship no longer holds accurately. The deviation, however, is marginal. (The maximum deviation 0.1 occurs in the table in the case of negative binomial contribution distribution and double-declining-balance depreciation). As will be recalled, the next theoretical contention is about the equivalence of the IRR and the ARR under the theoretical annuity depreciation method. The validity of this contention is very strong. In our simulations it holds throughout both for Kay's method and the average ARR method, even under disparate growth-profitability combinations and major capital investment shocks as can be observed from Tables 7, 8, 24 and 25. If the contributions from the capital investments follow the Anton distribution, the straight-line depreciation method results remain equivalent to the annuity depreciation results. Looking at Table 23 this result is seen to hold even under the ordinary conditions of business cycles and noise. However, with major capital investment shocks this theoretical contention ceases to hold both for Kay's and the average ARR methods. To conclude about the average ARR method, the simulated IRR results are virtually equivalent to the results with Kay's method with the exception of the effect of excessive capital investment shocks. Therefore much the same numerical conclusions apply which already were discussed in connection of evaluating Kay's method. They are not repeated. The general conclusion about the business-practice based average ARR method is, however, very important. The average ARR method mostly performs as well (or as badly) as any of the sophisticated IRR estimation methods analyzed in our research project. Considering this fact and the average ARR method's practical appeal it is safe to say that for a practitioner it comes out best of the methods analyzed in this paper. 
The importance of the other IRR estimation methods, especially that of Kay's and Ruuhela's, lies in their merits for the theory of accounting.
4.6. Comparison of the Results
In comparing the different methods for estimating the internal rate of return of the firm's capital investments the following aspects are relevant: numerical performance, theoretical foundations and practical applicability. In this section we summarize the results in general terms. First, consider numerical performance. In our simulations the relevant parameters are given such values as should put them in a realistic range with regard to actual business firms. Within the observed range none of the methods unequivocally outperforms the others in the simulation. The deviations in Kay's and the average ARR method are more regular and predictable than the deviations in Ijiri-Salamon and Ruuhela's methods. The number of potential sources of errors in Ijiri-Salamon and Ruuhela's methods is greater than in the other two methods. Since the errors of these methods partly compensate for each other, the resulting total error, while less predictable, is no worse for the Ijiri-Salamon method than for the other methods. Ruuhela's method is the most dependent of the methods on its internal assumptions. Under its restrictive assumptions it works perfectly, but in a general situation it also produces the worst of the overestimation errors if there are strong business cycles and if the firm's profitability exceeds its growth considerably. No common, generalizable pattern of errors emerged for the observed, different parameter combinations, with one tentative exception. Kay's method, Ruuhela's method and the average ARR method all have a tendency to overestimate rather than underestimate the true profitability when the firm's profitability exceeds its growth considerably. In the simulations of the present paper each of the boxes in the different tables can be considered "equally weighted". One potential direction of further research would be to adopt a numerical index to compare the numerical performance of the methods with each other. For this purpose it would be necessary to estimate from factual business observations the relative frequencies of the different combinations of the key parameters. (Some indication of the relative frequencies of the different cases is provided by the data in Figures 6 and 7.) In simulation, a Monte-Carlo approach could be adopted. Second, consider the methods' theoretical robustness in the light of the simulation results. Kay's method came out as the theoretically most generic, with the average ARR method very close by. The ARR equality to IRR when the growth rate and the IRR agree, the theoretical annuity depreciation method's IRR-conformance, and the posed relationship of the annuity and straight-line depreciation methods under Anton contribution distribution all were confirmed in the simulations with Kay's method. Ruuhela's method is theoretically very sound, but its constant-growth and Anton contribution distribution assumptions make it empirically more vulnerable than Kay's and the average ARR method. The Ijiri-Salamon method does not conform empirically to any of the expected theoretical propositions. This fact casts serious doubts on the theoretical validity of the method despite its relative reliability in the numerical simulation. The conclusion for the Ijiri-Salamon method is that it can be regarded as an elaborate, good rule of thumb.
The other methods have deep roots within income theories of accounting and economics. Last, consider practical applicability. In this area the average ARR method has the outstanding merit of being directly based on established accounting practice of performance measurement. It would be trivial to use computers to calculate Kay's IRR elaborate weighted-average estimates in business practice. However, the marginal improvement compared to the average ARR method does not compensate the obvious disadvantages of having to "sell" an iterative method to the users of financial information over the suggestion of using an average return on investment (ROI = ARR) for long-term profitability measurement. Ijiri-Salamon and Ruuhela's method are at a considerable disadvantage compared to the average ARR method since they require a fairly involved estimation process. In this light, for the practitioner it is our recommendation to choose for long-term profitability estimation the average ARR method over the more sophisticated IRR estimation methods. Knowing and understanding the analyzed, more sophisticated methods is not wasted, however. On the contrary, the practitioner should be aware of and familiar with the foundations of the methods s/he applies in order to make sound decisions. This research analyzes four internal rate of return (IRR) estimation methods from literature for assessing the long-term profitability of a business firm from its published financial statements. The IRR estimation methods considered are Kay's, the Ijiri-Salamon, Ruuhela's and the average ARR methods. A realistic simulation approach is developed to evaluate and compare the methods. A simulation approach with a known internal rate of return makes it possible to study the ability of the various methods to estimate the firm's true IRR. The research contributes by evaluating the performance of selected IRR estimation methods under more general conditions than the earlier literature. This is facilitated by including cyclical fluctuations, noise and the possibility of major capital investment shocks into the simulated financial data. Most importantly the research contributes in literature's long-standing dispute about the validity of accountant's rate of return ARR as a proxy for the IRR. Five research questions are posed concerning Kay's, Ijiri-Salamon, Ruuhela's and the average ARR methods. The questions cover how the methods are affected by business cycles and irregularities in the capital investments, the methods' sensitivity to capital investments' payback patterns, their sensitivity to disparity between growth and profitability, and their sensitivity to the accounting choices made by the firms. First, the effect of business cycles and orginary noise around the growth-trend of the firm's capital investments is of interest in evaluating the performance of the IRR estimation methods. The simulation model includes capital investment cycles in generating the simulated financial data. It is observed that three of the four methods are insensitive to cyclical fluctuations. The exception is Ruuhela's method which relies heavily on its constant-growth assumption. In the case of Kay's, Ijiri-Salamon and the average ARR method the insensitivity to business cycles is an important result because it confirms the applicability of the methods beyond the common steady-state assumptions. Furthermore, it is observed that ordinary noise in the capital investment time-series does not have a marked effect on the IRR estimates. 
Second, the sensitivity of the IRR estimation methods to the capital investment's payback patterns is of interest. The true pattern of contributions from the firm's capital investments is not known for actual business firms. Therefore, alternative contribution distributions are considered. It is observed that all the methods can be sensitive to the contribution distribution. The effect of the shape of the contribution distribution on the IRR estimates is interactively dependent on the depreciation methods applied by the firm and the relationship between growth and profitability. The conclusion is that contribution distribution of the firm's capital investments can have an effect of the quality of the IRR estimates given by the analyzed IRR estimation methods. Furthermore, contrary to the other two IRR estimation methods, Ijiri-Salamon and Ruuhela's methods require an estimate of the life-span of the firm's capital investments. The reliability of the IRR estimates by Ijiri-Salamon and Ruuhela's method depends on the quality of the life-span estimate. Third, it is to expected from theory that a disparity between the firm's growth rate and its long-term profitability affects the quality of the IRR estimates. It is observed that the reliability of the IRR estimates of all the methods is very sensitive to the relationship between the underlying true profitability and the firm's growth rate. In accordance to the simulation results the discrepancy between the true growth and profitability is the dominating source of the error in the IRR estimates in all the methods analyzed. In addition, the other sources of errors in the IRR estimates interact with the growth-profitability discrepancy. The errors can be aggravated by the discrepancy. This indicates that for better IRR estimation methods a correction for growth-profitability discrepancy should be an integral part. Fourth, the depreciation method applied by the firm in its financial statements can affect the IRR estimation result in concert with the contribution distribution of the capital investments. Also this effect is strongly related to the growth-profitability discrepancy. For example, for Kay's and the average ARR method a worst case of the interactive effect appears under the following circumstances: The firm grows fast, it has low profitability and the firm applies an accelerated depreciation method in a situation where the contribution from the capital investments happens to follow the uniform distribution. In this respect Ruuhela's method has an advantage over the other methods since it is unaffected by the firm's depreciation choice. Fifth, the simulations mostly indicate an unexpectedly good tolerance of the analyzed IRR estimation methods to major capital investment shocks. Ruuhela's method is the exception in this respect since its growth estimation is disrupted by such shocks. However, as discussed, in corporate practice a major capital investment shock is likely to coincide with a change in business culture. It is the literature's standard assumption of a constant IRR for the firm that comes to doubt under such circumstances. To conclude, the simulation comparison of the selected IRR estimation methods shows that none of the analyzed sophisticated methods performs consistently better than the average ARR method. Thus, considering the various facets discussed in this paper, the accounting-practice-based average ARR method can be recommended as the best choice for the long-term profitability estimation. 
However, none of the methods, including the average ARR, is an unbiased estimator of the firm's IRR. For fast growing firms with low profitability and for slow-growth firms with good profitability the long-term profitability estimates should be interpreted with much caution. On the other hand, the average ARR method can be safely used when a firm has comparable growth and profitability even when there are ordinary fluctuations and noise in the capital investment intensity. Anton, H.R. (1956). Depreciation, cost allocation and investment decisions. Accounting Research 7, 117-131. Artto, E. (1980). Profitability and cash stream analyses. Helsinki School of Economics, D-44, Helsinki. Bromwich, M. (1992). Financial Reporting, Information and Capital Markets. London: Pitman Publishing. Brealey, R.A. and S.C. Myers (1991). Principles of Corporate Finance, 4th ed. New York, N.Y.: McGraw-Hill, Inc. Brief, R.P. and R.A. Lawson (1992). The role of the accounting rate of return in financial statement analysis. Accounting Review 67:2, 411-426. Butler, D., K. Holland, and M. Tippett (1994). Economic and accounting (book) rates of return: Application of a statistical model. Accounting and Business Research 24:96, 303-318. Conte, S.D. (1965). Elementary Numerical Analysis: An Algorithmic Approach. Tokyo: Kogakusha Company, Ltd. Copeland, T.E. and J.F. Weston (1979). Financial Theory and Corporate Policy. Reading, Mass.: Addison-Wesley Publishing Company. Davidson, S. and R.L. Weil (1977). Handbook of Modern Accounting, 2nd ed. New York, N.Y.: McGraw-Hill. Fisher, F.M. and J.J. McGowan (1983). On the misuse of accounting rates of return to infer monopoly profits. American Economic Review 73:1, 82-97. Fisz, M. (1967). Probability Theory and Mathematical Statistics. 3rd ed. New York, N.Y.: John Wiley & Sons, Inc. Gordon, L.A. and M.M. Hamer (1988). Rates of return and cash flow profiles: An extension. The Accounting Review 63:3, 514-521. Ijiri, Y. (1979). Convergence of cash recovery rate. In: Quantitative Planning and Controlling. Essays in Honor of William Wager Cooper on the Occasion of His 65th Birthday. Ed. Y. Ijiri and A.B. Whinston. New York, N.Y.: Academic Press. Ijiri, Y. (1980). Recovery rate and cash flow accounting. Financial Executive 1980, 54-60. Kay, J.A. (1976). Accountants, too, could be happy in a golden age: The accountants rate of profit and the internal rate of return. Oxford Economic Papers (New Series) 28:3, 447-460. Kelly, G. and M. Tippett (1991). Economic and accounting rates of return: A statistical model. Accounting and Business Research 21:84, 321-329. Laitinen, E.K. (1980). Financial ratios and the basic economic factors of the firm: A steady state approach. Jyväskylä Studies in Computer Science, Economics and Statistics 1, Jyväskylä. Lawson, G.H. (1980). The measurement of corporate profitability on a cash-flow Basis. The International Journal of Accounting Education and Research 16:1, 11-46. Levhari, D. and T. Shrinivasan (1969). Optimal savings under uncertainty. Review of Economic Studies 36:2, 153-163. Levy, H. and M. Sarnat. (1986). Capital Investments and Financial Decision, 3rd ed. Englewood Cliffs, N.J.: Prentice-Hall Inc. Peasnell, K.V. (1982a). Some formal connections between economic values and yields and accounting numbers. Journal of Business Finance and Accounting 9:3, 361-381. Peasnell, K.V. (1982b). Estimating the internal rate of return from accounting profit rates. The Investment Analyst 1982, 26-31. Pike, R. (1996). 
A longitudinal survey on capital budgeting practices. Journal of Business Finance and Accounting 23:1, 79-92. Ruuhela, R.(1972). Yrityksen kasvu ja kannattavuus (in Finnish English summary: A capital investment model of the growth and profitability of the firm). Acta Academiae Helsingiensis, Series A:8. Ruuhela, R., T. Salmi, M. Luoma, and A. Laakkonen (1982). Direct estimation of the internal rate of return from published financial statements. The Finnish Journal of Business Economics 31:4, 329-345. Also available from World Wide Web: <URL: http://www.uwasa.fi/~ts/dire/dire.html>. Salamon, G.L. (1982). Cash recovery rates and measures of firm profitability. Accounting Review 57:2, 292-302. Salmi, T. (1982). Estimating the internal rate of return from published financial statements. Journal of Business Finance and Accounting 9:1, 63-74. Salmi, T. and M. Luoma (1981). Deriving the internal rate of return from accountant's rate of profit: Analysis and empirical estimation. The Finnish Journal of Business Economics 30:1, 20-45. Also available from World Wide Web: <URL:http://www.uwasa.fi/~ts/jkay/jkay.html>. Salmi, T. and T. Martikainen (1994). A review of the theoretical and empirical basis of financial ratio analysis. The Finnish Journal of Business Economics 43:4, 426-448. Also available from World Wide Web: <URL:http://www.uwasa.fi/~ts/ejre/ejre.html>. Salmi, T., R. Ruuhela, A. Laakkonen, R. Dahlstedt, and M. Luoma (1984). Extracting and analyzing the time series for profitability measurement from published financial statements: With results on publicly traded Finnish metal industry firms. Part III. The Finnish Journal of Business Economics 33:1, 23-48. Salmi, T. and I. Virtanen (1995). Deriving the internal rate of return from the accountant's rate of return: A simulation testbench. Proceedings of the University of Vaasa. Research Papers 201, Vaasa. Also available from World Wide Web: <URL:http://www.uwasa.fi/~ts/simu/simu.html>. Solomon, E. (1966). Return on investment: the relation of book-yield to true yield. In: Research in Accounting Measurement. Ed. R.K. Jaedicke, Y. Ijiri, and O.W. Nielsen. American Accounting Solomon, E. (1971). Return on investment: The continuing confusion among disparate measures. In: Accounting in Perspective. Ed. R.R. Sterling and W.F. Bentz. Cincinnati, Ohio: South-Western Publishing Co. Shinnar, R., O. Dressler, C.A. Feng, and A.I. Avidan (1989). Estimation of the economic rate of return for industrial companies. Journal of Business 62:3, 417-445. Stark, A.W. (1994). Some analytics of why conditional IRRs can contain growth rate related measurement error. Journal of Business Finance and Accounting 21:2, 219-229. Stark, A.W., H.M. Thomas, and I.D. Watson (1992). On the practical importance of systematic error in conditional IRRs. Journal of Business Finance and Accounting 19:3, 407-424. Steele, A. (1986). A note on estimating the internal rate of return from published financial statements. Journal of Business Finance and Accounting 13:1, 1-13. Suvas, A. (1994). Profitability, growth and the prediction of corporate failure. The Finnish Journal of Business Economics 43:4, 449-468. Tamminen, R. (1976). A theoretical study in the profitability of the firm. Acta Wasaensia, No 5, Vaasa. Teichroew, D., A.A. Robichek and M. Montalbano (1965). An analysis of criteria for investment and financing decision under certainty. Management Science 11:3, 395-403. Vatter, W.J. (1966). Income models, book yield and the rate of return. Accounting Review 41:4, 681-698. 
Watts, R.L. and J.L. Zimmerman (1986). Positive Accounting Theory. Englewood Cliffs, N.J.: Prentice-Hall Inc.
Weingartner, H.M. (1969). Some new views on the payback period and capital budgeting decisions. Management Science 15:2, 131-140.
Whittington, G. (1979). On the use of the accounting rate of return in empirical research. Accounting and Business Research 9, 201-208.
Yli-Olli, P. (1980). Investment and financing behaviour of Finnish industrial firms. Acta Wasaensia, No 12, Vaasa.
APPENDIX 1: List of Symbols
i, j, t = auxiliary indexes
g[0] = initial level of capital investments
g[t] = capital investments in year t
k = growth rate of the capital investments
T = length of the simulation period
n = length of the observation period (number of years under observation for the profitability estimation)
A = amplitude of the cycle
C = length of the cycle
z = random variable following the (0,1)-normal distribution
S = capital investment shock coefficient
f[t,i] = absolute contribution (cash inflow) in year t from the capital investment i years back
b[i] = relative contribution from the capital investment i years back
N = life-span of a capital investment project (the same for all capital investments)
f[t] = cash inflow in year t
r = true internal rate of return of the simulated firm
q = shape parameter for the negative binomial distribution
m = location parameter for the negative binomial distribution
P[j] = negative binomial distribution
s = scaling factor
p[t] = accounting profit in year t
d[t] = depreciation in year t
v[t] = book value of the firm at the end of year t
w[t] = market value of the firm at the end of year t
CRR[t] = cash recovery rate in year t
V[t] = gross assets at the end of year t
D[t] = accumulated depreciation
E(N) = estimate of the life-span of the capital investments
a[N,k] = annuity factor for N years at a rate of k
F = capital investment ratio
ARR[t] = accountant's rate of return in year t
APPENDIX 2: Annuity Depreciation under Anton Contribution Distribution
Assuming the Anton contribution distribution from Formula (12), Formula (18) defines the annuity depreciation of a single capital investment g as
(A2.1) d[t] = {1/N + [(N-t+1)/N] r} g - r v[t-1].
For t = 1 we have, considering that v[0] = g,
(A2.2) d[1] = (1/N + r) g - r g = (1/N) g.
Thus the annuity depreciation d[1] is equal to the straight-line depreciation (1/N) g. Likewise, for t = 2 we have
(A2.3) d[2] = {1/N + [(N-1)/N] r} g - r [g - d[1]] = {1/N + [1 - 1/N] r} g - r [1 - 1/N] g = (1/N) g.
Repeating the process, the general d[t] becomes
(A2.4) d[t] = (1/N) g,
which, again, is equal to the straight-line depreciation.
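A quick numerical check of the Appendix 2 result: with the Anton contribution pattern, the annuity depreciation charge works out to g/N in every year of the asset's life. The sketch below only verifies that identity for illustrative values of g, N and r; it is not part of the paper's simulation code.

def annuity_depreciation_anton(g, N, r):
    """Charges d[t] = f[t] - r*v[t-1] with Anton contributions
    f[t] = (1/N + (N - t + 1)/N * r) * g."""
    v_prev = g
    charges = []
    for t in range(1, N + 1):
        f_t = (1.0 / N + (N - t + 1) / N * r) * g
        d_t = f_t - r * v_prev
        charges.append(d_t)
        v_prev -= d_t
    return charges

g, N, r = 1000.0, 20, 0.10   # illustrative values
d = annuity_depreciation_anton(g, N, r)
print(all(abs(x - g / N) < 1e-9 for x in d))   # True: every charge equals g/N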
{"url":"http://lipas.uwasa.fi/~ts/smuc/smuc.html","timestamp":"2014-04-19T14:28:55Z","content_type":null,"content_length":"164785","record_id":"<urn:uuid:c8b529f9-7b7f-4207-87c7-643fec32be57>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
Short lattice vectors orthogonal to a random vector

Let $N$ be some prime number. Suppose I draw $s$ elements $g_1,\ldots, g_s$, where each $g_i\in [N]$ is taken uniformly from some interval $I_i$ of size, say, $\sqrt{N}$. Is it possible to provide a lower bound (which works on average, or w.h.p.) on the minimal length of a vector $h\in \mathbf{Z}^s$ for which $h \cdot g = 0 \pmod{N}$? Here I refer to the length of a vector as the magnitude of its largest coordinate. At least intuitively, I would say the minimal length $h$ for a "typical" $g$ is $\Omega(\sqrt{N})$. One can assume that $s$ is much smaller than $\log^a(N)$, where $a<1$, so that there is negligible chance of two disjoint subsets of $g_i$'s having the exact same sum.

I think you'll be interested in thm 2 of this paper by Venkatesh - math.stanford.edu/~akshay/research/andreas.pdf There's a good chance one can formulate it better (and provide a bit shorter proof) for your case, as you're dealing with the homogeneous case and not the inhomogeneous one (which requires analysis of the affine group, rather than $SL$). – Asaf Sep 25 '13 at 8:52

1 Answer

I left my old answer below. For very small $s$ I think that the right answer is about $M=N^{1/s}$ (for non-negative entries). My reasoning is that one can pick all but the last entry of $h$ freely and then the last entry is forced and uniformly distributed in $[0,N-1].$ So if we run over the $k=M^{s-1}$ ways to make those choices keeping the entries under $M$, we expect the smallest possibility for that last entry to be of order $\frac{N}{M^{s-1}}$. Here is a small experiment with $N=1009$ and $s=3$. Ten times I picked 3 random elements and then looked for the minimal vector $h$ (using non-negative entries). I expected about $p^{1/3} \approx 10$ to be enough most of the time. It got a bit higher than that but nowhere near $\sqrt{N} \gt 31.$

[855, 752, 433], [8, 7, 7]
[872, 804, 715], [4, 0, 5]
[862, 647, 603], [7, 1, 9]
[764, 731, 897], [7, 14, 13]
[352, 811, 776], [12, 8, 7]
[285, 653, 876], [13, 11, 6]
[334, 502, 752], [7, 10, 9]
[840, 788, 333], [15, 2, 15]
[48, 476, 627], [6, 1, 2]
[526, 55, 580], [7, 6, 7]

Allowing $h$ to have entries in the range $[(1-p)/2,(p-1)/2]$ should (and does in similar experiments) make the max about half as big.

OLDER: If $h\in \mathbf{Z}^s$ then $h$ might be said to have length $s$. Obviously that is not what you mean, but what do you mean? The separation between the first and last non-zero entries? The sum of the squares of the entries? For either of those there is a "short" solution when $s \gt 2\sqrt[4]{N}$. Then there are $\binom{s}{2} \ge 2\sqrt{N}+\sqrt[4]{N}$ sums $g_i+g_j$ all falling in an interval of length $2\sqrt{N}.$ Hence some two are equal and there is sure to be an appropriate vector $h$ with all entries $0$ except two $+1$ and two $-1$. If I compute correctly, then, for $s = 2\sqrt[4]{N}$, there is about an $85\%$ chance that there is $i \ne j$ with $g_i=g_j$ which allows only two non-zero entries, each $\pm 1$. For larger $s$ this becomes highly likely. The cases above have $h\cdot g=0$ in $\mathbf{Z}$. If $s \gt \log_2{N}$ then there will have to be (disjoint) subsets with equal sum $\mod N$ and hence some appropriate vector $h$ with all entries $-1,0,1$. Probably we can keep $s$ much smaller and have such a solution with high probability.
With $s \gt (1+\epsilon)\log_2{N}$ and $g_i$ chosen from $[0,N-1]$ one could even be sure to have an equal sum in $\mathbf{Z}$.

Later: Thanks for clarifying. My argument for magnitude $1$ when $s \gt \log_2{N}$ still applies. I am not sure what happens if the vector $h$ needs entries from $\mathbf{N}.$ That seems more natural (npi) to me. Perhaps you do want to just choose the $g_i$ from $[0,N-1]$, otherwise the choice of $I_i$ matters. Perhaps you meant fixed $s$, although it seems likely to depend on $s$. For $s=1$ one has the uniform distribution in $[0,N-1]$ or $[0,\frac{N}{2}]$ (non-negative vs integer case).

Great answer! Still, I'm actually interested in the regime where $s$ is significantly smaller than $\log(N)$, say smaller than $\log(N)^a$ where $a<1$. In this regime counting arguments may not work. – Lior Eldar Sep 25 '13 at 8:15

For very small $s$ the right bound is $N^{1/s}$ or about half that if $h$ has entries in the range $(-p/2,p/2)$. – Aaron Meyerowitz Sep 25 '13 at 9:09
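For readers who want to rerun the kind of experiment reported in the answer, here is a small brute-force sketch. The answerer does not show code, so this is an assumed implementation: it enumerates non-negative vectors $h$ with max-entry at most a growing bound $M$ until a non-zero one satisfies $h \cdot g \equiv 0 \pmod{N}$, which by construction has the minimal possible max-entry.

import itertools
import random

def minimal_h(g, N):
    """Return a non-zero vector h with non-negative entries and the smallest
    possible max-entry such that sum(h[i]*g[i]) is divisible by N."""
    s = len(g)
    M = 1
    while True:
        for h in itertools.product(range(M + 1), repeat=s):
            if any(h) and sum(hi * gi for hi, gi in zip(h, g)) % N == 0:
                return list(h)
        M += 1

random.seed(0)
N, s = 1009, 3
for _ in range(5):
    g = [random.randrange(1, N) for _ in range(s)]
    print(g, minimal_h(g, N))

Unlike the question, which draws each $g_i$ from a sub-interval of size $\sqrt{N}$, this sketch draws from the full range, matching the answerer's experiment.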
{"url":"http://mathoverflow.net/questions/143064/short-lattice-vectors-orthogonal-to-a-random-vector","timestamp":"2014-04-20T06:41:51Z","content_type":null,"content_length":"57811","record_id":"<urn:uuid:209ef834-882b-4ea4-81e0-58ff9900df39>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
Look at triangle PQR. Which other segment is certain to pass through O?
the perpendicular bisector of side RQ
the segment that connects point R to the midpoint of side PQ
the segment that divides angle PRQ into two congruent angles
the segment from point O to side PR that meets PR at a right angle
{"url":"http://openstudy.com/updates/4edea9b0e4b05ed8401ab353","timestamp":"2014-04-25T07:03:29Z","content_type":null,"content_length":"33464","record_id":"<urn:uuid:314206c4-ad3d-48ec-bfc3-dca2da359b9a>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
Deck the halls with τ of holly, formula-la-laaa! You're reading: Irregulars Deck the halls with τ of holly, formula-la-laaa! By Alistair Bird. Posted in Irregulars Christmas is a time for giving, celebrating, family and magic. But did you know it’s also a time for equations? Department store Debenhams has decided to honour this recent Christmas tradition by tasking at least two members of Sheffield University’s undergraduate maths society to come up with formulae for ‘a perfectly decorated Christmas tree‘, picked up by The Sun, The Metro and others. Previous festive howlers include ‘the formula for the perfect family Christmas‘ (sponsored by The Children’s Society to promote a book) and a prior stab at ‘the equation for the ideal Christmas tree‘ (sponsored by B&Q), which are just nonsensical strings of abbreviations. However, unlike those examples of naff-ematics, the Sheffield tree-decorating equations make enough sense for me to take a critical, overly-serious look at them on their own merits, and show how you might begin to come up with something more rational. Firstly, I can’t see much at fault with the equation: \[\text{Height of star or fairy (in cm)} = \frac{1}{10} \times \text{height of tree (in cm)}.\] The idea that the two heights should be proportional seems reasonable, though you may wish to quibble over the apparently arbitrary constant. More importantly, the dimensions on either side of the equation are consistent, relating one length to another. Most ‘formula for’ stories fail even a rudimentary dimensional analysis. However, the unit of measurement provided (centimetres) isn’t needed—you could just as easily perform the calculation in metres or inches. This dimensional consistency doesn’t hold for the next equation: \[\text{Number of baubles} = \frac{\sqrt{17}}{10} \times \text{height of tree (in cm)}.\] To take this equation seriously you would have to believe that $\frac{\sqrt{17}}{10}$ is some sort of universal bauble density constant giving the ideal number per centimetre of tree height, independent of the tree’s width. The principle that “it is better to be vaguely right than exactly wrong”^1 applies here. Since you can’t be precise about aesthetics, leaving a square root in the equation is merely a ploy to seem more mathematical. Debenhams claim the formulae are being rolled out for use by their personal shoppers nationwide. If you really were creating an equation for the shop floor, you would want something simple enough to be worked out by mental arithmetic or quickly tapped into a calculator. The constant here could have been approximated to $\frac{2}{5}$ or $0.4$. The reason that these types of press-released equations fall outside the realm of mathematics is not the subject matter, but the lack of any reasoning or logical justification. Some mathematicians happily publish recreational and rigorous papers about, for instance, the optimal arrangement of a dartboard—and are able to question each other’s assumptions. If I were pressed to come up with a similar formula for the number of baubles, I would start by modelling the Christmas tree as a solid cone with radius $r$ and height $h$. I might then assume the number of baubles is proportional to the surface area of the tree (you could perhaps argue that the volume is a more realistic quantity, as baubles can be placed ‘inside’ the cone representing my tree). 
This would give: \[\text{Number of baubles} \propto \pi r \sqrt{r^2 + h^2}.\] If you simplistically assume your Christmas tree's radius (which I think can actually vary quite a bit) is proportional to its height, then you can gather up all the nasty constants to give: \[\text{Number of baubles} \propto h^2.\] But to decide on a constant for baubles per unit surface area, you still need some decorated Christmas trees that you find aesthetically pleasing, whose measurements you can substitute into the equation. It's not for me to decide. In this case, Debenhams may want to see how a professional designer decorated trees of different sizes to check how well the assumptions worked out. The final two equations are: \[\begin{aligned} \text{Length of lights} &= \pi \times \text{height of tree}; \\ \text{Length of tinsel} &= \frac{13\pi}{8}\times\text{height of tree}. \end{aligned}\] The fact that these formulae involve $\pi$ appears reasonable, given the circular cross-section of Christmas trees. However, if you wanted to model the length of tinsel or lights as a conical spiral with $n$ turns, the precise formula would be: \[ \text{Arc length} = \frac{h}{2} \sqrt{1+r^2(1+2\pi n)^2} + \frac{h(1+r^2)}{2 \pi n} \sinh^{-1} \left(\frac{2 \pi n r}{\sqrt{1+r^2}} \right). \] One nice thing about mathematics is that so much of it has been done before. This means that it's easy to come up with a sensible equation for even the most frivolous of situations. Here, I didn't have to derive the formula myself from the parametric equations for the curve: I put a bit of trust in Wolfram MathWorld, and adapted the arc length they give for a conical spiral, substituting the angular frequency for $\frac{2 \pi n}{h}$. It's not particularly pretty, but at least I've provided enough information for you to check the equation yourself. Simplicity is also important if you need to convince a non-expert to adopt your mathematical idea, without having to take it on faith. Though the equation I've given is complicated enough to warrant a computer program, the ideas behind it are hopefully easy enough to understand. By knowing what the assumptions are, you could say "I like my lights to dangle instead of following the spiral exactly, so I'll add a bit more length". Mathematics in the media doesn't have to be like this, though I'm guessing these students didn't expect their work to be plastered all over the press. If you want more useful festive formulae, you could do worse than the maths for the perfect Christmas present wrapping (sponsored by Amazon.co.uk), which looks dubious when written up in a newspaper, but which does contain mathematical content. Kudos though for whoever came up with the 'treegonometry' pun in the Christmas tree press release.
1. "Logic: Deductive and Inductive" by Carveth Read
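As a postscript, here is a short numerical sketch of the cone-based bauble model alongside the press-release rules of thumb for lights and tinsel. The tree height, radius and the baubles-per-unit-area constant are illustrative guesses, not values from the article.

import math

def bauble_count(height_cm, radius_cm, baubles_per_cm2=0.002):
    """Baubles taken as proportional to the cone's lateral surface area."""
    lateral_area = math.pi * radius_cm * math.sqrt(radius_cm**2 + height_cm**2)
    return baubles_per_cm2 * lateral_area

def debenhams_lengths(height_cm):
    """The press-release rules of thumb: lights = pi*h, tinsel = 13*pi/8*h."""
    return math.pi * height_cm, 13 * math.pi / 8 * height_cm

h, r = 180.0, 60.0   # a 180 cm tree with a 60 cm base radius
lights, tinsel = debenhams_lengths(h)
print(round(bauble_count(h, r)), "baubles")
print(round(lights), "cm of lights,", round(tinsel), "cm of tinsel")

With this particular (guessed) density the cone model lands close to the press release's baubles figure for a 180 cm tree, which is exactly the sort of calibration against real decorated trees suggested above.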
{"url":"http://aperiodical.com/2012/12/deck-the-halls-with-tau-of-holly/","timestamp":"2014-04-21T05:07:03Z","content_type":null,"content_length":"47100","record_id":"<urn:uuid:53a4cc12-76c4-479f-bade-d31e78f7bd1d>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
Analyzing WiMAX Modulation Quality
Matlab/Simulink can be a useful tool in analyzing the effects of signal impairments, such as phase noise, on the modulation quality of single-carrier and multicarrier OFDM WiMAX transmitters. Advanced wireless communications standards rely on complex modulation schemes to achieve high bandwidth efficiency. WiMAX, for example, employs the orthogonal frequency division multiplexing (OFDM) technique.^1 Due to many challenges with such complex modulation formats, sophisticated simulation and verification tools and approaches are required to achieve optimum system-level performance. For example, the transmit modulation accuracy depends on the transmit filter accuracy, digital-to-analog converter (DAC) performance, in-phase/quadrature (I/Q) imbalances, phase noise, and transmitter nonlinearity. To better understand the effects of various performance parameters on WiMAX, a simulation model based on Matlab/Simulink software from The MathWorks was developed to analyze the impact of phase noise, peak-to-average power ratio (PAPR), transmitter (Tx) nonlinearity, and I/Q mismatches on the transmitter modulation quality. The WiMAX 64-state quadrature-amplitude-modulation (64QAM) OFDM (IEEE 802.16 OFDM) standard was used for this example because it imposes the highest signal-to-noise-ratio (SNR) requirements. The Simulink model consists of the major function blocks shown in Fig. 1(a): the 64QAM modulator, the OFDM modulator, the RF transmitter (Tx), the OFDM receiver (Rx), and the 64QAM demodulator. According to the model, random binary data is first 64QAM-modulated, and then OFDM-modulated to divide the transmission bandwidth into 192 narrow subchannels; the data are then transmitted in parallel over these subchannels at a relatively low rate. Running at this low data rate plus adding the cyclic prefix helps to combat delay spread. The OFDM modulator creates an OFDM symbol through an inverse Fast Fourier Transform (IFFT) by means of these 192 data subcarriers along with an additional 28 lower-frequency null guard subcarriers and 27 higher-frequency null guard subcarriers, 8 pilot subcarriers, and one DC null subcarrier. The OFDM signal then is frequency upconverted to 2.35 GHz and further amplified to +23 dBm before it is sent to the receiver (Rx). The PAPR is measured before the signal is OFDM-demodulated. Figure 1(b) shows the RF Tx with a direct frequency-upconversion architecture and three key components: the frequency upconverter (UPC), the pre-power amplifier (PPA), and the power amplifier (PA). The error vector magnitude (EVM) and relative constellation error (RCE) are measured after the signal is OFDM-demodulated. Then the signal is further 64QAM-demodulated before bit-error-rate (BER) monitoring. The Simulink model does not include the channel coding for accurate BER monitoring. The OFDM parameters are listed in Table 1. Figure 1(b) addresses an approach to modeling the nonlinearity of the RF Tx, which is a major source of SNR impairment in the system. When multitone signals pass through the RF Tx, intermodulation components are generated, which cause distortion to the signal. The nonlinearity models for the UPC and PPA use a static nonlinearity model specified by the third-order intercept point (IP3). The nonlinearity model for the PA is the well-known Rapp model,^2 as shown in Eq. 1:
v[out] = v[in] / [1 + (v[in]/V[sat])^(2P)]^(1/(2P))   (Eq. 1)
where V[sat] = the saturation voltage level and P = the knee parameter.
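A minimal numerical sketch of the Rapp AM-AM characteristic of Eq. 1 follows. It is an illustration in plain Python rather than the article's Simulink blocks, and the saturation voltage and drive levels are arbitrary example values.

def rapp_am_am(v_in, v_sat, p):
    """Rapp model output envelope: nearly linear at small drive, clipping near v_sat."""
    return v_in / (1.0 + (v_in / v_sat) ** (2 * p)) ** (1.0 / (2 * p))

v_sat, knee = 1.0, 2.0                 # knee parameter P = 2, as in the article's model
for v in (0.1, 0.5, 1.0, 2.0, 4.0):
    print(f"v_in = {v:4.1f}  ->  v_out = {rapp_am_am(v, v_sat, knee):.3f}")

The printout shows the envelope tracking the input at low drive and saturating toward v_sat at high drive, which is the behavior the knee parameter controls.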
The Rapp model addresses amplitude-to-amplitude (AM-AM) distortion but not amplitude-to-phase (AM-PM) distortion. According to previous work, it has been stated that in the sub-5 GHz band, typical values of the knee parameter are in the range of 2 to 4.^3 In this model example, a knee parameter of 2 was used. The next step in the Simulink model development is to address two key Tx requirements: the RCE and the EVM, according to the WiMAX standard.^1 These two parameters are also the metrics for evaluating the modulation quality and the impact of distortion on WiMAX system performance throughout this article. Both EVM and RCE are defined as the vector difference between the actual and ideal symbol position as shown on an I/Q constellation diagram. Their relationship can be expressed by Eq. 2:
RCE (in dB) = 20 x log[10](EVM)   (Eq. 2)
where EVM is expressed as a fraction. The Tx RCE is dictated by the required Rx SNR for a certain modulation and error correction scheme, to ensure that the error contribution to system performance is small. In the case of the WiMAX OFDM Rx, since 64QAM modulation with a channel code rate of 3/4 requires the largest SNR (21 dB) for a BER of 1 x 10^-6, the 64QAM Tx requires the smallest RCE (-31 dB), corresponding to an EVM of 2.8 percent. This required Tx RCE of -31 dB only causes an SNR degradation of 0.41 dB, obtained by adding the Tx error power to the Rx noise power: 10 x log[10][1 + 10^((21 - 31)/10)] = 0.41 dB. The following sections will use the Matlab/Simulink model to address the impact on Tx design caused by Tx nonlinearity, PAPR, local oscillator (LO) phase noise, I/Q gain/phase mismatch, and DAC parameters. The LO phase noise degrades the SNR much like additive noise. In the UPC, the phase noise is superimposed on all subcarriers when the OFDM modulation signal is mixed with the LO signals. The required Tx RCE dictates the integrated LO double-sideband (DSB) RMS phase-noise (F[rms]) requirement, which is -31 dB for the 64QAM OFDM mode with a 3/4 coding rate. Parameter F[rms] has been widely used to calculate the phase noise degradation to the SNR for a single-carrier (SC) system. Earlier reports claimed that OFDM systems were orders of magnitude more sensitive to phase noise than SC systems.^4,5 However, later articles claimed the opposite: that the SNR degradation caused by phase noise is the same in OFDM and SC systems.^6,7 The Matlab/Simulink model can be useful in modeling the impact of LO phase noise on OFDM systems. The phase-noise profile in Fig. 2a^8 is used here as the reference phase noise (RPN), and the phase noise that is 30 dB better than RPN at each frequency offset is labeled "30-dB better" in Table 2. The calculated F[rms] with RPN is -45.37 dB using the method in ref. 9. In test case 1, the contribution to the SNR distortion is all from the LO phase noise, since the UPC, PPA, and PA are configured to have no nonlinearity by setting the output third-order intercept point (OIP3) and saturated output power (P[sat]) to +100 dBm. The simulated RCE is -48.35 dB as in Table 2, which is almost the same as the calculated F[rms] value. Therefore, the Matlab/Simulink model shows that the phase noise causes almost the same SNR degradation to both multicarrier and single-carrier systems. It is worthwhile to mention that some algorithms have been used for tracking, estimating, and correcting the common phase error (CPE) of the LO phase noise using pilot signals, but this is beyond the scope of this report. A major problem with the OFDM modulation is its relatively high PAPR, which requires high IP3 specifications for the RF Tx.
When the number of subcarriers is large, the subcarrier signals are uncorrelated and their amplitudes add to produce the peak power. When all individual subcarriers reach their peak at the same time, it could produce a maximum PAPR equal to the number of used data and pilot subcarriers, N[used] + N[pilot], or 10 x log[10](200), about 23 dB, in WiMAX OFDM systems. In reality, this maximum PAPR rarely occurs. Figure 3 shows the simulated complementary cumulative distribution function (CCDF) for 64QAM OFDM envelope signal powers with 18,992,933 samples collected. The probability of occurrence of signal peaks that are 12.4 dB greater than the average power (20.5 dB) is only 2 x 10^-6. To guarantee the system BER to be less than 1 x 10^-6 at a certain PAPR, the probability of that PAPR should be less than 2 x 10^-6. Therefore, a 12.4-dB PAPR should be appropriate for 64QAM WiMAX if a system BER of 1 x 10^-6 is to be achieved. Typically, PAs are the main sources of Tx nonlinearity. In WiMAX systems, PAs must deliver high power levels with stringent linearity requirements and handle a high PAPR. Otherwise, the transmitted signals may suffer spectral spreading and in-band distortion. Achieving such PA linearity results in a trade-off in efficiency. Therefore, finding the maximum allowable nonlinearity is very important for WiMAX applications. In test case 2, the LO phase noise is configured as "30-dB better" to reduce any impairments due to phase noise. The IP3s of the UPC and PPA are reduced to +37 and +38 dBm, respectively, and the PA's P[sat] is reduced to +32.35 dBm. The output 1-dB compression point (OCP1) was measured to be +30.2 dBm. Since the Tx output power is +23 dBm and the PAPR is 11.7 dB, the peak power could be as high as +34.7 dBm, or 4.5 dB above the OCP1. The simulated EVM is 2.489 percent. The SNR distortion at OCP1 could be slightly different for different PA designs. Therefore, other than knowing the PAPR value, building a PA model that matches the design will be useful for deciding the power backoff from the OCP1. In test case 3, P[sat] is further reduced to +29 dBm to increase the nonlinearity. The simulated EVM and RCE are 6.7 percent and -23.5 dB, a severe degradation due to the nonlinearity. The obvious spectral regrowth could also be observed from the plots in Fig. 4. The signal transmitted through the RF Tx could experience signal distortion due to gain and phase mismatches. The imperfections are typically generated by relative differences between the transceiver I and Q signal branches, from the DAC to all analog components in the RF Tx. In Fig. 1a, the I/Q imbalance block following the OFDM modulator adds either of those two imperfections to the 64QAM OFDM modulation signal. In test cases 4 and 5, the IP3 and P[sat] are set to +100 dBm to eliminate the nonlinearity and the "30-dB better" phase noise is applied to reduce the effects of LO phase noise on system distortion. The SNR degradation due to the 0.15-dB amplitude mismatch and 0.5-deg. phase mismatch is simulated as shown in Table 2. For test case 7, all impairments are applied (OCP1 = +26.8 dBm), including LO phase noise, nonlinearities, and I/Q gain and phase mismatches, and the simulated EVM is 2.72 percent while the simulated RCE is -31.29 dB, which are slightly better levels than the WiMAX specifications.
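The decibel arithmetic used throughout these test cases is compact enough to restate as a short script. The sketch below reproduces the EVM-to-RCE conversion of Eq. 2 and the roughly 0.41-dB SNR degradation quoted earlier; it is plain arithmetic for checking the quoted figures, not part of the Simulink model.

import math

def evm_to_rce_db(evm_fraction):
    """Eq. 2: RCE in dB from EVM expressed as a fraction (e.g. 0.028 for 2.8%)."""
    return 20.0 * math.log10(evm_fraction)

def snr_degradation_db(required_snr_db, rce_db):
    """Extra SNR loss when the Tx error power adds to the receiver noise power."""
    return 10.0 * math.log10(1.0 + 10.0 ** ((required_snr_db + rce_db) / 10.0))

print(round(evm_to_rce_db(0.028), 1))             # about -31.1 dB
print(round(snr_degradation_db(21.0, -31.0), 2))  # about 0.41 dB

As a cross-check, the test case 7 EVM of 2.72 percent converts to about -31.3 dB with the same formula, consistent with the simulated RCE of -31.29 dB.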
In reality, some tolerance should be allowed for performance variations due to changes in temperature, process, and voltage, as well as for impairments from the DAC and the filters, which are not included in the Simulink model. The RCE for test case 6 can be estimated from the results of test cases 1, 2, 4, and 5, as shown in Eq. 5, where RCE[n] is the measured RCE for test case n.

In summary, the signal-quality requirements for a WiMAX Tx were analyzed using Matlab/Simulink software. The simulator makes it possible to study the impact of impairments on Tx modulation quality for a 64QAM OFDM WiMAX system. The analysis showed that LO phase noise has almost the same impact on multicarrier and single-carrier systems. The simulation produced a CCDF of the WiMAX OFDM PAPR, showing a PAPR of 12.4 dB to be an appropriate power backoff for WiMAX.

1. IEEE Standard 802.16-2004, "Part 16: Air interface for fixed broadband wireless access systems," October 2004.
2. C. Rapp, "Effects of HPA-Nonlinearity on a 4-DPSK/OFDM-Signal for a Digital Sound Broadcasting System," Proceedings of the 2nd European Conference on Satellite Communications, Liege, Belgium, October 22-24, 1991, pp. 176-184.
3. Tal Kaitz, BreezeCOM, "Performance aspects of OFDM PHY proposal," IEEE 802.16.3c-01/49, March 14, 2001.
4. T. Pollet, M. van Bladel, and M. Moeneclaey, "BER sensitivity of OFDM systems to carrier frequency offset and Wiener phase noise," IEEE Transactions on Communications, Vol. 43, February/March/April 1995, pp. 191-193.
5. A. Garcia Armada and M. Calvo, "Phase noise and subcarrier spacing effects on the performance of an OFDM communication system," IEEE Communications Letters, Vol. 2, No. 1, January 1998.
6. M. Moeneclaey, "The effect of synchronization errors on the performance of orthogonal frequency-division multiplexed (OFDM) systems," Proceedings of COST 254 (Emergent Techniques for Communication Terminals), Toulouse, France, July 1997.
7. A. Garcia Armada, "Understanding the effects of phase noise in orthogonal frequency division multiplexing (OFDM)," IEEE Transactions on Broadcasting, Vol. 47, June 2001, pp. 153-159.
8. Cecile Masse, "A 2.4-GHz Direct-Conversion Transmitter for WiMAX Applications," 2006.
9. Jonathan Y. C. Cheah, "Analysis of Phase Noise in Oscillators," RF Design, November 1991, pp. 99-105.
10. Hyunchul Ku and J. Stevenson Kenney, "Behavioral Modeling of Nonlinear RF Power Amplifiers Considering Memory Effects," IEEE Transactions on Microwave Theory and Techniques, Vol. 51, No. 12, December 2003.
11. L. Ding, "Digital Predistortion of Power Amplifiers for Wireless Applications," Ph.D. thesis, Georgia Institute of Technology, March 2004.
{"url":"http://mwrf.com/test-and-measurement/analyzing-wimax-modulation-quality","timestamp":"2014-04-17T05:18:11Z","content_type":null,"content_length":"89810","record_id":"<urn:uuid:470757b1-f036-4684-8725-f8f1d5331712>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
NCERT CBSE Class 8th Maths : Percentage

1. (i) There are 5 oranges in a basket of 25 fruits. The percentage of oranges is ___ (A) 5% (B) 25% (C) 10% (D) 20%
(ii) 2/25 = _______ %. (A) 25 (B) 4 (C) 8 (D) 15
(iii) 15% of the total number of biscuits in a bottle is 30. The total number of biscuits is _______. (A) 100 (B) 200 (C) 150 (D) 300
(iv) The price of a scooter was Rs 34,000 last year. It has increased by 25% this year. Then the increase in price is Rs _______. (A) Rs 6,500 (B) Rs 8,500 (C) Rs 8,000 (D) Rs 7,000
(v) A man saves Rs 3,000 per month from his total salary of Rs 20,000. The percentage of his savings is _______. (A) 15% (B) 5% (C) 10% (D) 20%
2. (i) 20% of the total number of litres of oil is 40 litres. Find the total quantity of oil in litres.
(ii) 25% of a journey covers 5,000 km. How long is the whole journey?
(iii) 3.5% of an amount is Rs 54.25. Find the amount.
(iv) 60% of the total time is 30 minutes. Find the total time.
(v) 4% sales tax on the sale of an article is Rs 2. What is the amount of sale?
3. Meenu spends Rs 2,000 from her salary for recreation, which is 5% of her salary. What is her salary?
4. 25% of the total mangoes, which are rotten, is 1,250. Find the total number of mangoes in the basket. Also, find the number of good mangoes.
5. What percent is 15 paise of 2 rupees 70 paise?
6. Find the total amount if 12% of it is Rs 1,080.
7. 72% of 25 students are good in Mathematics. How many are not good in Mathematics?
8. Find the number which is 15% less than 240.
9. The price of a house is decreased from Rupees Fifteen lakhs to Rupees Twelve lakhs. Find the percentage of decrease.
10. 15 sweets are divided between Sharath and Bharath, so that they get 20% and 80% of them respectively. Find the number of sweets got by each.
11. A school cricket team played 20 matches against another school. The first school won 25% of them. How many matches did the first school win?
12. The marked price of a toy is Rs 1,200. The shopkeeper gave a discount of 15%. What is the selling price of the toy?
13. In an interview for a computer firm 1,500 applicants were interviewed. If 12% of them were selected, how many applicants were selected? Also find the number of applicants who were not selected.
14. An alloy consists of 30% copper and 40% zinc, and the remaining is nickel. Find the amount of nickel in 20 kilograms of the alloy.
15. Ram and Shyam contested for the election to the Panchayat committee from their village. Ram secured 11,484 votes, which was 44% of the total votes. Shyam secured 36% of the votes. Calculate (i) the number of votes cast in the village and (ii) the number of voters who did not vote for either contestant.
16. A man spends 40% of his income for his food, 15% for his clothes and 20% for house rent and saves the rest. How much percent does he save? If his income is Rs 34,400, find the amount of his savings.
17. Jyothika secured 35 marks out of 50 in English and 27 marks out of 30 in Mathematics. In which subject did she get better marks and by how much?
18. A worker receives Rs 11,250 as bonus, which is 15% of his annual salary. What is his monthly salary?
19. The price of a suit is increased from Rs 2,100 to Rs 2,520. Find the percentage of increase.
20. If 25% of students in a class come to school by walk, 65% of students come by bicycle and the remaining percentage by school bus, what percentage come by school bus?
21. In a particular class of students, 30% of them take Hindi, 50% take Tamil and the remaining take French as their second language. What percent take French as their second language?
22. In a city, 30% are females, 40% are males and the remaining are children. What percent are children?
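For anyone checking answers, a few of the word problems above can be verified with a short script like the one below (a quick illustrative check only; it covers questions 3, 5, 6 and 8).

```python
# Q3: Rs 2,000 is 5% of the salary, so salary = 2000 / 0.05
print(2000 / 0.05)        # 40000.0  -> Rs 40,000

# Q5: 15 paise as a percentage of 2 rupees 70 paise (= 270 paise)
print(100 * 15 / 270)     # 5.55...  -> 5 5/9 %

# Q6: 12% of the amount is Rs 1,080, so the amount = 1080 / 0.12
print(1080 / 0.12)        # 9000.0   -> Rs 9,000

# Q8: the number which is 15% less than 240
print(240 * (1 - 0.15))   # 204.0
```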
{"url":"http://cbsemathstudy.blogspot.com/2011/08/ncert-cbse-class-8th-maths-percentage.html","timestamp":"2014-04-18T18:11:10Z","content_type":null,"content_length":"91846","record_id":"<urn:uuid:48e842e7-9487-48a0-8488-78e09c03e3ba>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
Chi-square approximations for censored data

If you are working with censored data (that is, you only know a lower bound for the value of some observations), can you still use the chi-square approximation for the statistic $D \equiv -2r(\theta_0) \approx \chi_{(1)}^2$, where $\theta_0$ is the true parameter of the distribution?

st.statistics pr.probability probability-distributions
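One way to probe this numerically is a small Monte Carlo sketch. It is only an illustration under specific assumptions: exponential lifetimes with Type I right-censoring at a fixed time, and r(theta_0) taken to be the log-likelihood ratio l(theta_0) - l(theta_hat), so that D = 2[l(theta_hat) - l(theta_0)].

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
theta0 = 1.0     # true rate of the exponential lifetimes (assumed model)
c = 1.5          # fixed censoring time: values above c are only known to exceed c
n, reps = 200, 5000

def loglik(lam, t, d):
    # d = 1 for fully observed lifetimes, 0 for censored ones; censored
    # observations contribute only the log-survival term -lam*c.
    return np.sum(d * (np.log(lam) - lam * t) - (1 - d) * lam * c)

D = np.empty(reps)
for i in range(reps):
    x = rng.exponential(1.0 / theta0, n)
    d = (x <= c).astype(float)
    t = np.minimum(x, c)
    lam_hat = d.sum() / t.sum()        # MLE of the rate under censoring
    D[i] = 2.0 * (loglik(lam_hat, t, d) - loglik(theta0, t, d))

qs = [0.5, 0.9, 0.95, 0.99]
print("simulated quantiles of D:", np.round(np.quantile(D, qs), 2))
print("chi-square(1) quantiles: ", np.round(stats.chi2(1).ppf(qs), 2))
```

In this particular setup the simulated quantiles track the chi-square(1) quantiles closely, but that is of course a check of one censoring mechanism, not a general answer.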
{"url":"http://mathoverflow.net/questions/132862/chi-square-approximations-for-censored-data","timestamp":"2014-04-16T07:13:35Z","content_type":null,"content_length":"44899","record_id":"<urn:uuid:87d28388-2f6c-4814-882c-5ea3a94efcdc>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
Rota reflects on math and mathematicians

Behind most great science and engineering discoveries stands the work of a host of mathematicians. But their research, like the support of a silent consort, often goes unrecognized. Gian-Carlo Rota, professor of applied mathematics and philosophy -- the sole MIT claimant to that title -- is not silent. His interest in communicating with mathematicians, as well as the rest of us, has manifested itself in some very public ways. He spoke at Family Weekend this year ("Ten Predictions about Science") and last year ("Ten Lessons of an MIT Education"), was the Killian Lecturer in 1997 ("Mathematical Snapshots"), speaker at the Provost's Seminar in 1998 ("Ten Remarks on Husserl and Phenomenology"), and presenter of the 1998 American Mathematical Society (AMS) Colloquium Lectures -- a series of three talks presented each year by one of the world's most eminent mathematicians, according to the AMS.

Professor Rota also engages people, very graciously, on an individual level. He's known among students for his accessibility and his clear presentation of material in his math and philosophy courses. He's also respected for his deep understanding of those subjects and revered for his love of communicating.

"The first course of his I took changed my life. It's the only class at MIT that really has done that," said Eric "Krevice" Prebys, a senior in math and computer science. "He helped me to see the world in a totally different way. And that's what I wanted out of college." Mr. Prebys took Professor Rota's Introduction to Phenomenology (24.171) his sophomore year, and later, Probability (18.313) -- "the best probability course at MIT." He is currently enrolled in Professor Rota's phenomenology course on Martin Heidegger, Being and Time (24.172).

Professor Richard Stanley of mathematics, a former student of Professor Rota, credits his advisor with having transformed combinatorics, their mathematical specialty, from a "Mickey Mouse area" of research into a "respectable subject." Professor Stanley was one of the organizers of Rotafest, a four-day conference on combinatorics held at MIT in 1996 honoring Professor Rota's 64th birthday.

Of course, not everyone loves Professor Rota. His latest book, Indiscrete Thoughts (Birkhäuser, 1997), includes essays that debunk the "myth of monolithic personality" through sketches of the lives of notable mathematicians. When first published, one mathematician wrote that he would not speak to Professor Rota again; another threatened a lawsuit.

In an interview with MIT Tech Talk, Professor Rota shared his ideas about mathematicians, the mathematics profession and why they remain poorly understood.

What's it like to be a mathematician?
It's the least rewarding profession except one: music. Musicians live an impoverished life. Mathematicians -- for what they do -- are really poorly rewarded. And it's a very competitive field, almost as bad as being a concert pianist. You've got to be really an egoist. You've got to be terribly self-centered.

Why are there so few women in the field?
Women are more realistic than men -- they can see that it's a flight from reality. What they don't see is that it's a flight from reality that works. The distribution of mathematics talent among men and women is exactly the same. But in 40 years of teaching I've seen really good women mathematicians leave the profession, including one very close friend, to my great chagrin.

Why don't we hear about the work of mathematicians?
Mathematicians have bad personalities. They're snobs. Among them, and at MIT, there's a tendency to judgment: people who don't write formulas are tolerated. Mathematicians also make terrible salesmen. Physicists can discover the same thing as a mathematician and say 'We've discovered a great new law of nature. Give us a billion dollars.' And if it doesn't change the world, then they say, 'There's an even deeper thing. Give us another billion dollars.'

Are mathematicians really so different from other scientists and engineers?
The more experimental scientists and engineers are, the more common sense they have, and so on until you get to the mathematicians, who are totally devoid of common sense.

What do mathematicians do?
They work on problems. There are historical problems floating around. You are in competition with people who came before you. Sometimes you discover the competition wasn't that good after all.

How do they choose the problems?
People like to think that scientists see a need and try to solve that problem. Engineers may work that way. But in math, you don't have an application when you work on a problem. It's not the need prompting the science. The reality is, it's the other way around. You say to yourself, 'I have a feeling there's something to this problem' and you work on it, but not alone. Many people throughout history work on a single problem, not a "lone genius." That's another phony-baloney theory.

And once the problem has been solved?
Applications are found after the theory is developed, not before. A math problem gets solved, then by accident some engineer gets hold of it and says, 'Hey, isn't this similar to...? Let's try it.' For instance, the laws of aerodynamics are basic math. They were not discovered by an engineer studying the flight of birds, but by dreamers -- real mathematicians -- who just thought about the basic laws of nature. If you tried to do it by studying birds' flight, you'd never get it. You don't examine data first. You first have an idea, then you get the data to prove your idea.

What is combinatorics?
Combinatorics is putting different-colored marbles in different-colored boxes, seeing how many ways you can divide them. I could rephrase it in Wall Street terms, but it's really just about marbles and boxes, putting things in sets. Actually, some of my best students have gone to Wall Street. It turns out that the best financial analysts are either mathematicians or theoretical physicists. We're also interested in the mathematical properties of knotting and braiding. Someone in 1910 started with knots. You take one, cut it and you get a braid. It's actually one of the hottest topics in math today and holds the secret to a number of problems (I have a gut feeling). If we understand braids well enough, we'll solve all the problems of physics.

Do these have applications for other sciences?
Protein folding is very closely related to this process. But biologists are just at the beginning. As they get deeper and deeper into the DNA structure, they'll need so much mathematical theory they'll have to become mathematicians. There aren't more than two or three people right now who know both math and biology. It takes a tremendous effort.

What sorts of problems have combinatorics solved in the past?
One example is quantum mechanics, which was discovered 30 years ago. The mathematics behind quantum mechanics had been worked out 20 years before by a mathematician who didn't know what it was good for.

What would you like to tell the public about math and science?
Basic science is essential. The need for public relations is essential. We won't survive -- continue to get funding -- without it. People think we've got enough basic science. But the fact is, basic science costs so little compared to, say, developing a new kind of submarine. It's a law of nature: the things that get cut first are the least [expensive]. Take [the funding for] the National Endowment for the Arts -- that was peanuts.

A version of this article appeared in MIT Tech Talk on October 28, 1998.
{"url":"http://newsoffice.mit.edu/1998/rota-1028","timestamp":"2014-04-17T16:19:56Z","content_type":null,"content_length":"87699","record_id":"<urn:uuid:780914e3-eca5-4ab5-8399-ca92c4b8f9d8>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
5.1 Activities for quantitative economics

As part of a drive to ensure that students are given a stimulating learning environment, I have tried to develop an interactive approach for level 1 mathematics and statistics units. This captures the interest of students who would not traditionally be able to cope with the rigorous mathematical content in the undergraduate suite of economics degrees. Students are organised into small sub-groups and set tasks to complete that will enhance their understanding of the material being covered. In order to maintain student interest, the tasks vary on a weekly basis; three of the activities used are described below. The feedback on these and other activities has been excellent. Students feel engaged in the process of learning and consistently state that the activities help them to understand the topics.

A key feature of these activities is that they place students in the role of teacher, having to explain to other students how to work out answers to problems. This idea is strongly associated with the work of Palincsar and Brown (1984), who termed it 'reciprocal teaching'. In a long-running programme of research with younger students, they identified key roles in teaching (clarifying, questioning, predicting) and demonstrated the benefits for learning if these roles are undertaken by students. These activities apply this principle in higher education. It should be noted that the activities could be adapted for use in teaching other aspects of economics.

Activity 1 for quantitative methods seminar

Each group receives a set of four questions on topics covered in the previous lecture. In the first half of the seminar, students work through the questions in groups of four. The seminar leader provides advice and further explanation as necessary. Each group is then asked to prepare an overhead projector transparency that shows how they worked out their answer to one of the questions. In the second half of the seminar, one nominated student from each group teaches their method of answering the question to the whole class. All students are free to ask questions following each presentation. Answers to all questions are provided at the end of the seminar. This exercise encourages students to become involved and ensures that all students have worked through every type of question arising from the lecture material. It is repeated in four seminars throughout the semester so that all students have the opportunity to participate in the teaching process.

Activity 2 for quantitative methods seminar

Each student receives a cue card with the equation of a line written on one side and two sets of coordinates written on the other side. Students spend the first 15 minutes of the seminar plotting their line on a graph. Using the graph and the equation, students are then asked to find a student who has a line that intersects with their line. This person becomes their partner for the rest of the seminar. The pairs of students must then find the intersection between their two lines using three different methods. At this point the students are working together and can gain from each other's understanding of the subject. This activity also encourages students to work with different people in the class, creating greater interaction and a more integrated and cohesive learning environment.

Once the students have found the point at which their two lines intersect, they are asked to turn the cue cards over. Using the two sets of coordinates on each card, students are then asked to find the equation of the line that passes through these two points. Thus each student will find the equation of a new line. Finally, students are asked to find the point at which these two new lines intersect.

Activity 3 for quantitative methods seminar

Students are asked to form groups of 4–5 and each group member is assigned a letter between A and E. Each group is given one question to solve that relates to a technique or topic in quantitative economics. Each group focuses on a different technique or topic. Students are given 15 minutes to work through their problem and to ensure that each member of the group understands all aspects of the assigned problem. The groups are then rearranged so that all the students labelled A are working together, all the students labelled B are working together, and so forth. As a result there are now five new groups and each student is working with others who have been answering different questions. Students are required to spend the remainder of the seminar teaching the other students in the group how to successfully solve their assigned problem. By the end of the seminar students have worked with two different groups, have worked through a new mathematical technique, have taught this technique to other students, and have learned from other students how to answer other questions in quantitative economics.
{"url":"http://economicsnetwork.ac.uk/handbook/seminars/51","timestamp":"2014-04-18T13:25:12Z","content_type":null,"content_length":"18259","record_id":"<urn:uuid:ad7639ae-acda-44ad-900d-c8b0eee0b14d>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
Arranging cross shaped particles on a 2D grid

September 23rd 2009, 12:57 AM #1

Suppose we have a 2D grid. This grid has L sites in the x direction, and D sites in the y direction. Further suppose that the y dimension has periodic boundaries, and the x direction is bounded by hard walls. In this L by D grid we place N cross-shaped particles. They are arranged so that every particle occupies 5 grid sites, as shown below (x marks the occupied sites):

o o o o o
o o x o o
o x x x o
o o x o o
o o o o o

The crosses are mutually repulsive, so if you place a cross center in a position (depicted below by C), you cannot place another center in the grid points depicted by x:

o o x o o
o x x x o
x x C x x
o x x x o
o o x o o

It is also impossible to place cross centers at the most extreme grid points in the x direction, as one of their points would be out of the grid. Suppose that the grid is large enough to accommodate N crosses under the aforementioned restrictions.

The question is as follows: with L, D and N known, and freely distributing the crosses, what are the possible values (and possibly probabilities) of the occupation of the last column in the x direction? Rephrasing: how many grid points in the last column of the grid (in the x direction) are occupied? Of course there may be a lot of options, but there is a strict range of possible occupation numbers (and a probability associated with each one).

For example: L = 3, N = n, D >> N. There is only one possible occupation number, which is N.

Another example: L = 4, D = 3, N = 1. There are two options, either 0 or 1, with equal probabilities.

L = 5, D = 3, N = 1. Same two options, 0 or 1, but now with probabilities 2/3 and 1/3 respectively.

Does anyone have an idea of how to attack this problem for general L, D, N?

September 23rd 2009, 05:46 AM #2

You said that the y-dimension has periodic boundaries and the x-direction is bounded by hard walls... does this have anything to do with the question?

September 23rd 2009, 10:22 AM #3

It does. It means that a cross center cannot be placed in points such as (0,y) or (L,y) in the x direction. In the y direction, cross centers can be placed in (x,0) or (x,D), but then the excess part of the cross protrudes from the other side and denies other crosses to be placed in its nearest neighbors.
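Not a closed-form answer, but the small cases can be checked by brute force (a sketch assuming every valid configuration of cross centers is equally likely, which is what the 2/3 and 1/3 in the example imply). Only crosses centered in column L-1 reach the last column, each contributing exactly one occupied site, so the occupation of column L equals the number of centers in column L-1.

```python
from itertools import combinations

def last_column_distribution(L, D, N):
    """Exact distribution of the number of occupied sites in the last column
    (x = L), enumerating all valid placements of N cross centers.
    Centers sit at x in 2..L-1, y in 1..D, with y periodic; two centers
    conflict when their Manhattan distance (with y wrapped) is <= 2."""
    sites = [(x, y) for x in range(2, L) for y in range(1, D + 1)]

    def conflict(a, b):
        dx = abs(a[0] - b[0])
        dy = abs(a[1] - b[1])
        dy = min(dy, D - dy)               # periodic boundary in y
        return dx + dy <= 2

    counts, total = {}, 0
    for combo in combinations(sites, N):
        if any(conflict(a, b) for a, b in combinations(combo, 2)):
            continue
        occ = sum(1 for (x, _) in combo if x == L - 1)
        counts[occ] = counts.get(occ, 0) + 1
        total += 1
    return {k: v / total for k, v in sorted(counts.items())}

# The small examples from the post:
print(last_column_distribution(4, 3, 1))   # {0: 0.5, 1: 0.5}
print(last_column_distribution(5, 3, 1))   # {0: 0.666..., 1: 0.333...}
```

For larger grids the enumeration explodes, but the same distribution can be estimated by Monte Carlo: repeatedly sample N distinct centers uniformly at random and reject configurations that violate the exclusion rule.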
{"url":"http://mathhelpforum.com/discrete-math/103863-arranging-cross-shaped-particles-2d-grid.html","timestamp":"2014-04-19T07:26:42Z","content_type":null,"content_length":"35813","record_id":"<urn:uuid:65d68f42-9b85-40e1-a1af-e7f6dad37238>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
Maximizing with Bounds: Beyond Gradient Ascent

Now consider the use of a bound such as the one proposed above. We have an analytically specified function, f, which we are trying to optimize starting from an initial point x* = x1. Figure 6.1 depicts the bound and the function to be optimized. Note that the contact point where the function meets the bound is at x*. We can now find the maximum of that bound and move the contact point x* to this new locus on the function. This point is now x2 and is used to find another parabola which lies below the function f and touches it at our current operating point, x2. As this process is iterated, we gradually converge to a maximum of the function. This process is illustrated in Figure 6.2. At the function's maximum, however, the only parabola we can draw which is a lower bound is one whose peak is at the function's maximum (i.e. a 'fixed' point). The algorithm basically stops there. Thus, there is a natural notion of a fixed point approach. Such an algorithm will always converge to a local maximum and stay there once locked on.

In the example, the algorithm skipped out of a local maximum on its way to its destination. This is not due to the fact that the local maximum is lower but due to some peculiarity of the bounding approach and its robustness to very local attractors. The bound is shaped by the overall structure of the function, which has a wider and higher local maximum at x_final. We will discuss this trade-off between the amplitude of a local maximum and its robustness (the width of its basin), and its relationship to annealing and global optimization, later on.

Note how these bounding techniques differ from gradient ascent, which is a first-order Taylor approximation. A second-order Taylor approximation is reminiscent of Newton and Hessian methods which contain higher-order derivative data. However, both gradient ascent and higher-order approximations to the function are not lower bounds, and hence using these to maximize the function is not always guaranteed to converge. Basically, gradient ascent is a degenerate case of the parabolic bound approach where the width of the parabola is set to infinity (i.e. a straight line) and the step size is chosen in an ad hoc manner by the user (i.e. infinitesimal) instead of in a more principled way. A parabolic bound's peak can be related to its width, and if this value can be properly estimated, it will never fail to converge. However, selection of an arbitrary step size in gradient ascent cannot be guaranteed to converge. Observe Figure 6.3, which demonstrates how an invalid step size can lead gradient approaches astray. In addition, Figure 6.4 demonstrates how higher-order methods can also diverge away from maxima due to an ad hoc step size. The parabola estimated using Taylor approximations is neither an upper nor a lower bound (rather it is a good approximation, which is a different quality altogether). Typically, these techniques are only locally valid and may become invalid for significantly large step sizes. Picking the maximum of this parabola is not provably correct, and again an ad hoc step size constraint is needed.

However, gradient and Taylor approximation approaches have remained popular because of their remarkable ease of use and general applicability. Bound techniques (such as the Expectation Maximization or EM algorithm [15]) have not been as widely applicable. This is because almost any analytic function can be differentiated to obtain a gradient or higher-order approximation. However, we shall show some important properties of bounds which should illustrate their wide applicability and usefulness. In addition, some examples of nonlinear optimization and conditional density estimation will be given as demonstrations.

Tony Jebara
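To make the contrast with ad hoc step sizes concrete, here is a minimal one-dimensional sketch of the idea, using an assumed curvature bound rather than the bound-placement machinery developed in this chapter. If a bound M on |f''| is known, the parabola q(y) = f(x) + f'(x)(y - x) - (M/2)(y - x)^2 lies below f and touches it at the current point x, so jumping to its peak, x <- x + f'(x)/M, can never decrease f.

```python
import numpy as np

def f(x):
    # Example objective with several local maxima.
    return np.sin(x) + 0.5 * np.sin(2 * x)

def fprime(x):
    return np.cos(x) + np.cos(2 * x)

# |f''(x)| = |-sin(x) - 2 sin(2x)| <= 3, so M = 3 is a valid curvature bound.
M = 3.0
x = 0.0                      # initial contact point

for it in range(200):
    step = fprime(x) / M     # peak of the quadratic lower bound touching f at x
    x += step
    if abs(step) < 1e-12:
        break

print(f"x = {x:.6f}, f(x) = {f(x):.6f}, iterations = {it + 1}")
# Converges monotonically to the local maximum at x = pi/3 (~1.047) with no
# step-size tuning; gradient ascent with a fixed step of 2.0 would jump from
# x = 0 straight past that maximum on its first iteration.
```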
{"url":"http://www.cs.columbia.edu/~jebara/htmlpapers/ARL/node40.html","timestamp":"2014-04-20T23:29:20Z","content_type":null,"content_length":"8998","record_id":"<urn:uuid:7b77d80a-1f5d-4d19-8deb-dffedcc09a50>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
the answer is: Simplifying x + 2 = 3x + -1x^2 + 5. Reorder the terms: 2 + x = 3x + -1x^2 + 5. Reorder the terms: 2 + x = 5 +...

What is the probability that a surveyed student opposes legalization of marijuana? In a poll of 626 university students, 292 said that they were opposed to legalizing marijuana.

What is the probability the fourth photograph will be of Katie? Robert has an envelope containing 24 photographs. Six of the photographs are of Robert, and the rest are of his friend Katie. He will choose 4 photographs at random. If the first 3 are of Robert.

Rational zeros for f(x) = x^4 - 4x^3 + x^2 + 16x - 20. I don't understand what rational zeros are.

What is the width in feet? A field is 45 2/3 yards wide. What is the width of the field in feet?

What is 7x/x = 7? This is a fraction.

What is 3 x 1 cm as a decimal?

I am working on a coding problem and I need help. Can you help me with a question?

OK, I have to simplify this: a(b-4)^2 / [3a(b-4)].

Word problem: Penny sells shoes at the local store and recently set up a display to help sell shoes. One third of the shoes in the display are high-tops. One third of the shoes are boots, and one sixth of the...

What's 65% as a fraction or mixed number in simplest form and as a decimal? Hi, I'm having trouble on my math homework, please show work.

What is 2 to the negative third power plus 2 to the negative second power? Exponents as a fraction.

How to multiply 3/2 (1) - 3? This is an algebra equation. I have to make a table.
{"url":"http://www.wyzant.com/resources/answers/fraction?f=new-questions","timestamp":"2014-04-16T19:32:31Z","content_type":null,"content_length":"47078","record_id":"<urn:uuid:5dccf8da-79ef-4205-bf7e-53f3043f5e74>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
Worthwhile Canadian Initiative: New Keynesians just assume full employment without even realising it

And anyone with even an ounce of Old Keynesian blood left in his veins, if they understood what the New Keynesians are doing, would be screaming blue murder that we are teaching this New Keynesian model to our students as the main macro model, and that central banks are using this model to set monetary policy. I have made this point before. And here. But I'm now going to make this point so simply and clearly that any New Keynesian macroeconomist will be able to understand it.

Here is a very simple version of a standard New Keynesian model. Assume no investment, government spending or taxes, and no exports and imports. There is only consumption. Assume a "haircut" economy of self-employed hairdressers, cutting each other's hair, in which all goods are services, with labour the only input, so consumption, output, and employment are all the same thing. And so prices and wages are the same thing too. Assume no exogenous shocks, ever. And no growth either. Nothing exogenous ever changes. Assume a constant population of very many, very small, identical, infinitely-lived agents, with logarithmic utility of consumption, and a rate of time-preference proper of n.

The individual agent's consumption-Euler equation, with r(t) as the one-period real interest rate, is therefore:

C(t)/C(t+1) = (1+n)/(1+r(t))

Ignore the Zero Lower Bound on nominal interest rates. In fact, just to make the central bank's job even simpler, ignore nominal interest rates altogether, and assume the central bank sets a real interest rate r(t).

Suppose the "full employment" (natural rate) equilibrium is (say) 100 haircuts per agent per year consumption, income, and employment. Forever and ever. The central bank's job is to set r(t) such that C(t)=100, for all t. Inspecting the consumption-Euler equation, we see that this requires the central bank to set r(t)=n for all t. Assume the central bank does this.

It is obvious that setting r(t)=n for all t only pins down the expected growth rate of consumption from now on. (It pins it down to zero growth.) It does not pin down the level of consumption from now on.

Suppose initially we are at full employment. C(t)=100. Then every agent has a bad case of animal spirits. There's a sunspot. Or someone forgets to sacrifice a goat. So each agent expects every other agent to consume at C(t)=50 from now on. So each agent expects his sales of haircuts to be 50 per period from now on. So each agent expects his income to be 50 per period from now on. So each agent realises that he must cut his consumption to 50 per period from now on too, otherwise he will have to borrow to finance his negative saving and will go deeper and deeper into debt, till he hits his borrowing limit and is forced to cut his consumption below 50 so he can pay at least the interest on his debt. His optimal response to his changed expectation of other agents' cutting their consumption to 50, if he expects the central bank to continue to set r(t)=n, is to cut his own consumption immediately to 50 and keep it there.

C(t)=50, which means 50% permanent unemployment (strictly, underemployment), is also an equilibrium with r(t)=n. So is any rate of unemployment between 0% and 100%.

What can the central bank do to counter the bad animal spirits? If it cuts r(t) below n, even temporarily, we know there exists no rational expectations equilibrium in which there is always full employment.
All we know is that we must have negative equilibrium growth in consumption for as long as r(t) remains below n. It is not obvious to me how making people expect negative growth in their incomes from now on should cause everyone to expect a higher level of income right now from a higher level of everyone else's consumption right now. Sacrificing a goat sounds more promising as a method of restoring full employment.

Did every other New Keynesian macroeconomist already know about this, and just swept it under the mathematical rug? Didn't I get the memo?

"Under the assumption that the effects of nominal rigidities vanish asymptotically [lim as T goes to infinity of the output gap at time T goes to zero]. In that case one can solve the [consumption-Euler equation] forward to yield..."

Bullshit. It's got nothing to do with the effects of nominal rigidities. What he really means is "We need to just assume the economy always approaches full employment in the limit as time goes to infinity, otherwise our Phillips Curve tells us we will eventually get hyperinflation or hyperdeflation, and we can't have our model predicting that, can we?"

That Neo-Wicksellian/New Keynesian nonsense is what the best schools have been teaching their best students for the last decade or so. They have been teaching their students to just assume the economy eventually approaches full employment, even though there is absolutely nothing in the model to say it should.

Remember the Old Keynesian Income-Expenditure/Keynesian Cross diagram? What we have here, if the central bank sets r(t)=n, is a version of that diagram in which APC=MPC=1 for all levels of income, so the AE curve coincides with the 45 degree line. Any level of income between 0 and full employment income is an equilibrium.

New Keynesians simply must put money back into the model.

Consumption is too low this period. The central bank lowers the interest rate to increase consumption this period. The equation used to represent the result of the lower interest rate has many solutions with two extremes. One is the desired effect where consumption rises today relative to an unchanged consumption in the future. The other is that consumption today stays the same, the lower interest rate doesn't have the desired effect at all, and instead it just lowers consumption in the future. You focus on that really bad possibility.

Why would a lower interest rate today lead to less consumption in the future? Only if people borrow more/save less today, and so have to pay debts or have less accumulated wealth for consumption
(I don't really know why it could need to get below -100%. Saving then is giving a gift and borrowing is accepting a gift. If haircuts are scarce, then there should be no problem with demand.) With these representative agent models, net wealth, and so assets and debt have to be zero in equilibrium. This means that what debt and wealth mean to the people are only disequilibrium conjectures. If the sunspot pushes down consumption this period, then the central bank sets an interest rate this period that is too low for that low level of consumption this period. Consumption then isn't too low. And everyone knows this, so the sunspot can't lower consumption. And even if the central bank is too slow to fix it this period, they will eventually. “It is not obvious to me how making people expect negative growth in their incomes from now on should cause everyone to expect a higher level of income right now from a higher level of everyone else's consumption right now… New Keynesians simply must put money back into the model." I think I follow this because your explanation is so clear. There’s nothing in your simplified model that would contradict your conclusion. Whether its money that goes back in or not - something else EXPLICIT needs to go in in order to ensure a desired level effect beyond the rate of change effect. Not quite sure about your example (how can CB set real rate directly?), but in basic NK model as described in Gali, the logic is pretty much the same: any trajectories other than the equilibrium one would explode as t -> infinity, so they are ruled out. And even that requires monetary policy to respond strongly enough to inflation (even a hypothetical, off-equilibrium inflation), otherwise one could get indeterminacy and sunspots. I believe John Cochrane has a paper that criticizes NK models for similar reasons ("Determinacy and Identification with Taylor Rules", JPE 119(3)). On the other hand, maybe we shouldn't interpret all the mathematical assumptions too literally, and just take NK model as a simplified representation of a particular economic story. In that case, do such mathematical issues really matter? Nick, I wish that this topic you keep raising gets more attention because I find it extremely interesting. (In fact I think that a book reviewing all your quarrels with standard economics would be a great tool for graduate students.) I apologize for the unstructured comment I hope that most of it is useful for the topic you are considering. I argued in your old post that with the threat of the CB implicit in a Taylor rule infinite consumption cannot be an equilibrium solution since the agent will anticipate physical limits (I tried to argue through the budget constraint but you proved me wrong). I am still trying to find a similar argument for avoiding consumption going to zero. But in this comment I want to argue that Galí is I think that if one assumes that the effects of nominal rigidity vanish asymptotically is enough to get full employment, in the NK sense. I'm not sure why they would vanish asymptotically, but if in the long run we have flexible prices the output gap will go to zero, since under flexible prices the output equilibrium is the natural level of output. 
And its obtained by the intersection of labour supply and demand, the production technology and market clearing(assuming for simplicity Y(t)=N(t): Using labour supply to find the real wage: Where f'(N(t)) is the marginal disutility from work, and using technology Y(t)=N(t) and market clearing Y(t)=C(t) And this gives the natural level of output, if the disutility from labour was simply disutility=N(t) (for simplicity), then the output is simply: So if the effects of price rigidity vanish asymptotically we get the natural level of output and zero output gap. Why would the effects vanish? I'm not sure, but I give a tentative explanation. All we need is that all firms in the limit set the price equal to the markup over marginal cost. If shocks vanish asymptotically (whatever kind of shock) then as time tends to infinity all firms will be able to set its price. Or maybe better explain, assuming vanishing shocks marginal cost in the limit is constant, and since a proportion theta of firms are able to update the price each period all firms (except theta^t proportion) will end up setting the price equivalent to the flexible price equilibrium, and thus the natural output will be recovered. Of course this is not the case if the economy is regularly hit by new shocks. In that case I think I need the justification of why consumption cannot go to zero (decrease t+1 consumption always instead of increasing t) if the CB credibly promises to low the real rate every period. I guess that using money in the utility function in a standard NK model, that is just used to find the money supply consistent with the appropriate nominal rate would not satisfy you. Overall I find this topic really interesting and extremely disturbing at the same time and its related to a model I'm trying to construct. I have the feeling that under flexible prices labour supply and demand imposing market clearing and technology determines output, and the Euler equation just pins down the required real rate, and under sticky prices the reverse is true, but it's a bit confusing since in general equilibrium everything is simultaneous. In any case I'm considering cases in the zero lower bound and rigid prices and it gives this sensation. It is clear in the mock economy you setup above that full employment would be reached quickly. I mean with no investment required (everybody already has a pair of scissors), people would simply meet in pair every week and barter a hair cut (you cut mine, I cut yours). I guess a good model should take this activity into account and give us a prediction of how fast it would happen and the magnitude of the effect compared to other variables. The question becomes: Can we salvage the model? Can we modify it to add this aspect to it? I am probably making some sort of fundamental error but I'd like to echo a commenter from your previous post and Roger Gomis: perhaps including disutility from labor (and its associated period by period, intra-temporal FOC) could resolve this issue. In terms of Econ 101 type graphs, you have convinced me that AE coincides with the 45 degree line. However, Y (and therefore C) is pinned down (over the long run) by the interaction of labor demand and labor supply (production function and consumption-leisure decision). If you ignore AE in this scenario, you have the classical model and perhaps analogously, in the NK case, ignore nominal rigidities and you have RBC. 
On the other hand, maybe we shouldn't interpret all the mathematical assumptions too literally, and just take NK model as a simplified representation of a particular economic story. On one level this is right. These models were developed in order to retain something like an IS curve to justify countercyclical monetary policy, but within a framework of intertemporal optimization by rational agents. So in some sense the result of these models is all that matters and the particular way it's arrived at is not so important. New Keynesians already know that lower interest rates raise current output, they don't need Woodford or whoever to prove it. But I don't think you can handwave away the logical problems with these models on those grounds. If all you want is Y_t = Y(r_t), Y' < 0, then just write that; getting there via an Euler equation is a complification, not a simplification. Nick: "New Keynesians simply must put money back into the model." I was with you until then. Honestly if you'd finished with "Delenda est Carthago" it would make as much sense to me. Why should the absence of money be picked on as the crucial flaw in the model? To me the real problem is that RE models in general, not just NK, are apt to have multiple equilibria. Time and again we see some implausible assumption being adopted for lack of a better way to make things determinate. Is it so hard to admit that RE is a bit of a mess, whether it's Lucas or Woodford using it? perhaps including disutility from labor (and its associated period by period, intra-temporal FOC) could resolve this issue. I'd like to hear Nick's response to this but here's my sense of why it doesn't help. You are thinking that in the underemployment equilibrium, the marginal disutility of labor is lower than at the full employment equilibrium. So people should want to work more, i.e. there should be excess demand for haircuts and excess supply of labor. But that's only looking at one side of the market. Remember, the price of a haircut is just the disutility of the labor that produces it. So, yes, the marginal disutility of labor is lower in the underemployment equilibrium, but so is its marginal Put it another way: Suppose I believe that my permanent income (in haircuts) is one haircut per week. Then I choose a consumption path of one haircut per week. And if everyone else does the same, then my lifetime income will in fact be one haircut per week. So there's no reason for me to change my behavior. (And there is no question of a price adjustment -- a haircut can't sell for a price other than one haircut.) It may very well be true that, given the utility of haircuts and the disutility of the labor to produce them, everyone would be better off giving and receiving two haircuts per week. But there is no way for the choices of individual rational agents to get us from the one-haircut to the two-haircut equilibrium. You are forgetting about the condition that determines employment in the model. What you say is true when the real interest rate and the real wage is set exogenously. But so what? These are not market-clearing prices. There are a lot of allocations that satisfy your Euler equation that do not satisfy market-clearing. Is this what you have claimed to discover? David Andolfatto Benoit has it exactly right: "It is clear in the mock economy you setup above that full employment would be reached quickly. 
I mean with no investment required (everybody already has a pair of scissors), people would simply meet in pair every week and barter a hair cut (you cut mine, I cut yours)." If barter were possible, unemployment would be impossible, *even if the central bank set a really stupid r(t)*. The unemployed would just get together and barter their services. NK macroeconomists want to have unemployment when the CB sets r(t) wrong, and full employment when the CB sets r(t) right. NK models only make sense as a model of a monetary exchange economy. But they don't have money in the model. That's why, Kevin, I said they must put money back into the model. If there were a stock of money, then some sort of real balance effect would be included in individual agents' transversality condition, and unemployment and deflation would lead to agents holding too much real money, and they would plan to spend more than their expected incomes. primed: if we allowed barter exchange, then we would have an RBC model. Equilibrium where the marginal disutility of labour = the marginal utility of a haircut. But in that model, even a stupid law that set r(t) too high ( a minimum interest rate law) couldn't prevent full employment. Bill: "Why would a lower interest rate today lead to less consumption in the future? Only if people borrow more/save less today, and so have to pay debts or have less accumulated wealth for consumption tomorrow." Remember, this is a model with identical agents. If they all have a declining path for C(t), they all have a declining path for Y(t) too. There is never any borrowing or lending along any equilibrium Roger: "I think that if one assumes that the effects of nominal rigidity vanish asymptotically is enough to get full employment, in the NK sense. I'm not sure why they would vanish asymptotically, but if in the long run we have flexible prices the output gap will go to zero, since under flexible prices the output equilibrium is the natural level of output." But if (say) you had a model where the AD curve were vertical, then no amount of price flexibility can get you to full employment. You need a downward-sloping AD curve (that cuts the LRAS curve) plus price flexibility, to get you to full employment. The standard NK model has a vertical AD curve, except it's a very thick AD curve. Contrast to the standard ISLM, where you get a downward-sloping AD curve in the usual case. Falling P means rising M/P with means a movement down along the AD curve to the right. But the NK model does not include M/P. David: these are self-employed hairdressers. The production function is one haircut per hour. P and W are the very same thing. The supply of labour and the supply of haircuts are the same thing. The demand for labour and the demand for haircuts are the same thing. When there is unemployment, this means they cannot sell as many haircuts (= cannot sell as much labour) as they want to sell. JW: my response is basically the same as yours. Except that the marginal product of labour is *always* one haircut per hour. But if you can't actually *sell* that extra haircut (for money) the marginal product is irrelevant. You can't cut hair if there isn't a customer sitting in the chair. And even if you could, you wouldn't gain by producing an extra haircut if you couldn't sell to anyone. In an underemployment equilibrium, every hairdresser would like to both sell and buy an extra haircut. But none will buy unless he can sell, and none can sell unless someone else buys. Nick's last comment is very clarifying. 
(New) Keynesians believe both that the economy will not have full employment without appropriate monetary policy, and that it will reliably reach full employment with appropriate monetary policy. The challenge is to write down a model that has both those properties. (Or, really, to write a model that captures the most important facts about the world that give the economy both those properties.) The goal is a story in which monetary policy to be both necessary and sufficient. You don't solve the problem by telling a story in which resources are fully utilized without any need for a central bank. Another, somewhat different way of looking at this same problem is, if markets in general set prices optimally, why does this one price, the interest rate, have to be set by a central planer? One answer, which Nick obviously likes, is that money is special: It can only be produced by the government, and there is no substitutability between money and any privately produced goods. Of course that's not the only possible answer. [Edited to fix typo NR] That sentence should be: "(New) Keynesians believe both that the economy will NOT have full employment without appropriate monetary policy..." [Fixed. NR] I'd like to hear Nick's response to this but here's my sense of why it doesn't help. You are thinking that in the underemployment equilibrium, the marginal disutility of labor is lower than at the full employment equilibrium. So people should want to work more, i.e. there should be excess demand for haircuts and excess supply of labor. But that's only looking at one side of the market. Remember, the price of a haircut is just the disutility of the labor that produces it. So, yes, the marginal disutility of labor is lower in the underemployment equilibrium, but so is its marginal product. (emphasis mine) No, because per assumption the market for labor isn't clearing. If you reduce the number of haircuts done by half, it would be really really weird for the marginal product of an additional haircut to go down! In Nick's recessionary economy MP > MC and Qd < Qs. JW: "(New) Keynesians believe both that the economy will not have full employment without appropriate monetary policy, and that it will reliably reach full employment with appropriate monetary policy. The challenge is to write down a model that has both those properties. (Or, really, to write a model that captures the most important facts about the world that give the economy both those properties.) The goal is a story in which [appropriate] monetary policy [is] both necessary and sufficient. You don't solve the problem by telling a story in which resources are fully utilized without any need for a [good] central bank [doing the right thing]." Yes. Clearly stated. Nick: "If there were a stock of money, then some sort of real balance effect would be included in individual agents' transversality condition...." I'm out of my depth here but I think you've got too fond of this real-balance idea altogether. Unless I've got things wildly wrong (quite possible) simply adding a stock of money won't get you a real balance effect. You'll need a growing population, or OLG, or something like that. With a constant immortal population and separable utility there's no Pigou effect. At least that's what I gather from reading Benassy but I'm far too lazy to work through the proofs. Kevin: the Pigou effect is a subset of the real balance effect. 
I haven't read Benassy (whose work I think highly of, as you know) on the Pigou effect, but I disagree with what you say he says on Is the point you are making that New Keynesians have nominal interest rates in their models but no debt or debt level in them? Curious, in your example, what action does a central bank perform to hit its desired "real" interest rate for haircuts? Does it have a bunch of wigged mannequins to employ unemployed hair cutters - central bank creates demand for haircuts? Does it have a robot that can give haircuts - central bank creates supply of haircutters? I am having trouble following the metaphor all the way through. Perhaps the argument being made by NK is that in the long run (so to speak) the economy resembles the barter economy assumed by RBC. In fact it could be argued (I am not saying NK argues this) that from the 'long arc of history' standpoint, even a monetary economy is a 'nominal rigidity'. It is obvious that setting r(t)=n for all t only pins down the expected growth rate of consumption from now on. (It pins it down to zero growth.) It does not pin down the level of consumption from now on. To be charitable, this is a modelling artifact. The modeller *should* have talked to the First Year Calculus Profs who would have clearly explained the difference between a derivative f'(x) and its underlying function F(x)+C. Derivation is not a linear operation, you lose information and the inverse operation, integration, produces a series of answers with C as a constant parameter. What you are on to Nick is that C, the "anchor point" is completely absent in New Keynesian models whereas Old Keynesian models had them. You are trying to reinsert it through the use of money. Fair enough, it's a perfectly valid approach. But this whole argument also illustrates why mathematical rigour is good. Rigour ensures that we don't overlook anything or generate red herring answers that occur through imprecise model specifications. primed: that would justify the assumption of full employment in the limit. Not sure though whether it makes even logical sense. If the economy reverts to barter at time T, how does the central bank set r(t) at time T-1? Dunno. Frank: "Is the point you are making that New Keynesians have nominal interest rates in their models but no debt or debt level in them?" No. It's that there is no stock of money. "Curious, in your example, what action does a central bank perform to hit its desired "real" interest rate for haircuts?" Ask the NKs that question. Implicitly, the CB borrows and lends money to any agent who wants to borrow or lend, at an interest rate r(t) (plus inflation). But there is no money ever borrowed or lent in equilibrium, because C(t)=Y(t). I had another comment on this on the other post but I think it got lost in spam. Here's a counter example, but not a very good one. Suppose the CB's "monetary rule" is that each period they choose the interest rate to guarantee full employment. In other words they pick r(t)=((1+n)C(t+1)/100)-1. That's not "n" exactly... but it will be. That C(t+1) actually has an expectations operator on it. If the CB's policy is credible then the only rational expectation (or perfect foresight) is to believe that E(C(t+1))=100 as well which simplifies the rule to r(t)=n. In this case CB's policy acts as a coordinating mechanism for agents expectations. So in this case you got an equilibrating mechanism. In fact you got no dynamics, it just jumps to full employment always (given absence of shocks etc). 
But it's not a very good counter example because the NK models don't usually have CBs in them which care only about full employment and don't give a fig about inflation. (Also, in this model why stop at 100? Since neither the CB nor the agents care about inflation, why not go for ... 101?) Now consider the diametrically opposite case, where the CB only cares about inflation (inflation nutters). Consider... consider... consider... yup, there's absolutely no reason for why agents would expect C(t+1)=100 or any kind of other return to equilibrium. If you got a NK model with such a policy rule that returns to full equilibrium after a shock, then yeah there's probably some kind of cheating going on. The intermediate case is where the CB's reaction is some kind of weighted average of inflation and employment. Remember that the Euler is just one equation in a 3 equation framework. In the long run we reach full employment because our definition of full employment accommodates to our position and our experience becomes our reality. I'm moving my (slightly modified) comment from the other thread over since the discussion has moved over here. Hopefully that's okay. In that prior discussion I had an aside about the issue of labor markets in the NK model wasn't particularly useful -- indeed it was a distraction. I was expressing my skepticism about the usefulness of the NK/DSGE blend in general, and specifically about their labor market modeling assumptions as being the worst part. But that's not particularly relevant to the discussion at hand. Which I take as to whether there is a meaningful trend/equilibrium/full employment that the model comes back to because of equilibriating forces within the model. In the terms you were asking: is consumption halving for everyone an equilibrium? There I think the answer is no. There isn't just a optimization over consumption but over labor supply -- and as you note in your response to me the model assumes that the agent can work as much as he wants at the prevailing wage. And in the case where you halve consumption the agent would want to work substantially more. There is lurking somewhere in the depths of the model a first order condition with respect to labor supply as well. With a diminishing returns to labor production function around too. It's just when reducing it to the 3 variable system that gets swept under the Or think about it in terms of the individual agents. Suppose one halves her consumption and expects everyone else to do the same. Then, at the prevailing wage she has an incentive to increase her hours of work. And by the assumptions of the model she can do so without affecting wages. She can consume more, but because she is also working more will not violate the transversality condition. But then, so does everyone else. And that gets you on the way back to an equilibrium with what they call full employment in the model. With respect to your response to David above, the production function isn't one hair cut per hour. Instead it is F(n) = A_t * n^alpha (eq 5 in the Gali paper linked to earlier). There are diminishing returns to labor on the aggregate level and that matters. Determinant: I think that's what I'm saying. (You use "C" to represent the constant of integration, presumably, rather than consumption.) But normally economists are well aware of this problem. Somehow it slipped past them here, I think. Maybe due to confusing the individual agent with the economy as a whole, in a representative-agent model? 
Or more likely, from mistakenly applying an assumption which would make sense in the Old Keynesian ISLM model (where the effects of nominal rigidities really do disappear and ensure full employment in the long run, at least under certain assumptions) to a New Keynesian model which does not have a well-defined downward-sloping AD curve. notsneaky: I had a look through the spam filter, and think I remember fishing out one of yours very early this morning. Nothing in there now. I'm OK with assuming (for simplicity) the CB just targets full employment in this case. Given perfect information on shocks, that seems OK (let's ignore that, strictly speaking, that makes inflation indeterminate). But I'm not sure I'm getting how the CB's setting r(t) in the way you suggest acts as a coordination mechanism. Wouldn't sacrificing a goat, or just using cheaptalk a la Schelling focal points, work even better than messing around with r(t)? Remember that C(t) can jump. And we normally assume the CB sets r(t) an instant before agents choose C(t), because agents choose C(t) after observing r(t). Sjysync: no worries about moving your comments over here. "With respect to your response to David above, the production function isn't one hair cut per hour. Instead it is F(n) = A_t * n^alpha (eq 5 in the Gali paper linked to earlier). There are diminishing returns to labor on the aggregate level and that matters." It is in my model. I've simplified. But as long as we have non-decreasing marginal disutility of labour we still get a well-defined "full-employment" level of C(t). (And strictly, my Consumption-Euler equation assumes separability in C and L, I think). "There isn't just a optimization over consumption but over labor supply -- and as you note in your response to me the model assumes that the agent can work as much as he wants at the prevailing wage. And in the case where you halve consumption the agent would want to work substantially more. There is lurking somewhere in the depths of the model a first order condition with respect to labor supply as well." In my underemployment equilibrium, where C(t)=50, it is indeed true that the self-employed hairdressers want to sell more labour (i.e. sell more haircuts). But they can't, because nobody will buy any more than 50 haircuts. So yes, they are "off" their labour supply curves (i.e. "off" their output supply curves). That FOC is not satisfied. But the individual agent can't do anything about it. He is sales-constrained. (In the background, of course, hairdressers are monopolistically competitive, and an individual hairdresser will cut his price (cut his wage) when the Calvo fairy touches him with her wand. But that makes no difference to my argument here. The speed or slowness of the Calvo fairy simply determines how quickly deflation will set in if the economy is in an underemployment equilibrium). Damn it guys, everybody understood this stuff about the constrained labour demand curve (when firms were sales-constrained) back in the olden days of the 1970's, when Patinkin and Benassy and Clower ruled the roost. You young 'uns have so much to re-learn! When firms are sales-constrained, the labour demand curve is NOT the VMPL curve (or MRMPL curve under monopolistic competition). See Barro and Grossman 1971. Lord: I can imagine a world in which that is true. But that's not the world of the NK model. I think – though am not sure – that you're right about the diminishing returns to labor not mattering. I'd have to crank through the algebra to be sure.
But to be fair you’re simplifying away from their model. But here is the rub: “In my underemployment equilibrium, where C(t)=50, it is indeed true that the self-employed hairdressers want to sell more labour (i.e. sell more haircuts). But they can't, because nobody will buy any more than 50 haircuts. So yes, they are "off" their labour supply curves (i.e. "off" their output supply curves). That FOC is not satisfied. But the individual agent can't do anything about it. He is sales-constrained.” In the NK model that isn’t the way it works. There is a perfectly competitive market for labor among all the differentiated good producing firms and the workers. That market clears (by assumption) in the model and the wage is the same across all the differentiated product producers. In your example, though not in all NK models, wages are perfectly flexible every period. So any worker who drops there wage has infinite demand for labor. Each worker is infinitesimally small relative to the economy as a whole, so they have no effect on overall wages or labor supply. You note “an individual hairdresser will cut his price (cut his wage) when the Calvo fairy touches him with her wand. But that makes no difference to my argument here.” But that isn’t the case. Price and wage are different in the model. Prices are what the individual firms charge while wages are what workers are paid. In your scenario there is a _permanent_ difference between the marginal utility of consumption and the marginal disutility of labor. You’re implicitly not allowing the wage to adjust to clear that market. Each of the workers is happy to lower their wage and work more. Each of the firms would be happy to hire at a lower wage and increase their production (since production isn’t fixed, just the price of output). Instead you’re saying that doesn’t happen. What you want is a different model. One where those adjustments cannot occur for some reason. Which is fine. But that doesn’t mean that under its own terms the NK model doesn’t have a equilibrating force. Half the consumption simply isn’t an equilibrium because every agent has an incentive to push away from it. This conversation reminded me of Robert Hall's famous "consumption is a random walk" result. It's sort of similar isn't it? How does a CB stabilize that? And speaking of Hall, this also sort of makes some sense of Paul Krugman's reporting that "Hall used to be famous at MIT for talks along the lines of “Not many people understand this, but the IS curve actually slopes up" " Hmmm. That seemed a bit cryptic but if you got that C(t+1) term in there, it does slope sort of slope upward, doesn't it? Also, if we assume perfect foresight (except for a time zero shock which makes sure we don't start at full employment), can we just invert that Euler equation and write c(t+1)=c(t)+(r(t)-n) or is there something wrong with that? "But there is no money ever borrowed or lent in equilibrium, because C(t)=Y(t)." Incorrect. There is no net money borrowed or lent when C(t) = Y(t). Meaning there is no central bank that can push money into a system and pull money out of a system. You and I can lend to each other and still realize C(t) = Y(t). I borrow $10,000 from you at %5, you borrow $10,000 from me at 5%. Net debt between the two of us is $0 at 0% and so C(t) can equal Y(t) at every time t even though there is $20,000 of total debt outstanding. I think Siysnyc is right that typical NK models do have an auto-correct feature if you take them on their own terms. 
Nick's model simplifies away this feature by not separating labor and product markets. Of course it's noteworthy that the feature depends on an assumption (labor market clearing) that is extremely unrealistic in a way that undercuts our main motivation for doing macroeconomics in the first place. I mean, I mostly don't give two hoots (maybe I give one) that the economy isn't producing quite as much as it ideally could: if you care about output, then you want to do growth economics, not Keynesian macro. What mostly bothers me (and presumably most Keynesians and other demand-siders) about recessions is that a bunch of people are out of work. But in NK models this isn't even the case. So Andy, does this auto-correct feature still exist in a NK macro model that does something a little weird (labor market power) to induce unemployment in a representative HH NK model? Like this from Gali, Smets, Wouters: Sjysnc and Andy: I strongly disagree. I will now modify my simple model to add a labour market, and show it changes nothing. Assume each agent owns a salon, but is not allowed to work in his own salon. Each agent hires another agent to work in his salon, and works himself in another's salon. In a barter economy , with W/P on the vertical axis, and L on the horizontal, the aggregate labour demand curve would be horizontal at 1 haircut per hour (because the MPL=1). With monopolistic competition, the aggregate labour demand curve would be horizontal at (1-1/e), where e is the elasticity of an individual salon's demand curve. And the aggregate labour supply curve would be upward-sloping, with a height equal to the (increasing) marginal disutility of labour. "Full employment" equilibrium L (full employment equilibrium C) is defined where those labour supply and demand curves cross. Suppose it's at 100 haircuts per year per salon (=100 haircuts per year per worker. And the full employment equilibrium W/P is (1-1/e). In the underemployment equilibrium, the aggregate labour demand curve is horizontal at (1-1/e), ***until it hits 50 haircuts per year per worker***, and then it turns vertical and drops straight down. It's reverse-L-shaped. That's because the quantity of haircuts demanded is 50 haircuts per salon, so hiring 51 hours of labour cannot be profit-maximising, since that 51st hour of labour will be wasted, since there is no 51st customer. This is what Patinkin/Clower/Barro-Grossman called the "constrained labour demand curve", as opposed to the "notional labour demand curve" that we get if we ignore firms' sales constraint. Assuming perfectly flexible nominal wages, (but sticky prices a la Calvo or whoever) the underemployment equilibrium real wage W/P is determined by the point where the labour supply curve crosses the vertical portion of the constrained labour demand curve. It will be strictly less than (1-1/e). Yes, workers are "on" their labour supply curves. In that (rather stupid) sense, there is no "involuntary unemployment" in this model. But employment and real wages are lower than at full employment. And firm's are "on" their labour demand curves too. BUT THE CONSTRAINED LABOUR DEMAND CURVE IS VERY DIFFERENT FROM THE NOTIONAL LABOUR DEMAND CURVE. Comparing this new version of my model to the original version: in underemployment equilibrium income is still 50 per agent. The income can now be decomposed into wage income plus profit income, but it still adds up to 50, and each agent is still working the same number of hours in both versions. 
In the new version, each individual agent *as worker* can work as many hours as he chooses. He chooses 50, because W/P is so low. But each agent as salon-owner would love to hire more labour and sell more haircuts, but cannot, because nobody wants to buy more. Adding a labour market to the model, and assuming W is perfectly flexible, changes nothing at all. Barro Grossman showed this back in 1971. Another excellent post Nick! This is something I've been wondering about as well. Do you know of any academic papers highlighting this? Also, how do you think it relates (if even slightly) to John Cochrane's criticisms of the NK models regarding their unrealistic assumption that they jump to a new equilibrium to ensure that inflation is determinate? Sjysnc and Andy: BTW: the underemployment equilibrium I have described above is *exactly* what is going on in any NK model, if the CB sets r(t) too high (except I assumed constant MPL for simplicity and they normally assume diminishing MPL). The difference is that *I* am saying we can get that underemployment equilibrium *even if* the CB sets r(t) just right. Yep, Nick, "C" in my first post is a constant of integration, not Consumption. Us engineery types have our own jargon! Anyway, the rest of this thread just seems to be talking in circles about why something isn't there that *should* be there, and everyone assumes is there. Blast it people, just change the model a bit so it has the parameter back in that you need! Fudge factor, margin of safety, whatever. I don't care if you claim that all unemployed people wear blue hats, just make it explicit in the model, or else you will get lost! C'mon, everyone back to First-Year Calc now. notsneaky: it is sort of similar to hall's random walk consumption model. But only sort of. An individual agent who chooses C(t) where r(t)=n, but with random shocks to permanent income, will have C (t) follow a random walk. But permanent income is endogenous in this macro model. (I have my own posts here on why the IS curve slopes upwards, BTW!) "Also, if we assume perfect foresight (except for a time zero shock which makes sure we don't start at full employment), can we just invert that Euler equation and write c(t+1)=c(t)+(r(t)-n) or is there something wrong with that?" Not sure. I think that means ruling out attacks of animal spirits, by assumption. Frank: " You and I can lend to each other and still realize C(t) = Y(t)." As I said explicitly in the post: All agents are identical. If I want to lend to you, you will want to lend to me, so you won't want to borrow from me. Therefore no loans in equilibrium. HJC: thanks! I'm afraid I don't know of any papers on this subject. "Also, how do you think it relates (if even slightly) to John Cochrane's criticisms of the NK models regarding their unrealistic assumption that they jump to a new equilibrium to ensure that inflation is determinate?" I don't remember reading John Cochrane on that. It *might* be related. I don't know. Nick: No problems, here is a link to a Cochrane paper if you are interested. Not sure. I think that means ruling out attacks of animal spirits, by assumption. That's why I make the exception for the zero-time time shock. If we're gonna analyze the model seriously and consider whether it's equilibratin' or not we have to start off away from full employment, inflation=target, and see whether it comes back or not. But we don't want to get bogged down in thinking about expectation formations and all that stuff. 
So let's assume that in time zero there's an unanticipated shock (either to output gap or inflation) but after that both consumers and the CB have perfect foresight, just for simplicity. Now, "assume perfect foresight" doesn't exactly jibe with "there is an unanticipated shock" but for the purposes of taking the model apart, looking at all the axles, belts, rotors, and screws that make it up, I think it's a reasonable way of looking at it. So. If I do get to invert the Euler equation we have (in logs, deviations from "full employment", all that) c(t+1)=c(t)+(r(t)-n) ---- the upward sloping IS curve (except for initial period) pi(t)=pi(t-1)+k*c(t) --- Phillips curve r(t)-n=f(whatever) --- monetary rule. Since CB has perfect foresight "whatever" means that you can pick any lags or leads you want to put in there. This is actually the Ramsey model way of looking at it with the difference that r(t) is not determined by MPK but by a CB. And at that point it's simply a question of whether this system of difference equations has a stable steady state where c=0. So here we are not assuming that everyone believes that the economy always returns to full employment, we're just asking what kind of f(whatever) function will get it back there and is consistent with such beliefs. Here's the weird thing. Because this system DOES have an upward sloping IS curve - the growth rate of consumption depends positively on r - that flips all the usual conclusions on their head. Iterate and work through the algebra. The CB should pick LOWER interest rates when inflation is high. It should pick LOWER interest rates when output is above equilibrium (to bring it back down). This is weirding me out somewhat. "As I said explicitly in the post: All agents are identical. If I want to lend to you, you will want to lend to me, so you won't want to borrow from me. Therefore no loans in equilibrium." Yes loans in equilibrium. You and I are both hair cutters and hair cut recipients. I want to ensure that I always have someone to cut my hair and you want to do the same. I do this by borrowing $10,000 from you and buying $10,000 in haircuts from you in advance. You do the same, borrowing $10,000 from me and buying $10,000 in hair cuts from me in advance. We owe each other $10,000 and we owe each other $10,000 in future hair cuts. You are making an assumption that borrowed money is always spent on goods delivered when the borrowed money is spent - the impatient borrower. Borrowed money can also secure goods that cannot be delivered until some time in the future. "I'm OK with assuming (for simplicity) the CB just targets full employment in this case. Given perfect information on shocks, that seems OK" Ok, but then I think that if you agree to this, then you're giving away the game. If there's one monetary policy which can lead back to full employment then there can be others. It's just a question of specifying an appropriate monetary policy (which is something like, choose appropriate initial r(t) then follow some monetary rule afterwards). Once you got that, and it's rational expectations (or perfect foresight, except for a time zero shock) all around, you got your equilibrating mechanism and an in-model justification for "consumers assume that output gets back to full employment". "(let's ignore that, strictly speaking, that makes inflation indeterminate)." Yes, but that's just because the counter example was purposefully silly. Get a Phillips curve and a different monetary rule then we can bring inflation back into it (I think).
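To make the point at issue concrete, here is a minimal numerical sketch of the first-order condition in that system (log utility so the log-linear Euler holds up to the usual approximation; the numbers are arbitrary illustrations, and this is a check of the Euler equation only, not anyone's full model):

```python
# With the log-linearized Euler equation
#     c(t+1) = c(t) + (r(t) - n),
# a policy of r(t) = n for all t is satisfied by ANY constant path for c.
# The equation pins down the growth rate of consumption, not its level.

import numpy as np

n = 0.05          # rate of time preference (arbitrary illustration)
T = 20            # horizon for the check

def euler_residuals(c_path, r_path):
    """Residuals of c(t+1) - c(t) - (r(t) - n) along a path."""
    c = np.asarray(c_path, dtype=float)
    r = np.asarray(r_path, dtype=float)
    return c[1:] - c[:-1] - (r[:-1] - n)

r = np.full(T, n)                            # central bank sets r(t) = n forever

full_employment = np.zeros(T)                # c = log(C/100) = 0, i.e. C = 100
half_employment = np.full(T, np.log(0.5))    # C = 50 forever

for label, path in [("C=100 forever", full_employment),
                    ("C=50 forever", half_employment)]:
    res = euler_residuals(path, r)
    print(label, "max |Euler residual| =", np.abs(res).max())
# Both print 0.0: the first-order condition alone cannot tell the two
# paths apart, which is the indeterminacy of the level being argued here.
```

The residuals are zero for both paths: the Euler equation restricts growth rates, so something else — a rule tied to the level, or a real-balance term — has to pin down which constant path the economy is on.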
But I'm not sure I'm getting how the CB's setting r(t) in the way you suggest acts as a coordination mechanism. Wouldn't sacrificing a goat, or just using cheaptalk a la Schelling focal points, work even better than messing around with r(t)? It's the definition of equilibrium. Choices of CB and households have to be mutually consistent. Suppose folks insist on believing in 50 haircuts. Then the CB sets the interest rate at (1/2)*(1+n)-1 until the stupid consumers realize that expecting 50 haircuts for ever is just wrong. It's an-off equilibrium path so it never happens given that this model has (and has to have) rational Maybe sacrificing goats would work too. Remember that C(t) can jump. And we normally assume the CB sets r(t) an instant before agents choose C(t), because agents choose C(t) after observing r(t). I'm not sure why this matters. Let's try a phase diagram. You got your perfect foresight Euler (logs, deviations from full employment): That's just the Ramsey equation with r on the x-axis instead of capital. If r is less than n, c is falling. For dc/dt=0, r=n. Now we need a dr/dt=0 nulcline. Say the monetary rule is dr/dt=0 is the x-axis, full employment. There's at least one stable path. If there's a period zero shock then the CB just needs to choose an initial r to get on that path and then follow the rule above. There is a weird implication here. If, say, the period zero shock is c (0)>0 (the shock just happened at the very end of the period and in this one instance only it was not perfectly foresighted), the CB needs to pick an initial r less than n in order to get on the declining c stable path. In other words, it needs to cut interest rates when output is above full employment just so it can raise them later. It's "discretion when shock happens, rule based monetary policy afterwards". That's the big "C", the constant of integration, you and Determinant are talking about, the initial condition which pins down the solution to the difference equations. Of course this is not the policy recommendation usually made or the implication that is drawn from these models. notsneaky: I'm not sure about this, but what you are saying sounds to my ears similar to what John Cochrane is saying. (The difference is that JC has the CB responding to inflation, while you have the CB responding to the output gap, but that's not an important difference in this context). Simplify: How about this: at the beginning of time, the CB makes the following ("blow up the world") commitment: "If everyone always chooses C=100, I will always set r=n. But if anyone ever chooses C =/=100, I will choose r=2n, and keep it there forever." Since there exists a physical upper limit on C, (and a limit less than that because even monopolistically competitive firms with sticky prices will ration haircuts if C(t) exceeds competitive equilibrium) this rules out any perfect foresight equilibrium path except C=100. Because r=2n forever means C(t) is growing forever, which can't happen. (Actually, that's not perfectly true, since C(t)=0 for all t is also an equilibrium path for any r(t).) Is my "proposed" monetary policy rule any different, (in the way it works) from yours? Frank: People in this model would not want to buy haircut insurance from each other. They don't need it. People are identical, and always willing to sell more haircuts in equilibrium. Stop throwing out red herrings. Stop trolling. This post is about NK models. 
If you don't have a clue about NK models, and what they assume and do not assume, then stop trying to change the subject. "Then every agent has a bad case of animal spirits." If it is possible that every agent can get a bad case of animal spirits and every agent knows that this is possible then every agent may want to buy haircut insurance. You posit a hard cutoff of demand at 50 and no matter what you can’t get demand beyond that. And are stripping down the implicit model by only looking at one good. Again, this is a different model than the NK one. Whether it is a better model or not is a separate question. But it isn’t how the NK model works. In the NK, there is the composite good made up of all the different monopolistically competitive firms goods. Each of those firms hires the non-differentiated labor in the market, and it’s all at the same wage. And if the prices of one of the intermediate goods is lower, then more of that is bought and goes into the composite good. So suppose you have that initial drop in expectations down to producing 50. The Calvo pricing assumption in the model is that a fraction theta of the intermediate goods producers lower their prices substantially and increase their output. The model has smooth demand curves for all firms by the technology of the model—you can’t get the sharp drop off in demand you posit. These intermediate goods producers who adjust then hire a large amount of labor in the spot market at the new lower wage but that still means that overall labor supply rises relative to the just 50 expectation. And all wages in the economy sink to that new lower level because the model has the labor market clearing. The marginal utility of consumption and marginal disutility of labor are equalized. Those firms that can’t adjust their prices see their demand cut back. Not sure of the sign of the change in profits for those firms since you have lower output, but costs of their only input (labor) has also declined, but those profits continue to be sent back to all the households in the economy since they all own equal shares of each company, so the households have income from that source as well. But in the model, as stated and not saying the model is a good approximation of reality, there are forces that push back to what it calls the full employment equilibrium. Now does this labor market make any sense? Nope, none at all. That was what I alluded to in my post on the other thread. There is no involuntary unemployment and attempted fixes such lotteries across agents are even worse. And as Andy notes above this makes the NK model not a great guide to think about what’s happening in the economy during a recession. And is one of the places where I have the major issues with the model. But the ability of firms to sell more output at a low enough price isn’t where I think the NK really falls down. Sjysnyc: Ah, you may not realise this, but you are arguing with the guy who invented macro with monopolistically competitive firms, way back in 1987 before it was cool! I exaggerate, of course, but I did beat Blanchard and Kiyotaki to publication by a month or so, IIRC, (but nobody read my paper, possibly because theirs was better). I invented NK macro! (Well, not really.) But I did have monop comp in the back of my mind all the way through this post. It doesn't make any difference (except the composition of aggregate demand between salons becomes indeterminate in the limit as we approach perfect competition). 
Assume each salon has the the exclusive right to a particular style, and people have a taste for varity of styles, so each salon faces a downward-sloping demand curve, as a function of aggregate demand per salon, and the real price: Ci = C.(Pi/P)^-e Assume for simplicity the CB had targeted zero inflation since the beginning of time, so all salons have the same price when the animal spirits attack and C drops to 50. Each salon would like to cut its nominal price Pi, in order to cut its real price Pi/P, and move down along its demand curve to the point where MR=MC. A small fraction of the salons (those touched by the Calvo fairy) will do just that. But if that fraction of the salons reduce their real price, all that means is that the remaining salons' real price is increased. The average real price across all salons is one, by definition. All this does is change the *distribution* of demand between firms. Price cuts are a "zero-sum" game. Now, if we situated these salons in a normal macro model, with aggregate demand determined by (say) ISLM, or MV=PY, so you get a downward-sloping AD curve, the fall in the general price level P would also move the average salon down along that AD curve to the right, increasing C. So total consumption and employment (and W/P) would rise. But the NK model simply lacks that feature. P(t) does not appear in the model. Only P(t)/P(t+1) appears in the model, and it only influences demand via the gap between real and nominal interest rates. (And if we take the limit as e approaches infinity, we get perfectly competitive salons.) Hello! (first comment from a long-time lurker) Very insightful, as always. Now I do not have an expertise in NK models. If in your model, we add two (not unreasonable) assumptions: 1. Agents are not memoryless 2. At least one agent believes that the sunspot-shock is temporary Then wouldn't the economy return to its equilibrium output without any need for change in nominal agregates (or even without money)? I'm not sure if NK-models state these assumptions explicitly (or if including them violates the canonical NK model). I thought we weren’t supposed to use argument from authority anymore? Then wouldn’t we just say Mike Woodford is really smart and has lots of published papers that use NK models so they must be The monopolistic competition does make a difference—or at least so it seems to me—as now you’re bringing in the very adjustment mechanisms that get you back to what counts as full employment in the NK model: allowing labor supply to increase and prices to adjust. So take the fraction of intermediate firms that do adjust prices. They drop their prices substantially and hire lots of labor. They have enough demand at those new lower prices now, and can actually expand their production. So output goes up at those firms that adjusted their prices. And unless you’ve got some strange aggregation function overall demand should be greater than 50. At the same time, wages go down not only at those firms but at all firms. [In the background workers are getting not just wages but also the profits from the firms who have cut prices.] And for all the identical workers in the economy they are back on the equilibration of marginal utility of consumption and marginal disutility of labor. You’ve got the mechanism that allows labor supply to increase through the extra hiring—the other equilibrium condition I was talking about earlier. In contrast to your 9/11 648 post there _is_ an incentive to hire that extra labor now. 
Or are you still saying that despite drop in price there can be no more than 50 total haircuts in the economy? Or that the extra haircuts just happen to be exactly canceled out by less at others? Despite the fact that nominal wages have now dropped substantially across the economy and you no longer have a difference between the marginal utility of consumption and the marginal disutility of And then if you think of dynamics that happens again next period as output goes up again as more firms have the ability to adjust their prices. So some further fraction now are producing more, and so it goes on each period with production slowly inching back up, and hence pushing employment back toward the NK model’s concept of full employment. [Well, kind of. This is all a little off as there isn’t any actual way for the consumption to fall from animal spirits in the model, but it does outline the equilibrium forces that push employment back to the model’s concept of full employment.] notsneaky: I'm not sure about this, but what you are saying sounds to my ears similar to what John Cochrane is saying. (The difference is that JC has the CB responding to inflation, while you have the CB responding to the output gap, but that's not an important difference in this context). Actually I'm assuming CB is inflation targeting here too, I just took differences and plugged in the Philips Curve. The actual monetary rule is r(t)=n+a*pi(t), the PC is pi(t)=pi(t-1)+k*c(t). So r(t) -r(t-1)=(ak)*c(t)=b*c(t). So it might very well be the same thing Cochrane is saying, I dunno, I'll have to go and read his blog. I'll come back to the MR below. Simplify: How about this: at the beginning of time, the CB makes the following ("blow up the world") commitment: "If everyone always chooses C=100, I will always set r=n. But if anyone ever chooses C =/=100, I will choose r=2n, and keep it there forever..." I think this would work too but am not sure (is that supposed to be r=2n or just a stand in for any "crazy" r?) What's tripping me up a bit is the exact timing within each period, and trying to combine the assumption of an unexpected shock with perfect foresight afterward. If C=/=0 in period zero but not because of the choice of households but some exogenous shock, does the CB set r=2n? If yes, then I don't think it works. If no, then I think it does, since all you want is to pin down the correct expectations. We have c(t+1)=c(t)+(r(t)-n) but - because as you emphasize c can jump - this does NOT apply in the period right after the shock... right? The Euler implies consumption smoothing but unexpected shocks can make you jump from one path to another and only once you're on the new right path does it hold. The policy I'm thinking off above involves a monetary rule EXCEPT for the initial period (or given the timing within each period and the lags, the period after the shock) where the CB chooses an r(t) in a "discretionary" fashion to get on the appropriate stable path. It's MR after that. So consider a output targeting rule r(t)=n+v*c(t). Then r(t)-r(t-1)=v*(c(t)-c(t-1)) which means that dr/dt=0 is the same nulcline as dx/dt=0, the vertical line at r=n. In that case if a shock happens the only stable "path" (actually a point) is to just set r=n but that means there's no adjustment back to c=0. Interest rates only stabilize the economy, they don't bring it back to full employment. In that case you're perfectly right. No equilibratin', self or otherwise. 
I think this has actually been emphasized in some of the papers and textbook presentations of the NK model (I'm not a NK myself, just playing one here, because I don't think there's any genuine NKs involved who are willing to jump in and defend their framework, so I'm just filling in that void) (Note that reacting to the output gap in a rule based fashion is a different policy than "set whatever r is necessary to get back to full employment".) If you have a Taylor style MR, say r(t)=n+b*c(t+1)+(1-b)*pi(t) (I put c(t+1) in there because of perfect foresight and it reduces the worrying one has to do about the time subscripts) then dr/dt=0 nullcline is a downward sloping line rather than the x-axis or the vertical axis at r=n. But there's still a stable path. There is still that crazy implication that if the economy goes into recession (c(0)<0) then what the CB needs to do is first RAISE interest rates, to get on a path where it can cut them later. Like I said it's weirding me out a bit and I'm very very un-confident that anything I'm saying here is correct. Subject to that caveat/confession, if I was gonna declare a winner in the NicK vs. NK fight I'd give you most of it. It's not exactly that the NK models just "assume that the economy comes back to full employment". It's rather something like, "the rational expectations consumers know that the CB will pick an initial value of r to get on a stable path which leads back to full employment", and that it's MR is "sensible". The part that seems to get left out of the how the NK models are described is that for all this to work (and it does work with ratex or perfect foresight, I think) the CB has to choose a crazy initial interest rate. Nick: Arguably the problem stems from the fact that the model which we teach in intro macro as Keynesian is actually more akin to Pigou's macro model than to Keynes'. https://www.uoguelph.ca/ Akshay: Thanks! If agents believe that the effect of the sunspot will be temporary, I think it has no effect at all. Because if each agent thinks his income will rise from 50 back to 100, and that the CB will keep r (t)=n, he will immediately want to spend more than 50, which isn't a rational expectations equilibrium. If all else fails, the argument from authority is a good one! (Not that I have much.) When my dentist, doctor, or car mechanic uses it, I put some weight on it. "So take the fraction of intermediate firms that do adjust prices. They drop their prices substantially and hire lots of labor. They have enough demand at those new lower prices now, and can actually expand their production. So output goes up at those firms that adjusted their prices. And unless you’ve got some strange aggregation function overall demand should be greater than 50." But I could equally well counterargue that when they drop their relative prices it simply raises the relative prices of the remaining firms. So that merely causes a re-distribution of production and employment between the two sets of firms. If we had a well-defined vertical AD curve, this would be what *must* happen. If instead we had a well-defined downward-sloping AD curve, this *could not* happen. But in this case we do not have a well-defined AD curve at all. It's like the AD curve is vertical (because the level of P doesn't matter) but it's very very thick. Anything can happen. Put it another way: yes, that fraction of firms cutting prices could result in an increase in aggregate C; that's an equilibrium. 
But it's also an equilibrium if it causes no change in aggregate C, just a redistribution. Sacrificing a goat might work too; or might not work. We have a multiplicity of equilibria. notsneaky: "(is that supposed to be r=2n or just a stand in for any "crazy" r?) " It didn't have to be 2. Any r(t) greater than n for all t would have the same effect. Because it means C(t) must grow over time along a perfect foresight path. You lost me a little on what you said immediately after that. I'm still in the model where there are no exogenous shocks (except sunspots). So I'm saying the CB threatens to raise r(t) permanently if agents ever choose C(t) below full employment as a result of seeing a sunspot. "There is still that crazy implication that if the economy goes into recession (c(0)<0) then what the CB needs to do is first RAISE interest rates, to get on a path where it can cut them later. Like I said it's weirding me out a bit and I'm very very un-confident that anything I'm saying here is correct." Yep. Understood. I'm with you. It's very similar to writing down the Fisher equation, assuming that money is superneutral so cannot affect the real rate, and saying that if the central banks wants to increase inflation it simply needs to raise the nominal interest rate. It's similar to having a model with an unstable equilibrium where the comparative statics all have the wrong sign, so an increase in demand causes prices to fall. These are some of the paradoxes you get when you assume the economy always jumps to a perfect foresight equilibrium path. Some economists handle this by assuming some sort of adaptive learning. So that the Rational Expectations equilibrium is treated as the limiting case where learning takes place very quickly. Which I think is a useful approach. Put it this way: what I am (maybe) saying is that the sort of NK model I have here will not converge on full employment under any reasonable sort of learning mechanism where agents learn about their future income from observing their past incomes and interest rates, like atheoretical econometricians. BSF: that deserves to be read. My off-the cuff reaction though, before reading it: If this NK model I have here simply added a Pigou effect, the problem would be solved. If all agents expect C(t)=50 from now on, they know that prices will fall, and that M(t)/P(t) will rise without limit, so each agent will become infinitely wealthy sometime in the future, but will still be living like an underemployed pauper, which is not individually rational, so each individual will decide to consume more than 50 if he expects his income to be 50, and this cannot be an equilibrium. Since we have a continuum of equilibria (like a ball on a perfectly flat table) all it needs is a tiny Pigou effect to get the economy back to full employment immediately (unless the CB sets r(t) too high). Correct me if I'm wrong but I think this reduces (or generalizes?) to saying two things: 1)The Wicksellian cumulative process (asymptotically)goes on infinitely in a world with (asymptotically) vanishing nominal rigidities. If so, that's true. Didn't Wicksell explicitly mention nominal rigidities ('history', he called it) as the only reason the cumulative process is bounded? And every once so often, economies do collapse into Howitt-Wicksell hyperinflations, no? The interesting thing to ponder is why there never is a hyperdeflationary death trap. 
2) Think J W Mason has mentioned this but, in a world with otherwise efficient markets, why does one price need to be set 'exogenously'? Think John Geanakoplos and compatriots had a decent swing at that question with their GEI (General Equilibrium with Incomplete Asset Markets) project, An anthology here : http://www.dklevine.com/archive/refs41115.pdf : in sections 5 and 8 Geanakoplos describes how private agents fail to create the financial numeraire, paving the way for a monetary/price regime. The problem's not Wicksell, Nick. The problem's Arrow-Debreu-Lucas. And what should/can the NKs immediately place back into the model? Risk. I'm still not sure I am understanding you, but I don't think we are on the same page. let me guess, and try this: 1. Wicksell's cumulative process was about what happened if the central bank set the wrong rate of interest. Bad stuff happens. And it gets steadily worse over time unless the central bank eventually corrects its mistake. New Keynesians agree. I agree. But New Keynesians say that bad stuff cannot happen if the central bank sets the right rate of interest. I say that that conclusion does not follow from their model. It might follow from a different model (it would follow from a model with an Old Keynesian IS curve, or from Wicksells model), but it doesn't follow from the NK model. The NK's are abusing their own model. If they used it properly, they would see that bad stuff *might* happen even if the CB always sets the right rate of interest. 2. Whoever produces apples must set either the price or quantity (or something) of the apples they produce. That's true whether it's a private or government producer of apples. It would also be true if it were a private or government producer of money, instead of apples. There's a whole separate question of whether a rate of interest is analagous to a price or quantity or something of money. I say it isn't. And that this makes interest rates a bad instrument for monetary policy. But, if for example we take an Old Keynesian IS curve, and assume the CB sets a rate of interest, the CB cannot set any real rate of interest it wants, without the economy eventually blowing up (or down). It must set exactly the right rate of interest (in the long run). The CB cannot freely choose the real rate of interest. The problem's not Wicksell. The problem's not Arrow-Debreu-Lucas. The problem is New Keynesians taking bits from Wicksell and bits from other models and jamming them all together into an incoherent package that doesn't make sense. "And what should/can the NKs immediately place back into the model? Risk." No. Money. They've implicitly got a monetary exchange economy, without actually having money. They need to put money in properly. Risk makes no difference to my point here. You seem to be indicating that with: C(t) / C(t+1) = ( 1 + n )/( 1 + r(t) ) Cutting r(t) below n will tend to lower C(t+1) with respect to C(t) unless: C(t+1) = C(t) * ( 1 + r(t) + dM/dt ) / ( 1 + n ) Where M(t) is the money supply adjusted for liquidity preference. Here, reductions in r(t) that lead to increases in M(t) through a credit channel can increase C(t + 1) with respect to C(t). I don't think C(t)/2 can be an equilibrium path in the NK model unless the policy rule doesn't satisfy the Taylor principle. If the CB follows a Taylor rule, asymptotically paths go to +/- infinity or zero output gap. "Does everybody else in the illuminati know the NK's are just sweeping the whole thing under the rug?" 
In his book, Woodford points out that there is no forward inflation dynamic, i.e. today's inflation in no way determines tomorrow's. It is expectations of tomorrow's inflation that determine today's. So there is no sense in which a small inflation error can lead to a diverging path. A diverging path is *caused* by an expectation of asymptotically diverging inflation. There may be solid economic reasons to reject such paths (see below), but Woodford suggests that the mere fact that inflation is not observed to be diverging exponentially ought to be enough grounds to reject those paths. Maybe the lack of determinacy is a feature, rather than a bug. As Woodford (2000) showed, there are paths of fiscal policy for which running a Taylor rule cannot guarantee the stabilization of inflation. Particularly extreme such fiscal policies are consistent with expectations of asymptotic runaway inflation, Taylor principle notwithstanding. So the fact that the basic NK model is consistent with both convergent and divergent paths just means that it is consistent with both (unspecified) Ricardian and runaway fiscally dominant regimes. John Cochrane (http://faculty.chicagobooth.edu/john.cochrane/research/papers/cochrane_taylor_rule_JPE_660817.pdf) takes up the case of possibly non-Ricardian fiscal policy, and finds that the quantity of government debt along with the fiscal theory of the price level can, in fact, provide the nominal anchor required for determinacy in the NK model (much like the real balance effect does for monetarists). If Cochrane is correct, then I think it follows that agent belief in sufficiently bounded primary surpluses can provide the determinacy that's needed for eliminating diverging paths (or to put us in the divergent path in the case of insufficiently bounded deficits/surpluses). A couple of other interesting takes I found: McCallum 2011 agrees with Cochrane that the divergent paths cannot be ruled out a priori, but he invokes a kind of ratex version of the anthropic principle: Since it is known that the divergent paths of the NK model are not learnable, and since the agents in fact have model consistent expectations that they must have learned, they must be in a world with convergent paths. I.e. it's not consistent to assume both ratex and divergent paths. Minford and Srinivasan (2012) don't buy it: if the agents *really* learned the model they would also know about the divergent paths, and there is nothing to prevent jumping into a different path. They "fix" the problem by adding a fixed inflation target rule that kicks in contingent on being on a divergent path. Since that makes those paths non-divergent, the model is now well determined in the unique convergent equilibrium. My feeling is that any issue that can be remedied by adding a rule, which by virtue of having been added never has to be invoked, must be a pretty minor issue. It's a bit like eliminating infinite Ponzi game paths (infinite profit, over infinite time, with infinitely small probability), to make models arbitrage free when the support of the probability space is not finite. It's a reasonable axiom that makes expectations well defined, but if you really don't like it you can use a finite horizon model, at the cost of having to add more complicated boundary conditions. Then, if you want, you can take the limit of *that* model as T->infinity and then the resulting model won't have the bad paths. "That's why, Kevin, I said they must put money back into the model" Money or FTPL or a contingent different target.
But none of those things necessarily change the dynamics of the converging paths. Eggertsson and Woodford (2003) show that under very general conditions, money in the utility function has no impact at all on the NK model dynamics. But I'm guessing you could use it to rule out the diverging paths, just like the FTPL. Karsten: thanks for your comment. I appreciate it. You are much more up on this literature than I am. I haven't fully digested it, but a couple of immediate thoughts: 1. I didn't specify the Phillips curve equation in my post. In principle, it could be anything from totally fixed prices to an equation with inflation inertia, or whatever. I think I'm making a point here, about the level of employment being indeterminate, that is independent of (though parallel to) the question of whether inflation is determinate. 2. Suppose the supply of land is perfectly inelastic. Suppose I compare two models of the demand for land. In model A, it's a negative function of the price of land, and a positive function of the expected rate of increase in that price. In model B, it's a function of the expected rate of increase only; the price of land does not appear in the model. Model A has divergent paths, but in some sense those divergent paths are pathological. It determines a price of land. Model B does not determine the price of land. The NK model is like model B -- except it's not just P that is indeterminate, it's the level of output that's indeterminate too. It only determines expected Ydot. Standard macro models (like ISLM for example) are like model A. Not sure if that's clear. Shorter version: here's the NK model: 1. expectedYdot = F(r) 2. Pdot - expectedPdot = G(Y) You can ignore 2, and Pdot, and still see that Y is indeterminate. Karsten Howes: "Eggertsson and Woodford (2003) show that under very general conditions, money in the utility function has no impact at all on the NK model dynamics." Gali shows that this is true in his model provided the utility function is separable. The thing is, he does have money in his model; just not in the way Nick wants. Kevin: that's sort of right. But even if you just threw money into the U function, separable U or not, that still ought to give you a Pigou effect, and that would pin down Y to full employment in the long run. Because in the C=50 case, we know that P would eventually fall towards 0, so M/P would rise to infinity, so an individual would want to consume more than his income from cutting hair. Maybe: put money in anyhow, like in U, so Y is determinate, then take the limit of this model as the U of money disappears? Y is only indeterminate *at* the limit, not in the limit?? Is that what Woodford is sorta doing (implicitly or explicitly)? Well I just emailed Woodford to see what he thinks of all this claptrap. :D
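To see, in the crudest possible way, how a real-balance term supplies the missing level anchor, here is a deliberately ad hoc sketch — the functional forms and parameters are invented for illustration and are not taken from any NK paper:

```python
# Deliberately ad hoc illustration of the Pigou / real-balance argument.
# The point is only qualitative: a level anchor appears once M/P enters demand.

def simulate(pigou=0.0, periods=200):
    C_full = 100.0
    C, P, M = 50.0, 1.0, 100.0        # start on the "pessimistic" C=50 path
    for _ in range(periods):
        # sticky prices drift down while output is below full employment
        P *= 1.0 + 0.05 * (C - C_full) / C_full
        # demand = expected income plus a (possibly tiny) real-balance effect
        C = min(C_full, C + pigou * (M / P))
    return C

print("no real-balance term, final C:", round(simulate(pigou=0.0), 1))    # stays at 50.0
print("tiny real-balance term, final C:", round(simulate(pigou=0.001), 1)) # drifts back to 100.0
```

With the coefficient set to zero the pessimistic path is self-sustaining; with even a tiny positive coefficient, falling prices raise M/P, demand drifts back up to 100, and the C=50 path stops being an equilibrium — which is the mechanism being appealed to above.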
{"url":"http://worthwhile.typepad.com/worthwhile_canadian_initi/2013/09/new-keynesians-just-assume-full-employment-without-even-realising-it.html","timestamp":"2014-04-20T06:01:53Z","content_type":null,"content_length":"166086","record_id":"<urn:uuid:c2cfc785-d866-480a-8678-b7f07420934b>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
How, and how not, to explain graphing trig functions Graphing trig functions should be relatively easy for students who have already mastered general function transformations with quadratics and exponentials. Nevertheless, there are so many steps involved that I think an emphasis on procedure and practice is justified in this case. With that in mind, I aimed to build on students' prior understanding of transformations while giving them an outline for how to graph trigonometric functions of the form f(x) = A*sin(B(x-C))+D. I completely butchered the first lesson on graphing trig function and promised my students (all patience and humor to the end of that horrid, horrid hour) to make amends by presenting the procedure crystal clear the following lesson. What I learned from the failed lesson: • Save your geogebra files before moving into the classroom, as the computer can and will have a fit and shut down unexpectedly (bye bye 30 minutes of work) when you plug in the projector. • When creating a nice mnemonic for how to graph these functions (I came up with BC AD - Before Christ, Anno Domini, get it?) go ahead and check first that this is actually a reasonable way to graph these functions. BCAD isn't, in the sense that it doesn't allow you to place points each step of the way. • If something isn't working, stop doing it. I actually persisted for 40 minutes giving students one retarded way after another to graph trig functions, when I should have assigned some other work and taken a few minutes to think things through. It's a testimony to the loveliness of my students that they persevered and, when at last I came to my senses and called the whole thing quits, laughed with me (and, admittedly, at me). In my defense, when I immediately after the lesson googled how to graph such functions... NOTHING came up. Most websites and videos explain A, B, and D - but skip the C. I sat down with my colleagues and we came up with the following method: • D gives principal axis, so make a line there. • A gives amplitude, so make lines at D+A and D-A. Now we know the range of the function. • C is the phase shift. For sine, put a point at (C, D). For cosine, put it at (C, D+A). • B is the frequency, and 2*pi/B is the period. Figure out how long one period is and place a point at (C+period, D) for sine or (C+period, D+A) for cosine. Now draw a full period of the function between the points you've made. Mnemonic? I'll ask the kids to make one up. All my ideas involve dog ate cat bones and dingo ate chubby baby. I'm afraid the lesson itself won't be any more exciting than demonstration and practice. But I think sometimes (especially after the havoc I put them through last time) demonstration and practice is what the kids want most of all. For homework, I give them a modeling activity involving the movement of the sun in three arctic cities. 8 comments: 1. Sine curves are all ugly. What you really need is a mnemonic that tells them how to Draw A Curve Badly... oh well, it's the best I could do. 2. I miss the kids back there. Last week I tried a new, non-rigorous way of explaining why the chain rule works. It wasn't as intuitive when I tried to explain it as it was when I prepared it and some of my students actually got kind of hostile. 3. Alex - that one is great, thanx! Johan - Aw, so what did you do when that happened? 4. Update: this explanation seems to have done the trick. In the end I went with a modification of Alex' suggestion: Draw A Curve Brilliantly. :) 5. 
Kept a stiff upper lip, then cried my eyes out when I got home. 6. Johan, I know, that's just like you. 7. Thanks for this! I was looking for a way to make graphing these easier on my seniors :) 8. Thanks for the advice. Newbie Math Teacher in New York.
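For anyone who wants a quick numerical check of the method in the post, here is a small sketch that computes the guide lines and anchor points for one sample function and plots one period (the particular function is just an example):

```python
# Companion sketch to the graphing method above: given
# f(x) = A*sin(B*(x - C)) + D, compute the guide lines and the two anchor
# points (start and end of one period), then plot one period to check.

import numpy as np
import matplotlib.pyplot as plt

A, B, C, D = 2, 3, np.pi / 6, 1     # example: f(x) = 2*sin(3*(x - pi/6)) + 1

period = 2 * np.pi / B
print("principal axis: y =", D)
print("max line: y =", D + A, " min line: y =", D - A)
print("sine anchor points:", (C, D), "and", (C + period, D))

x = np.linspace(C, C + period, 400)
y = A * np.sin(B * (x - C)) + D

plt.plot(x, y)
plt.axhline(D, linestyle="--")        # principal axis (the "D" line)
plt.axhline(D + A, linestyle=":")     # amplitude guide lines
plt.axhline(D - A, linestyle=":")
plt.scatter([C, C + period], [D, D])  # the two anchor points for sine
plt.title("One period of 2*sin(3*(x - pi/6)) + 1")
plt.show()
```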
{"url":"http://juliatsygan.blogspot.com/2011/03/how-and-how-not-to-explain-graphing.html","timestamp":"2014-04-20T13:19:38Z","content_type":null,"content_length":"91132","record_id":"<urn:uuid:26308ff3-b4d2-4026-8943-4ce63a1d6917>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
Existence of a sink in directed graphs with a certain structure

I'm not a mathematician (I'm an economist) but I hope that this problem is sufficiently non-trivial that someone here will find it interesting. I'm trying to model how workers decide what "skills" to acquire when (a) they have different innate abilities for different skills but (b) they face competitive pressure from others that also choose to acquire those skills. Suppose we have $N$ workers that can choose to belong to any of $M$ different groups. Multiple workers can belong to the same group; a worker can be in one and only one group at a time. They can jump from any group to any other group. A worker $i$ in group $j$ gets value $v_{ij}f(n_j)$, where $n_j$ is the number of workers in that group, $f'(n_j) < 0$, $f(1) = 1$, and as $n_j$ approaches infinity, $f$ approaches 0. $v$ is uniformly distributed. Workers jump between groups to try to maximize the value they receive.

Graph theory formulation: I'm interested in the movement of workers between groups. I've modeled it as a directed graph, where each node is one possible configuration of workers among groups. Two nodes are connected if one worker changing groups can convert one node to the other; edges point towards the greatest utility gain for the "jumping" worker. In simulations, I've found that the system always reaches an equilibrium where no worker wants to jump, and I haven't been able to construct a counter-example. My conjecture is that this is a general property of graphs with this structure, i.e., for any directed graph with the $(M,N)$ structure described above, there exists at least one "sink" with no outgoing edges, and this sink is reachable from all other nodes. Ignoring values, it is possible to draw graphs without "sinks", but this leads to contradictions when I try to assign actual values to the worker-group pairings. None of the approaches I've tried so far seem promising enough to mention here.

1 Answer

Your conjectures are proven in the paper "Congestion Games with Player-Specific Payoff Functions" by I. Milchtaich, published in Games and Economic Behavior in 1996. Usually, the term "congestion game" means a game in which players choose nonempty subsets of the set of resources. Each resource yields a certain utility (the same utility) to all players choosing it, and this utility depends only on the number of players, not their identities. Each player's total utility is the sum over all resources he has chosen. The general result is that such games are potential games, so they admit pure Nash equilibria (the sinks you consider) and any best-reply improvement path will reach one. This particular paper considers a related notion of congestion game where players each select only one resource (as in your case). Again, their utility is based on the number of players choosing that resource, but different players choosing the resource may get different utilities. Under a monotonicity assumption slightly more general than the one you have made, the author shows that a pure Nash equilibrium still exists, and while not all best-reply improvement paths reach such an equilibrium, there exists one which can be reached in this way from any starting position.

Thanks Noah! That's exactly the right citation. – John Horton Mar 6 '11 at 21:05
You're welcome; I'm glad to help. – Noah Stein Mar 6 '11 at 22:49
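For what it's worth, the behaviour described in the question is easy to reproduce in a small simulation. The sketch below is one possible implementation (assuming Python; the choice f(n) = 1/n, the problem sizes, and the tie-breaking tolerance are arbitrary): it runs best-response dynamics and stops when no worker wants to move, i.e., at a sink.

```python
import random

def simulate(num_workers=8, num_groups=4, max_steps=10_000, seed=0):
    """Best-response dynamics for the worker/group model in the question.

    Payoff of worker i in group j is v[i][j] * f(n_j), with f(n) = 1/n
    (decreasing, f(1) = 1, tending to 0) and v[i][j] drawn uniformly.
    Returns the assignment reached when no worker wants to move.
    """
    rng = random.Random(seed)
    f = lambda n: 1.0 / n
    v = [[rng.random() for _ in range(num_groups)] for _ in range(num_workers)]
    group = [rng.randrange(num_groups) for _ in range(num_workers)]

    for _ in range(max_steps):
        counts = [group.count(j) for j in range(num_groups)]
        moved = False
        for i in range(num_workers):
            current = group[i]

            def payoff(j):
                # Group size if worker i were (re)placed in group j.
                n = counts[j] + (0 if j == current else 1)
                return v[i][j] * f(n)

            best = max(range(num_groups), key=payoff)
            if payoff(best) > payoff(current) + 1e-12:
                counts[current] -= 1
                counts[best] += 1
                group[i] = best
                moved = True
        if not moved:
            return group  # a "sink": no worker wants to jump
    return None  # no equilibrium found within max_steps

print(simulate())
```

Note that, in line with the answer above, not every best-reply path is guaranteed to terminate for player-specific payoffs, so the max_steps cap guards against cycling in this particular update order.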
{"url":"http://mathoverflow.net/questions/57579/existence-of-a-sink-in-directed-graphs-with-a-certain-structure","timestamp":"2014-04-18T08:44:18Z","content_type":null,"content_length":"55399","record_id":"<urn:uuid:b8b423c4-930f-4b79-baab-2741c962a5a9>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by Anonymous on Wednesday, February 27, 2008 at 11:08am. This question makes reference to an orthonormal basis (i,j), and an origin O. 1. Consider the triangle ABC, with vertices A(0,-2), B(9,1), C(-1,11). a) a cartesian equation of the altitude from C; b) a cartesian equation of the circle which has BC as a diameter; c) a cartesian equation of the circle with C and the line through A and B as a tangent d) the points of intersection of the circles described in b) and c) ?*cartesian equation is y=mx+c Please help. I really don't get this. • Math - drwls, Wednesday, February 27, 2008 at 11:41am Not all Cartesian equations are straight lines. It is any equation that uses the variables x and y. For example, b) and c) are circles. For (a), they want the equation of a line through C that is perpendicular to the side c (from A to B) The line from A to B has slope m = (1+2)/(9-0) = 1/3 Therefore the altitude has slope -3. The equation for the altitude from C is (y-11)= -3(x+1) y = -3x + 8 b) BC diameter has a center at (4,6). and that is the center of the circuscribed circle . The length of the diameter is sqrt[(10^2 + 10^2)] = sqrt 100 = 10 sqrt 10. The radius of the circle around it is 5 sqrt10 From that information, the equation of the circle is (x-4)^2 + (y-6)^2 = [5 sqrt10]^2 = 50 c) I think you mean "circle with center at C" and line through A and B as tangent. Get the equation of that line and determine its distance from C. You are going to have to take it from here. It's time for you to practice what you've learned. Someone will be glad to critique your work • Math - Reiny, Wednesday, February 27, 2008 at 11:47am Did you draw a diagram? Most people are "visual" and it will make the question easier to see. a) the altitude must meet AB and must be perpendicular to AB so slope of AB = 3/9 = 1/3 which makes the slope of the altitude -3 we know the altitude has slope -3 and must pass through (-1,11), so.... 11 = -3(-1) + b, (I am using y = mx+b) altitude equation: y = -3x + 8 b) are you familiar with the general equation of the circle: (x-h)^2 + (y-k)^2 = r^2, where the centre is (h,k) and the radius is r ?? if so, then the centre must be the midpoint of BC which is (4,6) so equation is (x-4)^2 +(y-6)^2 = r^2 but (9,1) lies on our circle, so then (x-4)^2 +(y-6)^2 = 50 c)If AB is to be a tangent of the circle with centre at C, then the altitude we found in a) must be the radius of our circle. (Can you see how important a diagram is??) I hope you have seen the formula to find the distance between a point (p,q) and the line Ax + By + C = 0 it says Dist = │Ap + Bq + C│/(A^2+B^2) First we need the equation of AB which in general form is x - 3y - 6 = 0 so radius of our circle is │1(-1) + (-3)(11) - 6│/√(1+9) = 40/√10 equation of circle: (x+1)^2 + (y-11)^2 = 40^2/√10^2 (x+1)^2 + (y-11)^2 = 1600/10 = 160 d) I will let you do that one, here is the method 1. expand each of the two circle equations, each will contain an x^2 and a y^2 term 2. subtract one equation from the other, the square terms will drop away, leaving an equation in x and y. 3. solve for either x or y, whichever seems easier 4. substitute that back into the first circle equation 5. You should get two solutions, sub that back into the x and y equation you got in step 2. Good luck Let me know if it worked for you • Math (correction) - drwls, Wednesday, February 27, 2008 at 11:47am This is what I should have written: b) BC diameter has a center at (4,6). and that is the center of the circumscribed circle . 
The length of the diameter is sqrt[(10^2 + 10^2)] = sqrt 200 = 10 sqrt 2. The radius of the circle around the diameter is 5 sqrt 2. From that information, the equation of the circle is (x-4)^2 + (y-6)^2 = [5 sqrt 2]^2 = 50 □ Math (correction) - Reiny, Wednesday, February 27, 2008 at 11:56am drwls, I was wondering what a "circuscribed circle" was, lol not as serious though as several of my students who somehow wanted to circumcise a circle. BTW, I wish somebody could come up with a way to avoid two or more tutors working on the same problem at the same time. ☆ To Reiny - DrBob222, Wednesday, February 27, 2008 at 1:02pm The English/social studies/history et al tutors on this board suggested at the start of the school year that they might avoid two people doing a great deal of work on one question as follows: For a question involving proofing a paper or something similar in which a lot of work was required, that the first tutor to start on the project post a note that s/he is working on the response and will post the response later. That way, other tutors know the problem is being taken care of and will move on to another subject/question. You might email Writeacher or Ms Sue for more details. □ Math (to Reiny et al) - drwls, Wednesday, February 27, 2008 at 1:48pm I'd rather not IM a lot of people to tell them what I'm working on, though doing so can avoid multiple answers and wasted effort. I answer most questions late at night when no one else is around, anyway. I'm quite often wrong with my sloppy math, as you know; so another point of view helps. When the answers are short, not much effort is wasted.
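For part d), the elimination method outlined above can be checked symbolically. A sketch (assuming Python with sympy; the two circle equations are the ones obtained in parts b) and c)):

```python
from sympy import symbols, Eq, expand, solve

x, y = symbols("x y")

# Circle from part b): centre (4, 6), r^2 = 50
c1 = Eq((x - 4)**2 + (y - 6)**2, 50)
# Circle from part c): centre (-1, 11), r^2 = 160
c2 = Eq((x + 1)**2 + (y - 11)**2, 160)

# Steps 1-2: expand and subtract; the x^2 and y^2 terms cancel,
# leaving a linear relation between x and y.
difference = expand((c1.lhs - c1.rhs) - (c2.lhs - c2.rhs))
print(Eq(difference, 0))

# Steps 3-5: solve the system; the two intersection points
# come out as (3, -1) and (11, 7).
print(solve([c1, c2], [x, y]))
```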
{"url":"http://www.jiskha.com/display.cgi?id=1204128537","timestamp":"2014-04-19T11:43:02Z","content_type":null,"content_length":"14874","record_id":"<urn:uuid:5b09e363-19b9-4359-8537-56da2b7dda4f>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
Math K - 6 / Math K - 6 By the end of kindergarten, students understand small numbers, quantities, and simple shapes in their everyday environment. They count, compare, describe and sort objects, and develop a sense of properties and patterns. Number Sense 1.0 Students understand the relationship between numbers and quantities (i.e., that a set of objects has the same number of objects in different situations regardless of its position or arrangement): 1.1 Compare two or more sets of objects (up to ten objects in each group) and identify which set is equal to, more than, or less than the other. 1.2 Count, recognize, represent, name, and order a number of objects (up to 30). 1.3 Know that the larger numbers describe sets with more objects in them than the smaller numbers have. 2.0 Students understand and describe simple additions and subtractions: 2.1 Use concrete objects to determine the answers to addition and subtraction problems (for two numbers that are each less than 10). 3.0 Students use estimation strategies in computation and problem solving that involve numbers that use the ones and tens 3.1 Recognize when an estimate is reasonable. Algebra and Functions 1.0 Students sort and classify objects: 1.1 Identify, sort, and classify objects by attribute and identify objects that do not belong to a particular group (e.g., all these balls are green, those are red). Measurement and Geometry 1.0 Students understand the concept of time and units to measure it; they understand that objects have properties, such as length, weight, and capacity, and that comparisons may be made by referring to those properties: 1.1 Compare the length, weight, and capacity of objects by making direct comparisons with reference objects (e.g., note which object is shorter, longer, taller, lighter, heavier, or holds more). 1.2 Demonstrate an understanding of concepts of time (e.g., morning, afternoon, evening, today, yesterday, tomorrow, week, year) and tools that measure time (e.g., clock, calendar). 1.3 Name the days of the week. 1.4 Identify the time (to the nearest hour) of everyday events (e.g., lunch time is 12 o'clock; bedtime is 8 o'clock at night). 2.0 Students identify common objects in their environment and describe the geometric features: 2.1 Identify and describe common geometric objects (e.g., circle, triangle, square, rectangle, cube, sphere, cone). 2.2 Compare familiar plane and solid objects by common attributes (e.g., position, shape, size, roundness, number of Statistics, Data Analysis, and Probability 1.0 Students collect information about objects and events in their environment: 1.1 Pose information questions; collect data; and record the results using objects, pictures, and picture graphs. 1.2 Identify, describe, and extend simple patterns (such as circles or triangles) by referring to their shapes, sizes, or Mathematical Reasoning 1.0 Students make decisions about how to set up a problem: 1.1 Determine the approach, materials, and strategies to be used. 1.2 Use tools and strategies, such as manipulatives or sketches, to model problems. 2.0 Students solve problems in reasonable ways and justify their reasoning: 2.1 Explain the reasoning used with concrete objects and/ or pictorial representations. 2.2 Make precise calculations and check the validity of the results in the context of the problem. By the end of grade one, students understand and use the concept of ones and tens in the place value number system. Students add and subtract small numbers with ease. 
They measure with simple units and locate objects in space. They describe data and analyze and solve simple problems. Number Sense 1.0 Students understand and use numbers up to 100: 1.1 Count, read, and write whole numbers to 100. 1.2 Compare and order whole numbers to 100 by using the symbols for less than, equal to, or greater than (<, =, >). 1.3 Represent equivalent forms of the same number through the use of physical models, diagrams, and number expressions (to 20) (e.g., 8 may be represented as 4 + 4, 5 + 3, 2 + 2 + 2 + 2, 10 -2, 11 -3). 1.4 Count and group object in ones and tens (e.g., three groups of 10 and 4 equals 34, or 30 + 4). 1.5 Identify and know the value of coins and show different combinations of coins that equal the same value. 2.0 Students demonstrate the meaning of addition and subtraction and use these operations to solve problems: 2.1 Know the addition facts (sums to 20) and the corresponding subtraction facts and commit them to memory. 2.2 Use the inverse relationship between addition and subtraction to solve problems. 2.3 Identify one more than, one less than, 10 more than, and 10 less than a given number. 2.4 Count by 2s, 5s, and 10s to 100. 2.5 Show the meaning of addition (putting together, increasing) and subtraction (taking away, comparing, finding the 2.6 Solve addition and subtraction problems with one-and two-digit numbers (e.g., 5 + 58 = __). 2.7 Find the sum of three one-digit numbers. 3.0 Students use estimation strategies in computation and problem solving that involve numbers that use the ones, tens, and hundreds places: 3.1 Make reasonable estimates when comparing larger or smaller numbers. Algebra and Functions 1.0 Students use number sentences with operational symbols and expressions to solve problems: 1.1 Write and solve number sentences from problem situations that express relationships involving addition and 1.2 Understand the meaning of the symbols +, -, =. 1.3 Create problem situations that might lead to given number sentences involving addition and subtraction. Measurement and Geometry 1.0 Students use direct comparison and nonstandard units to describe the measurements of objects: 1.1 Compare the length, weight, and volume of two or more objects by using direct comparison or a nonstandard unit. 1.2 Tell time to the nearest half hour and relate time to events (e.g., before/after, shorter/longer). 2.0 Students identify common geometric figures, classify them by common attributes, and describe their relative position or their location in space: 2.1 Identify, describe, and compare triangles, rectangles, squares, and circles, including the faces of three-dimensional 2.2 Classify familiar plane and solid objects by common attributes, such as color, position, shape, size, roundness, or number of corners, and explain which attributes are being used for classification. 2.3 Give and follow directions about location. 2.4 Arrange and describe objects in space by proximity, position, and direction (e.g., near, far, below, above, up, down, behind, in front of, next to, left or right of). Statistics, Data Analysis, and Probability 1.0 Students organize, represent, and compare data by category on simple graphs and charts: 1.1 Sort objects and data by common attributes and describe the categories. 1.2 Represent and compare data (e.g., largest, smallest, most often, least often) by using pictures, bar graphs, tally charts, and picture graphs. 
2.0 Students sort objects and create and describe patterns by numbers, shapes, sizes, rhythms, or colors: 2.1 Describe, extend, and explain ways to get to a next element in simple repeating patterns (e.g., rhythmic, numeric, color, and shape). Mathematical Reasoning 1.0 Students make decisions about how to set up a problem: 1.1 Determine the approach, materials, and strategies to be used. 1.2 Use tools, such as manipulatives or sketches, to model problems. 2.0 Students solve problems and justify their reasoning: 2.1 Explain the reasoning used and justify the procedures selected. 2.2 Make precise calculations and check the validity of the results from the context of the problem. 3.0 Students note connections between one problem and another. By the end of grade two, students understand place value and number relationships in addition and subtraction, and they use simple concepts of multiplication. They measure quantities with appropriate units. They classify shapes and see relationships among them by paying attention to their geometric attributes. They collect and analyze data and verify the Number Sense 1.0 Students understand the relationship between numbers, quantities, and place value in whole numbers up to 1,000: 1.1 Count, read, and write whole numbers to 1,000 and identify the place value for each digit. 1.2 Use words, models, and expanded forms (e.g., 45 = 4 tens + 5) to represent numbers (to 1,000). 1.3 Order and compare whole numbers to 1,000 by using the symbols <, =, >. 2.0 Students estimate, calculate, and solve problems involving addition andsubtraction of two-and three-digit numbers: 2.1 Understand and use the inverse relationship between addition and subtraction (e.g., an opposite number sentence for 8 + 6 = 14 is 14 - 6 = 8) to solve problems and check solutions. 2.2 Find the sum or difference of two whole numbers up to three digits long. 2.3 Use mental arithmetic to find the sum or difference of two two-digit numbers. 3.0 Students model and solve simple problems involving multiplication and division: 3.1 Use repeated addition, arrays, and counting by multiples to do multiplication. 3.2 Use repeated subtraction, equal sharing, and forming equal groups with remainders to do division. 3.3 Know the multiplication tables of 2s, 5s, and 10s (to "times 10") and commit them to memory. 4.0 Students understand that fractions and decimals may refer to parts of a set and parts of a whole: 4.1 Recognize, name, and compare unit fractions from 1/12 to 1/2. 4.2 Recognize fractions of a whole and parts of a group (e.g., one-fourth of a pie, two-thirds of 15 balls). 4.3 Know that when all fractional parts are included, such as four-fourths, the result is equal to the whole and to one. 5.0 Students model and solve problems by representing, adding, and subtracting amounts of money: 5.1 Solve problems using combinations of coins and bills. 5.2 Know and use the decimal notation and the dollar and cent symbols for money. 6.0 Students use estimation strategies in computation and problem solving that involve numbers that use the ones, tens, hundreds, and thousands places: 6.1 Recognize when an estimate is reasonable in measurements (e.g., closest inch). Algebra and Functions 1.0 Students model, represent, and interpret number relationships to create and solve problems involving addition and 1.1 Use the commutative and associative rules to simplify mental calculations and to check results. 1.2 Relate problem situations to number sentences involving addition and subtraction. 
1.3 Solve addition and subtraction problems by using data from simple charts, picture graphs, and number sentences. Measurement and Geometry 1.0 Students understand that measurement is accomplished by identifying a unit of measure, iterating (repeating) that unit, and comparing it to the item to be measured: 1.1 Measure the length of objects by iterating (repeating) a nonstandard or standard unit. 1.2 Use different units to measure the same object and predict whether the measure will be greater or smaller when a different unit is used. 1.3 Measure the length of an object to the nearest inch and/ or centimeter. 1.4 Tell time to the nearest quarter hour and know relationships of time (e.g., minutes in an hour, days in a month, weeks in a year). 1.5 Determine the duration of intervals of time in hours (e.g., 11:00 a.m. to 4:00 p.m.). 2.0 Students identify and describe the attributes of common figures in the plane and of common objects in space: 2.1 Describe and classify plane and solid geometric shapes (e.g., circle, triangle, square, rectangle, sphere, pyramid, cube, rectangular prism) according to the number and shape of faces, edges, and vertices. 2.2 Put shapes together and take them apart to form other shapes (e.g., two congruent right triangles can be arranged to form a rectangle). Statistics, Data Analysis, and Probability 1.0 Students collect numerical data and record, organize, display, and interpret the data on bar graphs and other 1.1 Record numerical data in systematic ways, keeping track of what has been counted. 1.2 Represent the same data set in more than one way (e.g., bar graphs and charts with tallies). 1.3 Identify features of data sets (range and mode). 1.4 Ask and answer simple questions related to data representations. 2.0 Students demonstrate an understanding of patterns and how patterns grow and describe them in general ways: 2.1 Recognize, describe, and extend patterns and determine a next term in linear patterns (e.g., 4, 8, 12 ...; the number of ears on one horse, two horses, three horses, four horses). 2.2 Solve problems involving simple number patterns. Mathematical Reasoning 1.0 Students make decisions about how to set up a problem: 1.1 Determine the approach, materials, and strategies to be used. 1.2 Use tools, such as manipulatives or sketches, to model problems. 2.0 Students solve problems and justify their reasoning: 2.1 Defend the reasoning used and justify the procedures selected. 2.2 Make precise calculations and check the validity of the results in the context of the problem. 3.0 Students note connections between one problem and another. By the end of grade three, students deepen their understanding of place value and their understanding of and skill with addition, subtraction, multiplication, and division of whole numbers. Students estimate, measure, and describe objects in space. They use patterns to help solve problems. They represent number relationships and conduct simple probability Number Sense 1.0 Students understand the place value of whole numbers: 1.1 Count, read, and write whole numbers to 10,000. 1.2 Compare and order whole numbers to 10,000. 1.3 Identify the place value for each digit in numbers to 10,000. 1.4 Round off numbers to 10,000 to the nearest ten, hundred, and thousand. 1.5 Use expanded notation to represent numbers (e.g., 3,206 = 3,000 + 200 + 6). 2.0 Students calculate and solve problems involving addition, subtraction, multiplication, and division: 2.1 Find the sum or difference of two whole numbers between 0 and 10,000. 
2.2 Memorize to automaticity the multiplication table for numbers between 1 and 10. 2.3 Use the inverse relationship of multiplication and division to compute and check results. 2.4 Solve simple problems involving multiplication of multidigit numbers by one-digit numbers (3,671 x 3 = __). 2.5 Solve division problems in which a multidigit number is evenly divided by a one-digit number (135 ÷ 5 = __). 2.6 Understand the special properties of 0 and 1 in multiplication and division. 2.7 Determine the unit cost when given the total cost and number of units. 2.8 Solve problems that require two or more of the skills mentioned above. 3.0 Students understand the relationship between whole numbers, simple fractions, and decimals: 3.1 Compare fractions represented by drawings or concrete materials to show equivalency and to add and subtract simple fractions in context (e.g., 1/2 of a pizza is the same amount as 2/4 of another pizza that is the same size; show that 3/8 is larger than 1/4). 3.2 Add and subtract simple fractions (e.g., determine that 1/8 + 3/8 is the same as 1/2). 3.3 Solve problems involving addition, subtraction, multiplication, and division of money amounts in decimal notation and multiply and divide money amounts in decimal notation by using whole-number multipliers and divisors. 3.4 Know and understand that fractions and decimals are two different representations of the same concept (e.g., 50 cents is 1/2 of a dollar, 75 cents is 3/4 of a dollar). Algebra and Functions 1.0 Students select appropriate symbols, operations, and properties to represent, describe, simplify, and solve simple number relationships: 1.1 Represent relationships of quantities in the form of mathematical expressions, equations, or inequalities. 1.2 Solve problems involving numeric equations or inequalities. 1.3 Select appropriate operational and relational symbols to make an expression true (e.g., if 4 __ 3 = 12, what operational symbol goes in the blank?). 1.4 Express simple unit conversions in symbolic form (e.g., __ inches = __ feet x 12). 1.5 Recognize and use the commutative and associative properties of multiplication (e.g., if 5 x 7 = 35, then what is 7 x 5? and if 5 x 7 x 3 = 105, then what is 7 x 3 x 5?). 2.0 Students represent simple functional relationships: 2.1 Solve simple problems involving a functional relationship between two quantities (e.g., find the total cost of multiple items given the cost per unit). 2.2 Extend and recognize a linear pattern by its rules (e.g., the number of legs on a given number of horses may be calculated by counting by 4s or by multiplying the number of horses by 4). Measurement and Geometry 1.0 Students choose and use appropriate units and measurement tools to quantify the properties of objects: 1.1 Choose the appropriate tools and units (metric and U.S.) and estimate and measure the length, liquid volume, and weight/mass of given objects. 1.2 Estimate or determine the area and volume of solid figures by covering them with squares or by counting the number of cubes that would fill them. 1.3 Find the perimeter of a polygon with integer sides. 1.4 Carry out simple unit conversions within a system of measurement (e.g., centimeters and meters, hours and minutes). 2.0 Students describe and compare the attributes of plane and solid geometric figures and use their understanding to show relationships and solve problems: 2.1 Identify, describe, and classify polygons (including pentagons, hexagons, and octagons). 
2.2 Identify attributes of triangles (e.g., two equal sides for the isosceles triangle, three equal sides for the equilateral triangle, right angle for the right triangle). 2.3 Identify attributes of quadrilaterals (e.g., parallel sides for the parallelogram, right angles for the rectangle, equal sides and right angles for the square). 2.4 Identify right angles in geometric figures or in appropriate objects and determine whether other angles are greater or less than a right angle. 2.5 Identify, describe, and classify common three-dimensional geometric objects (e.g.,cube, rectangular solid, sphere, prism, pyramid, cone, cylinder). 2.6 Identify common solid objects that are the components needed to make a morecomplex solid object. Statistics, Data Analysis, and Probability 1.0 Students conduct simple probability experiments by determining the number of possible outcomes and make simple 1.1 Identify whether common events are certain, likely, unlikely, or improbable. 1.2 Record the possible outcomes for a simple event (e.g., tossing a coin) and systematically keep track of the outcomes when the event is repeated many times. 1.3 Summarize and display the results of probability experiments in a clear and organized way (e.g., use a bar graph or a line plot). 1.4 Use the results of probability experiments to predict future events (e.g., use a line plot to predict the temperature forecast for the next day). Mathematical Reasoning 1.0 Students make decisions about how to approach problems: 1.1 Analyze problems by identifying relationships, distinguishing relevant from irrelevant information, sequencing and prioritizing information, and observing patterns. 1.2 Determine when and how to break a problem into simpler parts. 2.0 Students use strategies, skills, and concepts in finding solutions: 2.1 Use estimation to verify the reasonableness of calculated results. 2.2 Apply strategies and results from simpler problems to more complex problems. 2.3 Use a variety of methods, such as words, numbers, symbols, charts, graphs, tables, diagrams, and models, to explain mathematical reasoning. 2.4 Express the solution clearly and logically by using the appropriate mathematical notation and terms and clear language; support solutions with evidence in both verbal and symbolic work. 2.5 Indicate the relative advantages of exact and approximate solutions to problems and give answers to a specified degree of accuracy. 2.6 Make precise calculations and check the validity of the results from the context of the problem. 3.0 Students move beyond a particular problem by generalizing to other 3.1 Evaluate the reasonableness of the solution in the context of the original situation. 3.2 Note the method of deriving the solution and demonstrate a conceptual understanding of the derivation by solving similar problems. 3.3 Develop generalizations of the results obtained and apply them in other circumstances. By the end of grade four, students understand large numbers and addition, subtraction, multiplication, and division of whole numbers. They describe and compare simple fractions and decimals. They understand the properties of, and the relationships between, plane geometric figures. They collect, represent, and analyze data to answer questions. Number Sense 1.0 Students understand the place value of whole numbers and decimals to two decimal places and how whole numbers and decimals relate to simple fractions. Students use the concepts of negative numbers: 1.1 Read and write whole numbers in the millions. 
1.2 Order and compare whole numbers and decimals to two decimal places. 1.3 Round whole numbers through the millions to the nearest ten, hundred, thousand, ten thousand, or hundred thousand. 1.4 Decide when a rounded solution is called for and explain why such a solution may be appropriate. 1.5 Explain different interpretations of fractions, for example, parts of a whole, parts of a set, and division of whole numbers by whole numbers; explain equivalents of fractions (see Standard 4.0). 1.6 Write tenths and hundredths in decimal and fraction notations and know the fraction and decimal equivalents for halves and fourths (e.g., 1/2 = 0.5 or .50; 7/4 = 1 3/4 = 1.75). 1.7 Write the fraction represented by a drawing of parts of a figure; represent a given fraction by using drawings; and relate a fraction to a simple decimal on a number line. 1.8 Use concepts of negative numbers (e.g., on a number line, in counting, in temperature, in "owing"). 1.9 Identify on a number line the relative position of positive fractions, positive mixed numbers, and positive decimals to two decimal places. 2.0 Students extend their use and understanding of whole numbers to the addition and subtraction of simple decimals: 2.1 Estimate and compute the sum or difference of whole numbers and positive decimals to two places. 2.2 Round two-place decimals to one decimal or the nearest whole number and judge the reasonableness of the rounded 3.0 Students solve problems involving addition, subtraction, multiplication, and division of whole numbers and understand the relationships among the operations: 3.1 Demonstrate an understanding of, and the ability to use, standard algorithms for the addition and subtraction of multidigit numbers. 3.2 Demonstrate an understanding of, and the ability to use, standard algorithms for multiplying a multidigit number by a two-digit number and for dividing a multidigit number by a one-digit number; use relationships between them to simplify computations and to check results. 3.3 Solve problems involving multiplication of multidigit numbers by two-digit numbers. 3.4 Solve problems involving division of multidigit numbers by one-digit numbers. 4.0 Students know how to factor small whole numbers: 4.1 Understand that many whole numbers break down in different ways (e.g., 12 = 4 x 3 = 2 x 6 = 2 x 2 x 3). 4.2 Know that numbers such as 2, 3, 5, 7, and 11 do not have any factors except 1 and themselves and that such numbers are called prime numbers. Algebra and Functions 1.0 Students use and interpret variables, mathematical symbols, and properties to write and simplify expressions and 1.1 Use letters, boxes, or other symbols to stand for any number in simple expressionsor equations (e.g., demonstrate an understanding and the use of the concept of a variable). 1.2 Interpret and evaluate mathematical expressions that now use parentheses. 1.3 Use parentheses to indicate which operation to perform first when writing expressions containing more than two terms and different operations. 1.4 Use and interpret formulas (e.g., area = length x width or A = lw) to answer questions about quantities and their 1.5 Understand that an equation such as y = 3x + 5 is a prescription for determining a second number when a first number is given. 2.0 Students know how to manipulate equations: 2.1 Know and understand that equals added to equals are equal. 2.2 Know and understand that equals multiplied by equals are equal. 
Measurement and Geometry 1.0 Students understand perimeter and area: 1.1 Measure the area of rectangular shapes by using appropriate units, such as square centimeter (cm2), square meter (m2), square kilometer (km2), square inch (in2), square yard (yd2), or square mile (mi2). 1.2 Recognize that rectangles that have the same area can have different perimeters. 1.3 Understand that rectangles that have the same perimeter can have different areas. 1.4 Understand and use formulas to solve problems involving perimeters and areas of rectangles and squares. Use those formulas to find the areas of more complex figures by dividing the figures into basic shapes. 2.0 Students use two-dimensional coordinate grids to represent points and graph lines and simple figures: 2.1 Draw the points corresponding to linear relationships on graph paper (e.g., draw 10 points on the graph of the equation y = 3x and connect them by using a straight line). 2.2 Understand that the length of a horizontal line segment equals the difference of the x-coordinates. 2.3 Understand that the length of a vertical line segment equals the difference of the y-coordinates. 3.0 Students demonstrate an understanding of plane and solid geometric objects and use this knowledge to show relationships and solve problems: 3.1 Identify lines that are parallel and perpendicular. 3.2 Identify the radius and diameter of a circle. 3.3 Identify congruent figures. 3.4 Identify figures that have bilateral and rotational symmetry. 3.5 Know the definitions of a right angle, an acute angle, and an obtuse angle. Understand that 90°, 180°, 270°, and 360° are associated, respectively, with 1/4, 1/2, 3/4, and full turns. 3.6 Visualize, describe, and make models of geometric solids (e.g., prisms, pyramids) in terms of the number and shape of faces, edges, and vertices; interpret two-dimensional representations of three-dimensional objects; and draw patterns (of faces) for a solid that, when cut and folded, will make a model of the solid. 3.7 Know the definitions of different triangles (e.g., equilateral, isosceles, scalene) and identify their attributes. 3.8 Know the definition of different quadrilaterals (e.g., rhombus, square, rectangle, parallelogram, trapezoid). Statistics, Data Analysis, and Probability 1.0 Students organize, represent, and interpret numerical and categorical data and clearly communicate their findings: 1.1 Formulate survey questions; systematically collect and represent data on a number line; and coordinate graphs, tables, and charts. 1.2 Identify the mode(s) for sets of categorical data and the mode(s), median, and any apparent outliers for numerical data sets. 1.3 Interpret one-and two-variable data graphs to answer questions about a situation. 2.0 Students make predictions for simple probability situations: 2.1 Represent all possible outcomes for a simple probability situation in an organized way (e.g., tables, grids, tree 2.2 Express outcomes of experimental probability situations verbally and numerically (e.g., 3 out of 4; 3 /4). Mathematical Reasoning 1.0 Students make decisions about how to approach problems: 1.1 Analyze problems by identifying relationships, distinguishing relevant from irrelevant information, sequencing and prioritizing information, and observing patterns. 1.2 Determine when and how to break a problem into simpler parts. 2.0 Students use strategies, skills, and concepts in finding solutions: 2.1 Use estimation to verify the reasonableness of calculated results. 
2.2 Apply strategies and results from simpler problems to more complex problems. 2.3 Use a variety of methods, such as words, numbers, symbols, charts, graphs, tables, diagrams, and models, to explain mathematical reasoning. 2.4 Express the solution clearly and logically by using the appropriate mathematical notation and terms and clear language; support solutions with evidence in both verbal and symbolic work. 2.5 Indicate the relative advantages of exact and approximate solutions to problems and give answers to a specified degree of accuracy. 2.6 Make precise calculations and check the validity of the results from the context of the problem. 3.0 Students move beyond a particular problem by generalizing to other situations: 3.1 Evaluate the reasonableness of the solution in the context of the original situation. 3.2 Note the method of deriving the solution and demonstrate a conceptual understanding of the derivation by solving similar problems. 3.3 Develop generalizations of the results obtained and apply them in other circumstances. By the end of grade five, students increase their facility with the four basic arithmetic operations applied to fractions, decimals, and positive and negative numbers. They know and use common measuring units to determine length and area and know and use formulas to determine the volume of simple geometric figures. Students know the concept of angle measurement and use a protractor and compass to solve problems. They use grids, tables, graphs, and charts to record and analyze data. Number Sense 1.0 Students compute with very large and very small numbers, positive integers, decimals, and fractions and understand the relationship between decimals, fractions, and percents. They understand the relative magnitudes of numbers: 1.1 Estimate, round, and manipulate very large (e.g., millions) and very small (e.g., thousandths) numbers. 1.2 Interpret percents as a part of a hundred; find decimal and percent equivalents for common fractions and explain why they represent the same value; compute a given percent of a whole number. 1.3 Understand and compute positive integer powers of nonnegative integers; compute examples as repeated multiplication. 1.4 Determine the prime factors of all numbers through 50 and write the numbers as the product of their prime factors by using exponents to show multiples of a factor (e.g., 24 = 2 x 2 x 2 x 3 = 2³ x 3). 1.5 Identify and represent on a number line decimals, fractions, mixed numbers, and positive and negative integers. 2.0 Students perform calculations and solve problems involving addition, subtraction, and simple multiplication and division of fractions and decimals: 2.1 Add, subtract, multiply, and divide with decimals; add with negative integers; subtract positive integers from negative integers; and verify the reasonableness of the results. 2.2 Demonstrate proficiency with division, including division with positive decimals and long division with multidigit divisors. 2.3 Solve simple problems, including ones arising in concrete situations, involving the addition and subtraction of fractions and mixed numbers (like and unlike denominators of 20 or less), and express answers in the simplest form. 2.4 Understand the concept of multiplication and division of fractions. 2.5 Compute and perform simple multiplication and division of fractions and apply these procedures to solving problems.
Algebra and Functions 1.0 Students use variables in simple expressions, compute the value of the expression for specific values of the variable, and plot and interpret the results: 1.1 Use information taken from a graph or equation to answer questions about a problem situation. 1.2 Use a letter to represent an unknown number; write and evaluate simple algebraic expressions in one variable by 1.3 Know and use the distributive property in equations and expressions with variables. 1.4 Identify and graph ordered pairs in the four quadrants of the coordinate plane. 1.5 Solve problems involving linear functions with integer values; write the equation; and graph the resulting ordered pairs of integers on a grid. Measurement and Geometry 1.0 Students understand and compute the volumes and areas of simple objects: 1.1 Derive and use the formula for the area of a triangle and of a parallelogram by comparing it with the formula for the area of a rectangle (i.e., two of the same triangles make a parallelogram with twice the area; a parallelogram is compared with a rectangle of the same area by cutting and pasting a right triangle on the parallelogram). 1.2 Construct a cube and rectangular box from two-dimensional patterns and use these patterns to compute the surface area for these objects. 1.3 Understand the concept of volume and use the appropriate units in common measuring systems (i.e., cubic centimeter [cm3], cubic meter [m3], cubic inch [in3], cubic yard [yd3]) to compute the volume of rectangular solids. 1.4 Differentiate between, and use appropriate units of measures for, two-and three-dimensional objects (i.e., find the perimeter, area, volume). 2.0 Students identify, describe, and classify the properties of, and the relationships between, plane and solid geometric 2.1 Measure, identify, and draw angles, perpendicular and parallel lines, rectangles, and triangles by using appropriate tools (e.g., straightedge, ruler, compass, protractor, drawing software). 2.2 Know that the sum of the angles of any triangle is 180° and the sum of the angles of any quadrilateral is 360° and use this information to solve problems. 2.3 Visualize and draw two-dimensional views of three-dimensional objects made from rectangular solids. Statistics, Data Analysis, and Probability 1.0 Students display, analyze, compare, and interpret different data sets, including data sets of different sizes: 1.1 Know the concepts of mean, median, and mode; compute and compare simple examples to show that they may 1.2 Organize and display single-variable data in appropriate graphs and representations (e.g., histogram, circle graphs) and explain which types of graphs are appropriate for various data sets. 1.3 Use fractions and percentages to compare data sets of different sizes. 1.4 Identify ordered pairs of data from a graph and interpret the meaning of the data in terms of the situation depicted by the graph. 1.5 Know how to write ordered pairs correctly; for example, (x, y). Mathematical Reasoning 1.0 Students make decisions about how to approach problems: 1.1 Analyze problems by identifying relationships, distinguishing relevant from irrelevant information, sequencing and prioritizing information, and observing patterns. 1.2 Determine when and how to break a problem into simpler parts. 2.0 Students use strategies, skills, and concepts in finding solutions: 2.1 Use estimation to verify the reasonableness of calculated results. 2.2 Apply strategies and results from simpler problems to more complex problems. 
2.3 Use a variety of methods, such as words, numbers, symbols, charts, graphs, tables, diagrams, and models, to explain mathematical reasoning. 2.4 Express the solution clearly and logically by using the appropriate mathematical notation and terms and clear language; support solutions with evidence in both verbal and symbolic work. 2.5 Indicate the relative advantages of exact and approximate solutions to problems and give answers to a specified degree of accuracy. 2.6 Make precise calculations and check the validity of the results from the context of the problem. 3.0 Students move beyond a particular problem by generalizing to other situations: 3.1 Evaluate the reasonableness of the solution in the context of the original situation. 3.2 Note the method of deriving the solution and demonstrate a conceptual understanding of the derivation by solving similar problems. 3.3 Develop generalizations of the results obtained and apply them in other circumstances. By the end of grade six, students have mastered the four arithmetic operations with whole numbers, positive fractions, positive decimals, and positive and negative integers; they accurately compute and solve problems. They apply their knowledge to statistics and probability. Students understand the concepts of mean, median, and mode of data sets and how to calculate the range. They analyze data and sampling processes for possible bias and misleading conclusions; they use addition and multiplication of fractions routinely to calculate the probabilities for compound events. Students conceptually understand and work with ratios and proportions; they compute percentages (e.g., tax, tips, interest). Students know about π and the formulas for the circumference and area of a circle. They use letters for numbers in formulas involving geometric shapes and in ratios to represent an unknown part of an expression. They solve one-step linear equations. Number Sense 1.0 Students compare and order positive and negative fractions, decimals, and mixed numbers. Students solve problems involving fractions, ratios, proportions, and percentages: 1.1 Compare and order positive and negative fractions, decimals, and mixed numbers and place them on a number line. 1.2 Interpret and use ratios in different contexts (e.g., batting averages, miles per hour) to show the relative sizes of two quantities, using appropriate notations (a/b, a to b, a:b). 1.3 Use proportions to solve problems (e.g., determine the value of N if 4/7 = N/21, find the length of a side of a polygon similar to a known polygon). Use cross-multiplication as a method for solving such problems, understanding it as the multiplication of both sides of an equation by a multiplicative inverse. 1.4 Calculate given percentages of quantities and solve problems involving discounts at sales, interest earned, and tips. 2.0 Students calculate and solve problems involving addition, subtraction, multiplication, and division: 2.1 Solve problems involving addition, subtraction, multiplication, and division of positive fractions and explain why a particular operation was used for a given situation. 2.2 Explain the meaning of multiplication and division of positive fractions and perform the calculations (e.g., 5/8 ÷ 15/16 = 5/8 x 16/15 = 2/3). 2.3 Solve addition, subtraction, multiplication, and division problems, including those arising in concrete situations, that use positive and negative integers and combinations of these operations.
2.4 Determine the least common multiple and the greatest common divisor of whole numbers; use them to solve problems with fractions (e.g., to find a common denominator to add two fractions or to find the reduced form for a fraction). Algebra and Functions 1.0 Students write verbal expressions and sentences as algebraic expressions and equations; they evaluate algebraic expressions, solve simple linear equations, and graph and interpret their results: 1.1 Write and solve one-step linear equations in one variable. 1.2 Write and evaluate an algebraic expression for a given situation, using up to three variables. 1.3 Apply algebraic order of operations and the commutative, associative, and distributive properties to evaluate expressions; and justify each step in the process. 1.4 Solve problems manually by using the correct order of operations or by using a scientific calculator. 2.0 Students analyze and use tables, graphs, and rules to solve problems involving rates and proportions: 2.1 Convert one unit of measurement to another (e.g., from feet to miles, from centimeters to inches). 2.2 Demonstrate an understanding that rate is a measure of one quantity per unit value of another quantity. 2.3 Solve problems involving rates, average speed, distance, and time. 3.0 Students investigate geometric patterns and describe them algebraically: 3.1 Use variables in expressions describing geometric quantities (e.g., P = 2w + 2l, A = 1/2bh, C = πd - the formulas for the perimeter of a rectangle, the area of a triangle, and the circumference of a circle, respectively). 3.2 Express in symbolic form simple relationships arising from geometry. Measurement and Geometry 1.0 Students deepen their understanding of the measurement of plane and solid shapes and use this understanding to solve problems: 1.1 Understand the concept of a constant such as π; know the formulas for the circumference and area of a circle. 1.2 Know common estimates of π (3.14; 22/7) and use these values to estimate and calculate the circumference and the area of circles; compare with actual measurements. 1.3 Know and use the formulas for the volume of triangular prisms and cylinders (area of base x height); compare these formulas and explain the similarity between them and the formula for the volume of a rectangular solid. 2.0 Students identify and describe the properties of two-dimensional figures: 2.1 Identify angles as vertical, adjacent, complementary, or supplementary and provide descriptions of these terms. 2.2 Use the properties of complementary and supplementary angles and the sum of the angles of a triangle to solve problems involving an unknown angle. 2.3 Draw quadrilaterals and triangles from given information about them (e.g., a quadrilateral having equal sides but no right angles, a right isosceles triangle). Statistics, Data Analysis, and Probability 1.0 Students compute and analyze statistical measurements for data sets: 1.1 Compute the range, mean, median, and mode of data sets. 1.2 Understand how additional data added to data sets may affect these computations of measures of central tendency. 1.3 Understand how the inclusion or exclusion of outliers affects measures of central tendency. 1.4 Know why a specific measure of central tendency (mean, median, mode) provides the most useful information in a given context.
2.0 Students use data samples of a population and describe the characteristics and limitations of the samples: 2.1 Compare different samples of a population with the data from the entire population and identify a situation in which it makes sense to use a sample. 2.2 Identify different ways of selecting a sample (e.g., convenience sampling, responses to a survey, random sampling) and which method makes a sample more representative for a population. 2.3 Analyze data displays and explain why the way in which the question was asked might have influenced the results obtained and why the way in which the results were displayed might have influenced the conclusions reached. 2.4 Identify data that represent sampling errors and explain why the sample (and the display) might be biased. 2.5 Identify claims based on statistical data and, in simple cases, evaluate the validity of the claims. 3.0 Students determine theoretical and experimental probabilities and use these to make predictions about events: 3.1 Represent all possible outcomes for compound events in an organized way (e.g., tables, grids, tree diagrams) and express the theoretical probability of each outcome. 3.2 Use data to estimate the probability of future events (e.g., batting averages or number of accidents per mile driven). 3.3 Represent probabilities as ratios, proportions, decimals between 0 and 1, and percentages between 0 and 100 and verify that the probabilities computed are reasonable; know that if P is the probability of an event, 1-P is the probability of an event not occurring. 3.4 Understand that the probability of either of two disjoint events occurring is the sum of the two individual probabilities and that the probability of one event following another, in independent trials, is the product of the two probabilities. 3.5 Understand the difference between independent and dependent events. Mathematical Reasoning 1.0 Students make decisions about how to approach problems: 1.1 Analyze problems by identifying relationships, distinguishing relevant from irrelevant information, identifying missing information, sequencing and prioritizing information, and observing patterns. 1.2 Formulate and justify mathematical conjectures based on a general description of the mathematical question or problem posed. 1.3 Determine when and how to break a problem into simpler parts. 2.0 Students use strategies, skills, and concepts in finding solutions: 2.1 Use estimation to verify the reasonableness of calculated results. 2.2 Apply strategies and results from simpler problems to more complex problems. 2.3 Estimate unknown quantities graphically and solve for them by using logical reasoning and arithmetic and algebraic 2.4 Use a variety of methods, such as words, numbers, symbols, charts, graphs, tables, diagrams, and models, to explain mathematical reasoning. 2.5 Express the solution clearly and logically by using the appropriate mathematical notation and terms and clear language; support solutions with evidence in both verbal and symbolic work. 2.6 Indicate the relative advantages of exact and approximate solutions to problems and give answers to a specified degree of accuracy. 2.7 Make precise calculations and check the validity of the results from the context of the problem. 3.0 Students move beyond a particular problem by generalizing to other situations: 3.1 Evaluate the reasonableness of the solution in the context of the original situation. 
3.2 Note the method of deriving the solution and demonstrate a conceptual understanding of the derivation by solving similar problems. 3.3 Develop generalizations of the results obtained and the strategies used and apply them in new problem situations.
{"url":"http://www.cnusd.k12.ca.us/domain/458","timestamp":"2014-04-19T19:35:13Z","content_type":null,"content_length":"144286","record_id":"<urn:uuid:f1def84b-139d-4819-85e6-6b9fa64aaf16>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
Can someone check this for me? (completing the square)
{"url":"http://openstudy.com/updates/52894881e4b0f4659e432438","timestamp":"2014-04-18T08:34:56Z","content_type":null,"content_length":"124404","record_id":"<urn:uuid:96ca44b3-e2ec-41b4-be7f-204b58cab0e1>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
Bonferroni-type inequalities with applications. (English) Zbl 0869.60014
Probability and Its Applications. New York, NY: Springer. ix, 269 p. DM 88.00; öS 642.40; sFr 77.50 (1996).
This is the first book devoted solely to the subject of Bonferroni-type inequalities. It is a useful guide to this exciting theory, and the reader can find there a large variety of classical and new problems, results, methods of proof and applications. The book contains ten chapters. The first four chapters are devoted to the main methods of proof of Bonferroni-type inequalities: the method of indicators, the method of polynomials, the geometric method and the linear programming method. The method of indicators proves a probabilistic inequality involving probabilities of Boolean functions by way of the equivalent non-probabilistic inequality involving the corresponding indicator functions. It is based on a celebrated theorem of Rényi (1958). Chapter I is devoted to this method and also includes inequalities known in the literature as graph-sieves. The method of polynomials (Chapter II) is justified by the observation that the validity of certain Bonferroni-type inequalities is equivalent to the validity of a corresponding collection of polynomial inequalities. This is the method best suited for finding new inequalities. The geometric method (Chapter III) is based on the geometric properties of the range of the binomial moments, which is a convex polytope. It provides sharp Bonferroni-type bounds which are not linear over the whole space. Optimal Bonferroni-type inequalities can alternatively be viewed as solutions of maximization (minimization) problems of linear programming. In Chapter IV the basic results of this theory are presented and an algorithm is used to derive Bonferroni-type inequalities. Chapter V is devoted to linear bounds on the multivariate distribution of the number of occurrences in several sequences of events. In fact, the multivariate Bonferroni-type inequalities are at a much less developed stage than the univariate ones. Yet a large number of interesting results obtained in recent years are unified in this chapter. Various applications to combinatorics, number theory, statistics and extreme value theory are considered in Chapters VI through IX. Special attention is paid to the applications of Bonferroni-type inequalities in extreme value theory. Limit theorems for maxima of random variables are given in different models, namely the classical model of i.i.d. random variables, the exchangeable model, the model where the random variables are independent but not identically distributed, and the graph-dependent model. Miscellaneous topics are discussed in the last Chapter X, such as the probability of occurrence for infinite sequences of events, quadratic inequalities, and Borel-Cantelli lemmas proved by Bonferroni-type bounds. At the end the authors list some modern fields of application developed in recent years. This book may be of interest to a wide readership since no previous knowledge of the subject matter is required. The necessary information is contained in an introduction section heading each chapter.
60E15 Inequalities in probability theory; stochastic orderings
60-02 Research monographs (probability theory)
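[For orientation, since the review does not state the inequalities themselves: the following is the standard textbook formulation and is not taken from the book under review. For events $A_1,\dots,A_n$ let $S_k=\sum_{1\le i_1<\dots<i_k\le n}P(A_{i_1}\cap\dots\cap A_{i_k})$ denote the binomial moments. The classical Bonferroni inequalities say that truncating the inclusion-exclusion expansion after an odd (respectively even) number of terms gives an upper (respectively lower) bound:
$$P\Bigl(\bigcup_{i=1}^n A_i\Bigr)\le \sum_{k=1}^m(-1)^{k+1}S_k \quad (m\text{ odd}), \qquad P\Bigl(\bigcup_{i=1}^n A_i\Bigr)\ge \sum_{k=1}^m(-1)^{k+1}S_k \quad (m\text{ even}).$$
Bonferroni-type inequalities in the sense of the book are, more generally, linear bounds of this kind, in terms of the $S_k$, on the probability that at least one (or exactly $r$, or at least $r$) of the events occur.]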
{"url":"http://zbmath.org/?q=an:0869.60014&format=complete","timestamp":"2014-04-18T11:05:19Z","content_type":null,"content_length":"23471","record_id":"<urn:uuid:09f1674e-c657-456f-bc29-bf8fc6e78bd0>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
Encrypting the Internet The Advanced Encryption Standard and the RSA Algorithm AES is the United States Government's standard for symmetric encryption, defined by FIPS Publication #197 (2001) [2, 3]. It is used in a large variety of applications where high throughput and security are required. In HTTPS, it can be used to provide confidentiality for the information that is transmitted over the Internet. AES is a symmetric encryption algorithm, which means that the same key is used for converting a plaintext to ciphertext, and vice versa. The structuree of AES is shown in Figure 2. AES first expands a key (that can be 128, 192, or 256 bits long) into a key schedule. A key schedule is a sequence of 128-bit words, called "round keys", that are used during the encryption process. The encryption process itself is a succession of a set of mathematical transformations called AES rounds. During an AES round the input to the round is first XOR'd with a round key from the key schedule. The exclusive OR (XOR) logical operation can also be seen as addition without generating carries. In the next step of a round, each of the 16 bytes of the AES state is replaced by another value by using a non-linear transformation called "S-box". The AES S-box consists of two stages. The first stage is an inversion, not in regular integer arithmetic, but in a finite field arithmetic based on the set GF(2^8). The second stage is an affine transformation. During encryption, the input x, which is considered an element of GF(2^8); that is, an 8-bit vector, is first inverted, and then an affine map is applied to the result. During decryption, the input (y) goes through the inverse affine map and is then inverted in GF(2^8). The GF(2^8) inversions just mentioned are performed in The GF(2^8), defined by the irreducible polynomial p(x) = x^8 + x^4 + x^3 + x + 1 or 0x11B. Next, the replaced byte values undergo two linear transformations called ShiftRows and MixColumns. ShiftRows is just a byte permutation. The MixColumns transformation operates on the columns of a matrix representation of the AES state. Each column is replaced by another one that results from a matrix multiplication. The transformation used for encryption is shown in Equation 1. In this equation, matrix-times-vector multiplications are performed according to the rules of the arithmetic of GF(28) with the same irreducible polynomial that is used in the AES S-box, namely, p(x) = x^8 + x^4 + x^3 + x + 1. During decryption, inverse ShiftRows is followed by inverse MixColumns. The inverse MixColumns transformation is shown in Equation 2. Note that while the MixColumns transformation multiplies the bytes of each column with the factors 1, 1, 2 and 3, the inverse MixColumns transformation multiplies the bytes of each column by the factors 0x9, 0xE, 0xB, and 0xD. The same process is repeated 10, 12, or 14 times depending on the key size (128, 192, or 256 bits). The last AES round omits the MixColumns transformation. RSA is a public key cryptographic scheme. The main idea behind public key cryptography is that encryption techniques can be associated with back doors. By back doors we mean secrets, known only to at least one of the communicating parties, which can simplify the decryption process. In public key cryptography, a message is encrypted by using a public key. A public key is associated with a secret called the private key. Without knowledge of the private key it is difficult to decrypt a message. 
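To make the finite-field arithmetic described above concrete, here is a small illustrative sketch in Python; it is not taken from the article (which targets optimized C and the AES-NI instructions), and the function names are ours. It multiplies bytes in GF(2^8) modulo the polynomial 0x11B and applies the MixColumns factors 2, 3, 1, 1 and the inverse factors 0xE, 0xB, 0xD, 0x9 quoted in the text to a single four-byte column.

def xtime(a):
    # Multiply a byte by x (i.e., by 2) in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1.
    a <<= 1
    if a & 0x100:        # the shift overflowed 8 bits,
        a ^= 0x11B       # so reduce by the AES polynomial
    return a & 0xFF

def gf_mul(a, b):
    # General GF(2^8) multiplication by shift-and-add.
    result = 0
    while b:
        if b & 1:
            result ^= a
        a = xtime(a)
        b >>= 1
    return result

def mix_single_column(col):
    # One application of the MixColumns matrix: rows of (2, 3, 1, 1) rotated cyclically.
    a0, a1, a2, a3 = col
    return [gf_mul(a0, 2) ^ gf_mul(a1, 3) ^ a2 ^ a3,
            a0 ^ gf_mul(a1, 2) ^ gf_mul(a2, 3) ^ a3,
            a0 ^ a1 ^ gf_mul(a2, 2) ^ gf_mul(a3, 3),
            gf_mul(a0, 3) ^ a1 ^ a2 ^ gf_mul(a3, 2)]

def inv_mix_single_column(col):
    # Inverse MixColumns: rows of (0x0E, 0x0B, 0x0D, 0x09) rotated cyclically.
    a0, a1, a2, a3 = col
    return [gf_mul(a0, 0x0E) ^ gf_mul(a1, 0x0B) ^ gf_mul(a2, 0x0D) ^ gf_mul(a3, 0x09),
            gf_mul(a0, 0x09) ^ gf_mul(a1, 0x0E) ^ gf_mul(a2, 0x0B) ^ gf_mul(a3, 0x0D),
            gf_mul(a0, 0x0D) ^ gf_mul(a1, 0x09) ^ gf_mul(a2, 0x0E) ^ gf_mul(a3, 0x0B),
            gf_mul(a0, 0x0B) ^ gf_mul(a1, 0x0D) ^ gf_mul(a2, 0x09) ^ gf_mul(a3, 0x0E)]

# Round-trip check: inverse MixColumns undoes MixColumns on an arbitrary column.
col = [0xDB, 0x13, 0x53, 0x45]
assert inv_mix_single_column(mix_single_column(col)) == col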
Similarly, it is very difficult for an attacker to determine what the plaintext is. We further explain how public key cryptography works by presenting the RSA algorithm as an example. In this algorithm, the communicating parties choose two random large prime numbers p and q. For maximum security, p and q are of equal length. The communicating parties then compute the product:

n = p * q

Then the parties choose the public key E, such that the numbers E and (p-1) * (q-1) are relatively prime. The private key associated with the public key is a number D, such that:

D * E mod (p-1) * (q-1) = 1

The encryption formula is simply:

C = M^E mod n

where M is the plaintext and C is the ciphertext. The decryption formula is similarly:

M = C^D mod n

One can show that the decryption formula is correct by using elements of number theory:

C^D mod n = M^(D * E) mod n = M^(1 + l * (p-1) * (q-1)) mod n = M * (M^((p-1) * (q-1)))^l mod n = M

The above calculation is correct since (p-1) * (q-1) is the Euler function of the product p * q, and we know from number theory (by using the Little Fermat Theorem) that:

M^((p-1) * (q-1)) mod n = 1 and D * E = 1 + l * (p-1) * (q-1)

for some l. D and E can be used interchangeably, meaning that encryption can be done by using D, and decryption can be done by using E. RSA is typically implemented using the Chinese Remainder Theorem, which reduces a single modular exponentiation operation into two operations of half length. Each modular exponentiation in turn is implemented by using the square-and-multiply technique, which reduces the exponentiation operation into a sequence of modular squaring and modular multiplication operations. Square-and-multiply may also be augmented with some windowing method for reducing the number of modular multiplications. Finally, modular squaring and multiplication operations can be reduced to big number multiplications by using reduction techniques such as Montgomery's or Barrett's [4, 5].
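As a quick illustration of the formulas above (not from the article; real RSA uses primes hundreds of digits long, and the toy values below serve only to trace the arithmetic), the whole scheme can be exercised in a few lines of Python (3.8 or later, for the modular-inverse form of pow):

p, q = 61, 53
n = p * q                      # n = 3233
phi = (p - 1) * (q - 1)        # (p-1)*(q-1) = 3120

E = 17                         # public exponent, chosen relatively prime to phi
D = pow(E, -1, phi)            # private exponent: D * E = 1 (mod phi), here D = 2753

M = 65                         # plaintext, must satisfy 0 <= M < n
C = pow(M, E, n)               # encryption:  C = M^E mod n
M2 = pow(C, D, n)              # decryption:  M = C^D mod n

assert (D * E) % phi == 1
assert M2 == M                 # the round trip recovers the plaintext

# pow(x, e, n) performs modular exponentiation by repeated squaring, i.e. the
# square-and-multiply technique mentioned in the text.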
{"url":"http://www.drdobbs.com/security/encrypting-the-internet/218102294?pgno=2","timestamp":"2014-04-20T06:35:45Z","content_type":null,"content_length":"96822","record_id":"<urn:uuid:58b3d1b4-c0f8-4951-8c65-96f7af52eff5>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
Review of Mathematical Symbols
x and · are both used to express multiplication. So 3 x 4 = 12, and 2 · 3 = 6. Inequality is expressed with the < "less than" or > "greater than" sign. Signs are sometimes combined; ≥ means "greater than or equal to". Absolute value is symbolized by vertical lines surrounding the value, such as |a|. This means that regardless of the sign of a, its absolute value is positive. |-3| = 3. The Greek capital sigma, Σ, is used to indicate a summation. Σ a[i] tells you to add all of the values in a set of a's, a[1] + a[2] + a[3] + a[4] + ... where the number of a values is not specified.
Powers (Exponents)
k^n is read as "k to the n-th power". n is referred to as the exponent, while k is referred to as the base. k^n means that you should multiply k by itself n times to get the answer. Squaring, or taking k to the second power, is the most familiar example. 3^2 = 9. If your calculator has an "x^y" button, you can do the operation quickly. Enter the value of k, then hit the "x^y" button, then enter the value of n, then hit the "=" button. If your calculator does not have the specialized button, you can still get the desired value by entering the value of k, then hitting the "x" (multiply) button. Then hit the "=" button n times. When you multiply two values that have the same base, such as 3^2 x 3^3, you can save some arithmetic by adding the exponents – the answer is equal to 3^5. When you multiply two values that have different bases, you need to find each power separately, then multiply the results. 2^3 x 3^3 = 8 x 27 = 216. Any base to the 0th power, such as 3^0 or 500^0, is equal to 1.
The factorial operation is a shorthand way of expressing the product of a positive integer and all of the integers below it. Its symbol is an exclamation point. 5! = 5 x 4 x 3 x 2 x 1 = 120. 0! is special – it is equal to 1.
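For readers who like to check such rules by machine, this short Python snippet verifies them (illustrative; not part of the original page):

import math

assert abs(-3) == 3                   # absolute value: |-3| = 3
assert sum([1, 2, 3, 4]) == 10        # summation: add all the values in the set
assert 3**2 == 9                      # 3 to the 2nd power
assert 3**2 * 3**3 == 3**(2 + 3)      # same base: add the exponents
assert 2**3 * 3**3 == 8 * 27 == 216   # different bases: evaluate each power, then multiply
assert 500**0 == 1                    # any base to the 0th power equals 1
assert math.factorial(5) == 120       # 5! = 5 x 4 x 3 x 2 x 1
assert math.factorial(0) == 1         # 0! is defined to be 1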
{"url":"http://instructional1.calstatela.edu/dweiss/Psy302/Symbols.htm","timestamp":"2014-04-21T07:04:17Z","content_type":null,"content_length":"11057","record_id":"<urn:uuid:b83d562f-04d7-45eb-ad77-22bc3596e45c>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
• John Ong, coordinator Applied mathematics explores the connections between mathematics and the physical world, and uses mathematics in studying and solving real-world problems. In this interdisciplinary major, students learn the techniques of modeling, analysis, computing, simulation and data manipulation as applied to their area of interest, such as engineering, biology, chemistry, physics, or economics. Students can pursue a BS with a major in applied mathematics in two ways, either at the college or through the MBC-UVA dual degree program in Engineering. This major requires a substantial portion of the coursework to be completed at the Staunton campus. • The four year program in Applied Mathematics (Option A) Students who are interested in the intersection of mathematics with another discipline at the college should choose this option. Requirements for the Bachelor of Science in Applied Mathematics (Option A) MATH 211 MATH 212 MATH 231 MATH 233 MATH 301 MATH 302 MATH 304 MATH 306 MATH 322 MATH 398 MATH 401 PHYS 201 PHYS 202 A math elective above the 100-level A minor in a discipline of interest. (Common disciplines include Biology, Chemistry, Physics, Business, Economics, Sociology, Philosophy, and Art and Literature, although most disciplines are Note: MATH 401 in this applied mathematics program consists of an in-depth study of mathematics in the student’s chosen minor. The committee formed for evaluating the student’s senior project must include both the mathematics faculty and a member of the faculty from the minor discipline. MBC-UVA dual degree program in Engineering (Option B) Mary Baldwin College students may elect to participate in a dual degree program in engineering offered by the School of Engineering and Applied Science at the University of Virginia. Qualified students attend Mary Baldwin for three years and then, based on their academic performance, are accepted into the University of Virginia for two or more years of study, leading to a Bachelor of Science degree in applied mathematics from Mary Baldwin College and a master’s degree in engineering from the University of Virginia. Interested students should contact Dr. Ong during their first semester at the College, and must sign up and complete the Calculus and Physics sequence during their freshmen year. Requirements for the Bachelor of Science in Applied Mathematics (Option B) MATH 211 MATH 212 MATH 231 MATH 233 MATH 301 MATH 302 MATH 304 MATH 306 MATH 322 MATH 400 MATH 401 CHEM 121 PHYS 201 PHYS 202 Plus 21 semester hours of coursework transferred from the University of Virginia. Note: Credit that counts toward the master’s degree at U. Va. cannot be transferred. Note: MATH 401 in this applied mathematics program consists of a study of partial differential equations, or a comparable area of mathematics as applied to an engineering problem. The student will present her faculty-approved math 401 project in the spring of her third (last) year at the College. It is recommended that each student in the program complete an internship or a summer course in engineering or programming.
{"url":"http://www.mbc.edu/catalog/undergraduate-offerings/mathematics-applied/","timestamp":"2014-04-20T13:25:12Z","content_type":null,"content_length":"39863","record_id":"<urn:uuid:6fc47959-c4d3-469e-9c4e-d69ef49f5898>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00244-ip-10-147-4-33.ec2.internal.warc.gz"}
Anti-Aliasing with Threshold
12-18-2003, 05:35 PM #11
Bonnie Auld Scotlin'
Thats double dutch to me Phil. I sure hope some of the more advanced members (Colin, Malachi, Al W) know what your talking about. I just wanted to say more about the pictures your experimented with, rather than your theory etc. Image one=100% Image two=400% Image three=100% It seems to me that the higher you go in percentage, the less the quality of the image seems to change - though this is somewhat affected because a standard monitor is only 72dpi. I guess printed out would show the REAL differences. Look at one - it looks very pixelated and is noticable throughout compared to image two and three. Number 2 looks so much more better and the only way really to distinguish between two and three is the long shaped antennae (or whatever they are). So as a conclusion, would you say the largest increase in quality would be from say 100% to 200% than say 400% to 500%? Its hard to distinguish the differences the higher the anti-aliasing goes...
As it should. So the theory goes, the anti-alias should not stray more than one pixel from your edge. It's not supposed to blur it's supposed to average. But in image 3, the average is better, because there was more points to draw from. Status update: I'm just looking for the error in this code... Updated a bit for logical problems, still need to fix syntax. EDIT: Whoopsie. Don't need those ones... Edit: Now with notes!
::If a pixel is not Black and...::
(src(x,y,z)!=0) && src(x,(y-1),z) || src((x-1),y,z) || src((x+1),y,z) || == 0) ?
::then if the bottom and left one are not white::
::average them with the current pixel but give the current pixel value twice the weight::
::if the top and left one are::
::repeat the same::
::same for bottom and right::
::...and top and right::
::the pixel is white::
[Edited on 18-12-2003 by Phil_The_Rodent]
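For readers following along, here is a rough sketch in Python of the kind of edge-averaging rule the notes above describe. It is an illustration only, not Phil's actual code: the function name, the decision to average over all four neighbours, and the 2:1 weighting of the centre pixel are assumptions.

def soften_edges(img):
    # img is a 2D list of grayscale values, 0 = black and 255 = white (assumed layout).
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y][x]
            neighbors = [img[y - 1][x], img[y + 1][x], img[y][x - 1], img[y][x + 1]]
            # Only touch pixels on an edge: non-black pixels with at least one
            # black 4-neighbour, per the first condition in the notes.
            if center != 0 and any(n == 0 for n in neighbors):
                # Average the pixel with its neighbours, giving the centre pixel
                # twice the weight, as the notes suggest.
                out[y][x] = (2 * center + sum(neighbors)) // 6
    return out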
{"url":"http://photoshopcafe.com/cafe/showthread.php?10988-Anti-Aliasing-with-Threshold/page2","timestamp":"2014-04-18T14:42:16Z","content_type":null,"content_length":"49350","record_id":"<urn:uuid:e38b4f39-0d27-4bbb-971f-729f8e57ab31>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
The Switch In The Circuit Of Fig. 4 Has Been Open ... | Chegg.com A question with an answer I like solutions by handwriting. Thanks Image text transcribed for accessibility: The switch in the circuit of Fig. 4 has been open for a long time. Assume the switch is closed at time t = 0. Find current ic(t) for t > 0. Calculate in the time domain, do not use the Laplace transform. Fig. 4 Electrical Engineering
{"url":"http://www.chegg.com/homework-help/questions-and-answers/switch-circuit-fig-4-open-long-time-assume-switch-closed-time-t-0-find-current-ic-t-t-0-ca-q2589699","timestamp":"2014-04-16T18:07:36Z","content_type":null,"content_length":"20577","record_id":"<urn:uuid:ca3dd839-2a66-415f-8d5b-1b6132d88e3e>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
Rota reflects on math and mathematicians Behind most great science and engineering discoveries stands the work of a host of mathematicians. But their research, like the support of a silent consort, often goes unrecognized. Gian-Carlo Rota, professor of applied mathematics and philosophy -- the sole MIT claimant to that title -- is not silent. His interest in communicating with mathematicians, as well as the rest of us, has manifested itself in some very public ways. He spoke at Family Weekend this year ("Ten Predictions about Science") and last year ("Ten Lessons of an MIT Education"), was the Killian Lecturer in 1997 ("Mathematical Snapshots"), speaker at the Provost's Seminar in 1998 ("Ten Remarks on Husserl and Phenomenology"), and presenter of the 1998 American Mathematical Society (AMS) Colloquium Lectures -- a series of three talks presented each year by one of the world's most eminent mathematicians, according to the Professor Rota also engages people, very graciously, on an individual level. He's known among students for his accessibility and his clear presentation of material in his math and philosophy courses. He's also respected for his deep understanding of those subjects and revered for his love of communicating. "The first course of his I took changed my life. It's the only class at MIT that really has done that," said Eric "Krevice" Prebys, a senior in math and computer science. "He helped me to see the world in a totally different way. And that's what I wanted out of college." Mr. Prebys took Professor Rota's Introduction to Phenomenology (24.171) his sophomore year, and later, Probability (18.313) -- "the best probability course at MIT." He is currently enrolled in Professor Rota's phenomenology course on Martin Heidegger, Being and Time (24.172). Professor Richard Stanley of mathematics, a former student of Professor Rota, credits his advisor with having transformed combinatorics, their mathematical specialty, from a "Mickey Mouse area" of research into a "respectable subject." Professor Stanley was one of the organizers of Rotafest, a four-day conference on combinatorics held at MIT in 1996 honoring Professor Rota's 64th birthday. Of course, not everyone loves Professor Rota. His latest book, Indiscrete Thoughts (Birkh������������������user, 1997) includes essays that debunk the "myth of monolithic personality" through sketches of the lives of notable mathematicians. When first published, one mathematician wrote that he would not speak to Professor Rota again; another threatened a In an interview with MIT Tech Talk, Professor Rota shared his ideas about mathematicians, the mathematics profession and why they remain poorly understood. What's it like to be a mathematician? It's the least rewarding profession except one: music. Musicians live an impoverished life. Mathematicians -- for what they do -- are really poorly rewarded. And it's a very competitive field, almost as bad as being a concert pianist. You've got to be really an egoist. You've got to be terribly self-centered. Why are there so few women in the field? Women are more realistic than men -- they can see that it's a flight from reality. What they don't see is that it's a flight from reality that works. The distribution of mathematics talent among men and women is exactly the same. But in 40 years of teaching I've seen really good women mathematicians leave the profession, including one very close friend, to my great chagrin. I almost cried. Why don't we hear about the work of mathematicians? 
Mathematicians have bad personalities. They're snobs. Among them, and at MIT, there's a tendency to judgment: people who don't write formulas are tolerated. Mathematicians also make terrible salesmen. Physicists can discover the same thing as a mathematician and say 'We've discovered a great new law of nature. Give us a billion dollars.' And if it doesn't change the world, then they say, 'There's an even deeper thing. Give us another billion dollars.' Are mathematicians really so different from other scientists and engineers? The more experimental scientists and engineers are, the more common sense they have, and so on until you get to the mathematicians, who are totally devoid of common sense. What do mathematicians do? They work on problems. There are historical problems floating around. You are in competition with people who came before you. Sometimes you discover the competition wasn't that good after all. How do they choose the problems? People like to think that scientists see a need and try to solve that problem. Engineers may work that way. But in math, you don't have an application when you work on a problem. It's not the need prompting the science. The reality is, it's the other way around. You say to yourself, 'I have a feeling there's something to this problem' and you work on it, but not alone. Many people throughout history work on a single problem, not a "lone genius." That's another phony-baloney theory. And once the problem has been solved? Applications are found after the theory is developed, not before. A math problem gets solved, then by accident some engineer gets hold of it and says, 'Hey, isn't this similar to���������������������������? Let's try it.' For instance, the laws of aerodynamics are basic math. They were not discovered by an engineer studying the flight of birds, but by dreamers -- real mathematicians -- who just thought about the basic laws of nature. If you tried to do it by studying birds' flight, you'd never get it. You don't examine data first. You first have an idea, then you get the data to prove your idea. What is combinatorics? Combinatorics is putting different-colored marbles in different-colored boxes, seeing how many ways you can divide them. I could rephrase it in Wall Street terms, but it's really just about marbles and boxes, putting things in sets. Actually, some of my best students have gone to Wall Street. It turns out that the best financial analysts are either mathematicians or theoretical physicists. We're also interested in the mathematical properties of knotting and braiding. Someone in 1910 started with knots. You take one, cut it and you get a braid. It's actually one of the hottest topics in math today and holds the secret to a number of problems (I have a gut feeling). If we understand braids well enough, we'll solve all the problems of physics. Do these have applications for other sciences? Protein folding is very closely related to this process. But biologists are just at the beginning. As they get deeper and deeper into the DNA structure, they'll need so much mathematical theory they'll have to become mathematicians. There aren't more than two or three people right now who know both math and biology. It takes a tremendous effort. But it's very probable that an understanding of genetics is dependent on understanding knotting. What sorts of problems have combinatorics solved in the past? One example is quantum mechanics, which was discovered 30 years ago. 
The mathematics behind quantum mechanics had been worked out 20 years before by a mathematician who didn't know what it was good for. What would you like to tell the public about math and science? Basic science is essential. The need for public relations is essential. We won't survive -- continue to get funding -- without it. People think we've got enough basic science. But the fact is, basic science costs so little compared to, say, developing a new kind of submarine. It's a law of nature: the things that get cut first are the least [expensive]. Take [the funding for] the National Endowment for the Arts -- that was peanuts. A version of this article appeared in MIT Tech Talk on October 28, 1998.
{"url":"http://newsoffice.mit.edu/1998/rota-1028","timestamp":"2014-04-17T16:19:56Z","content_type":null,"content_length":"87699","record_id":"<urn:uuid:780914e3-eca5-4ab5-8399-ca92c4b8f9d8>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
* fmaf.c * by Ian Ollmann * Copyright (c) 2007, Apple Inc. All Rights Reserved. * C implementation of C99 fmaf() function. #include <math.h> #include <stdint.h> float fmaf( float a, float b, float c ) double product = (double) a * (double) b; //exact double dc = (double) c; //exact #warning fmaf not completely correct // Simply adding C here is incorrect about 1 in a billion times. // While the double precision add here is correctly rounded, // we take a second rounding on conversion to float on return // which may cause us to be off by very slightly over half an ulp // in round to nearest. double sum = product + dc; // ideally, we should test here and patch up the result. // I think the problem only occurs in round to nearest for // exact half way cases in product with a non-zero c. // Presumably, we could check to see if the difference between // (float) sum and sum is a power of two (the right exact power // of two) and c is non-zero, and it rounded the wrong way, then // we might tweak the answer by an ulp using something like nextafter. // Happily denormals are not a problem during this check. // Alternatively, if we figure out the problem of correctly rounded // 3-way adds, the product could be broken into 2 floats, and we // could do a 3-way add of prodHi, prodLo and c. Crlibm has a function // that might do the job (DoRenormalize3), bu Im thinking that it doesnt. // Finally, to be completely right, we'd have to detect rounding mode. // The half way cases are different in other rounding modes. return (float) sum;
{"url":"http://opensource.apple.com/source/Libm/Libm-315/Source/ARM/fmaf.c","timestamp":"2014-04-19T15:16:16Z","content_type":null,"content_length":"4495","record_id":"<urn:uuid:cf93491f-3c7c-41ac-a812-135a20c7add0>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: DUAL EQUIVALENCE GRAPHS, RIBBON TABLEAUX AND MACDONALD SAMI H. ASSAF Abstract. We make a systematic study of a new combinatorial construction called a dual equivalence graph. We axiomatize such constructions and prove that the generating functions of such graphs are Schur positive. We construct a graph on k-ribbon tableaux which we conjecture to be a dual equivalence graph, and we prove the conjecture for k 3. This implies the Schur positivity of the k-ribbon tableaux generating functions introduced by Lascoux, Leclerc and Thibon. From Haglund's formula for Macdonald polynomials, this has the further consequence of a combinatorial expansion of the transformed Macdonald-Kostka polynomials eKµ, which we prove when µ is a partition with at most 3 columns. 1. Introduction The immediate purpose of this paper is to establish a combinatorial formula for the Schur expansion of LLT polynomials when k 3. As a corollary, this yields a combinatorial formula for the Kostka-Macdonald polynomials for partitions with at most 3 columns. Furthermore, we conjecture that the construction used generalizes to arbitrary k. Our real purpose, however, is not only to obtain the above results, but also to introduce a new combinatorial construction, called a dual equivalence graph, by which one can establish the Schur positivity of polynomials expressed in terms of monomials. In Section 2, we introduce notation for familiar objects in symmetric function theory, for the most part following the notation of [15]. Section 3 is devoted to the development of the theory of dual equivalence graphs. We review the original definition of dual equivalence given in [8], and in Section 3.1 show how from
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/608/2149437.html","timestamp":"2014-04-16T07:52:07Z","content_type":null,"content_length":"8782","record_id":"<urn:uuid:ba7968e6-6f00-46a7-a00a-60f33989a62d>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
1911 Encyclopædia Britannica/Apollonius of Perga From Wikisource ←Apollonius Molon 1911 Encyclopædia Britannica, Volume 2 Apollonius of Rhodes→ Apollonius of Perga APOLLONIUS OF PERGA [Pergaeus], Greek geometer of the Alexandrian school, was probably born some twenty-five years later than Archimedes, i.e. about 262 B.C. He flourished in the reigns of Ptolemy Euergetes and Ptolemy Philopator (247-205 B.C.). His treatise on Conics gained him the title of The Great Geometer, and is that by which his fame has been transmitted to modern times. All his numerous other treatises have perished, save one, and we have only their titles handed down, with general indications of their contents, by later writers, especially Pappus. After the Conics in eight Books had been written in a first edition, Apollonius brought out a second edition, considerably revised as regards Books i.-ii., at the instance of one Eudemus of Pergamum; the first three books were sent to Eudemus at intervals, as revised, and the later books were dedicated (after Eudemus’ death) to King Attalus I. (241-197 B.C.). Only four Books have survived in Greek; three more are extant in Arabic; the eighth has never been found. Although a fragment has been found of a Latin translation from the Arabic made in the 13th century, it was not until 1661 that a Latin translation of Books v.-vii. was available. This was made by Giovanni Alfonso Borelli and Abraham Ecchellensis from the free version in Arabic made in 983 by Abu ’l-Fath of Ispahan and preserved in a Florence MS. But the best Arabic translation is that made as regards Books i.-iv. by Hilāl ibn Abī Hilāl (d. about 883), and as regards Books v.-vii. by Tobit ben Korra (836-901). Halley used for his translation an Oxford MS. of this translation of Books v.-vii., but the best MS. (Bodl. 943) he only referred to in order to correct his translation, and it is still unpublished except for a fragment of Book v. published by L. Nix with German translation (Drugulin, Leipzig, 1889). Halley added in his edition (1710) a restoration of Book viii., in which he was guided by the fact that Pappus gives lemmas “to the seventh and eighth books” under that one heading, as well as by the statement of Apollonius himself that the use of the seventh book was illustrated by the problems solved in the The degree of originality of the Conics can best be judged from Apollonius’ own prefaces. Books i.-iv. form an “elementary introduction,” i.e. contain the essential principles; the rest are specialized investigations in particular directions. For Books i.-iv. he claims only that the generation of the curves and their fundamental properties in Book i. are worked out more fully and generally than they were in earlier treatises, and that a number of theorems in Book iii. and the greater part of Book iv. are new. That he made the fullest use of his predecessors’ works, such as Euclid’s four Books on Conics, is clear from his allusions to Euclid, Conon and Nicoteles. The generality of treatment is indeed remarkable; he gives as the fundamental property of all the conics the equivalent of the Cartesian equation referred to oblique axes (consisting of a diameter and the tangent at its extremity) obtained by cutting an oblique circular cone in any manner, and the axes appear only as a particular case after he has shown that the property of the conic can be expressed in the same form with reference to any new diameter and the tangent at its extremity. 
It is clearly the form of the fundamental property (expressed in the terminology of the “application of areas”) which led him to call the curves for the first time by the names parabola, ellipse, hyperbola. Books v.-vii. are clearly original. Apollonius’ genius takes its highest flight in Book v., where he treats of normals as minimum and maximum straight lines drawn from given points to the curve (independently of tangent properties), discusses how many normals can be drawn from particular points, finds their feet by construction, and gives propositions determining the centre of curvature at any point and leading at once to the Cartesian equation of the evolute of any conic. The other treatises of Apollonius mentioned by Pappus are—1st, Λόγου ἀποτομή, Cutting off a Ratio; 2nd, Χωρίου ἀποτομή, Cutting of an Area; 3rd, Διωρισμένη τομή, Determinate Section; 4th, Έπαφαί, Tangencies; 5th, Νεύσεις, Inclinations; 6th, Τόποι ἐπίπεδοι, Plane Loci. Each of these was divided into two books, and, with the Data, the Porisms and Surface-Loci of Euclid and the Conics of Apollonius were, according to Pappus, included in the body of the ancient analysis. 1st. De Rationis Sectione had for its subject the resolution of the following problem: Given two straight lines and a point in each, to draw through a third given point a straight line cutting the two fixed lines, so that the parts intercepted between the given points in them and the points of intersection with this third line may have a given ratio. 2nd. De Spatii Sectione discussed the similar problem which requires the rectangle contained by the two intercepts to be equal to a given rectangle. An Arabic version of the first was found towards the end of the 17th century in the Bodleian library by Dr Edward Bernard, who began a translation of it; Halley finished it and published it along with a restoration of the second treatise in 1706. 3rd. De Sectione Determinata resolved the problem: Given two, three or four points on a straight line, to find another point on it such that its distances from the given points satisfy the condition that the square on one or the rectangle contained by two has to the square on the remaining one or the rectangle contained by the remaining two, or to the rectangle contained by the remaining one and another given straight line, a given ratio. Several restorations of the solution have been attempted, one by W. Snellius (Leiden, 1698), another by Alex. Anderson of Aberdeen, in the supplement to his Apollonius Redivivus (Paris, 1612), but by far the best is by Robert Simson, Opera quaedam reliqua (Glasgow, 1776). 4th. De Tactionibus embraced the following general problem: Given three things (points, straight lines or circles) in position, to describe a circle passing through the given points, and touching the given straight lines or circles. The most difficult case, and the most interesting from its historical associations, is when the three given things are circles. This problem, which is sometimes known as the Apollonian Problem, was proposed by Vieta in the 16th century to Adrianus Romanus, who gave a solution by means of a hyperbola. Vieta thereupon proposed a simpler construction, and restored the whole treatise of Apollonius in a small work, which he entitled Apollonius Gallus (Paris, 1600). A very full and interesting historical account of the problem is given in the preface to a small work of J. W. Camerer, entitled Apollonii Pergaei quae supersunt, ac maxime Lemmata Pappi in hos Libros, cum Observationibus, &c. 
(Gothae, 1795, 8vo). 5th. De Inclinationibus had for its object to insert a straight line of a given length, tending towards a given point, between two given (straight or circular) lines. Restorations have been given by Marino Ghetaldi, by Hugo d’Omerique (Geometrical Analysis, Cadiz, 1698), and (the best) by Samuel Horsley (1770). 6th. De Locis Planis is a collection of propositions relating to loci which are either straight lines or circles. Pappus gives somewhat full particulars of the propositions, and restorations were attempted by P. Fermat (Œuvres, i., 1891, pp. 3-51), F. Schooten (Leiden, 1656) and, most successfully of all, by R. Simson (Glasgow, 1749). Other works of Apollonius are referred to by ancient writers, viz. (1) Περὶ τοῦ πυρίου, On the Burning-Glass, where the focal properties of the parabola probably found a place; (2) Περὶ τοῦ κοχλίου, On the Cylindrical Helix (mentioned by Proclus); (3) a comparison of the dodecahedron and the icosahedron inscribed in the same sphere; (4) Ή καθόλου πραγματεία, perhaps a work on the general principles of mathematics in which were included Apollonius’ criticisms and suggestions for the improvement of Euclid’s Elements; (5) Ώκυτόκιον (quick bringing-to-birth), in which, according to Eutocius, he showed how to find closer limits for the value of π than the 31⁄7 and 310⁄71 of Archimedes; (6) an arithmetical work (as to which see Pappus) on a system of expressing large numbers in language closer to that of common life than that of Archimedes’ Sand-reckoner, and showing how to multiply such large numbers; (7) a great extension of the theory of irrationals expounded in Euclid, Book x., from binomial to multinomial and from ordered to unordered irrationals (see extracts from Pappus’ comm. on Eucl. x., preserved in Arabic and published by Woepcke, 1856). Lastly, in astronomy he is credited by Ptolemy with an explanation of the motion of the planets by a system of epicycles; he also made researches in the lunar theory, for which he is said to have been called Epsilon (ε). The best editions of the works of Apollonius are the following: (1) Apollonii Pergaei Conicorum libri quatuor, ex versione Frederici Commandini (Bononiae, 1566), fol.; (2) Apollonii Pergaei Conicorum libri octo, et Sereni Antissensis de Sectione Cylindri et Coni libri duo (Oxoniae, 1710), fol. (this is the monumental edition of Edmund Halley); (3) the edition of the first four books of the Conics given in 1675 by Barrow; (4) Apollonii Pergaei de Sectione, Rationis libri duo: Accedunt ejusdem de Sectione Spatii libri duo Restituti: Praemittitur, &c., Opera et Studio Edmundi Halley (Oxoniae, 1706), 4to; (5) a German translation of the Conics by H. Balsam (Berlin, 1861); (6) the definitive Greek text of Heiberg (Apollonii Pergaei quae Graece exstant Opera, Leipzig, 1891-1893); (7) T. L. Heath, Apollonius, Treatise on Conic Sections (Cambridge, 1896); see also H. G. Zeuthen, Die Lehre von den Kegelschnitten im Altertum (Copenhagen, 1886 and 1902). (T. L. H.)
{"url":"http://en.wikisource.org/wiki/1911_Encyclop%C3%A6dia_Britannica/Apollonius_of_Perga","timestamp":"2014-04-16T05:55:46Z","content_type":null,"content_length":"37569","record_id":"<urn:uuid:44a70313-f067-480c-8dd6-777bce020c21>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
User:Daviddaved/Schrodinger equation
The Schrodinger equation provides a link between the local and spectral/global properties of solutions of the Laplace-Beltrami equation. The inverse boundary problem for the Schrodinger equation can be reduced to the Calderon problem due to the identities below that hold for graphs and surfaces. Suppose u on $\Omega$ satisfies the Laplace equation in the domain, $\Delta_{\gamma}u = \nabla\cdot(\gamma\nabla u) = 0.$ Then $(\Delta - q)(u\sqrt{\gamma}) = 0,$ where $q = \frac{\Delta\sqrt{\gamma}}{\sqrt{\gamma}}.$ For the analog of this system to work on networks, one can define the solution of the Schrodinger equation u on the nodes and the square of the solution on the edges by the following formula: $\gamma^2(v_l,v_m) = u(v_l)u(v_m).$ Exercise (*). Express the Dirichlet-to-Neumann operator for the Schrodinger equation in terms of the Dirichlet-to-Neumann operator for the corresponding Laplace equation on the network with the same underlying graph. (Hint). Let $\Lambda_q = A-B(C+D_q)^{-1}B^T,$ $K = \begin{pmatrix} A & B \\ B^T & C + D_q \end{pmatrix}.$ Then $\tilde{K} = \begin{pmatrix} A+D_y & BD_x \\ D_x B^T & D_x(C+D_q)D_x \end{pmatrix}$ is the Laplace matrix of the network with $\Lambda(\tilde{K}) = A + D_y - B D_x (D_x (C+D_q) D_x)^{-1} D_x B^T = \Lambda_q + D_y,$ where $x = - (C+D_q)^{-1}B^T 1.$ Exercise (**). Reduce the inverse problem for the Schrodinger operator to the inverse problem for the Laplace operator on the network with the same underlying graph (with possibly signed conductivity).
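As a sanity check on the identity quoted above, here is the standard computation (it is not part of the original page). Write $v = u\sqrt{\gamma}$, so that $u = v/\sqrt{\gamma}$. Then $\gamma\nabla u = \sqrt{\gamma}\,\nabla v - v\,\nabla\sqrt{\gamma}$, and taking the divergence cancels the cross terms: $\nabla\cdot(\gamma\nabla u) = \sqrt{\gamma}\,\Delta v - v\,\Delta\sqrt{\gamma} = \sqrt{\gamma}\left(\Delta - \frac{\Delta\sqrt{\gamma}}{\sqrt{\gamma}}\right)v.$ Hence $\nabla\cdot(\gamma\nabla u) = 0$ is equivalent to $(\Delta - q)(u\sqrt{\gamma}) = 0$ with $q = \Delta\sqrt{\gamma}/\sqrt{\gamma}$, as stated.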
{"url":"http://en.m.wikibooks.org/wiki/User:Daviddaved/Schrodinger_equation","timestamp":"2014-04-16T13:06:52Z","content_type":null,"content_length":"16845","record_id":"<urn:uuid:ab03b162-82a9-4a5d-a6a6-4cee100318bb>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
Expressions (Transact-SQL)
Is a combination of symbols and operators that the SQL Server Database Engine evaluates to obtain a single data value. Simple expressions can be a single constant, variable, column, or scalar function. Operators can be used to join two or more simple expressions into a complex expression.

{ constant | scalar_function | [ table_name. ] column | variable | ( expression ) | ( scalar_subquery ) | { unary_operator } expression | expression { binary_operator } expression | ranking_windowed_function | aggregate_windowed_function }

constant
Is a symbol that represents a single, specific data value. For more information, see Constants (Transact-SQL).

scalar_function
Is a unit of Transact-SQL syntax that provides a specific service and returns a single value. scalar_function can be built-in scalar functions, such as the SUM, GETDATE, or CAST functions, or scalar user-defined functions.

[ table_name. ]
Is the name or alias of a table.

column
Is the name of a column. Only the name of the column is allowed in an expression.

variable
Is the name of a variable, or parameter. For more information, see DECLARE @local_variable (Transact-SQL).

( expression )
Is any valid expression as defined in this topic. The parentheses are grouping operators that make sure that all the operators in the expression within the parentheses are evaluated before the resulting expression is combined with another.

( scalar_subquery )
Is a subquery that returns one value. For example: SELECT MAX(UnitPrice) FROM Products

{ unary_operator }
Is an operator that has only one numeric operand:
• + indicates a positive number.
• - indicates a negative number.
• ~ indicates the one's complement operator.
Unary operators can be applied only to expressions that evaluate to any one of the data types of the numeric data type category.

{ binary_operator }
Is an operator that defines the way two expressions are combined to yield a single result. binary_operator can be an arithmetic operator, the assignment operator (=), a bitwise operator, a comparison operator, a logical operator, the string concatenation operator (+), or a unary operator. For more information about operators, see Operators (Transact-SQL).

ranking_windowed_function
Is any Transact-SQL ranking function. For more information, see Ranking Functions (Transact-SQL).

aggregate_windowed_function
Is any Transact-SQL aggregate function with the OVER clause. For more information, see OVER Clause (Transact-SQL).

For a simple expression made up of a single constant, variable, scalar function, or column name: the data type, collation, precision, scale, and value of the expression is the data type, collation, precision, scale, and value of the referenced element. When two expressions are combined by using comparison or logical operators, the resulting data type is Boolean and the value is one of the following: TRUE, FALSE, or UNKNOWN. For more information about Boolean data types, see Comparison Operators (Transact-SQL). When two expressions are combined by using arithmetic, bitwise, or string operators, the operator determines the resulting data type. Complex expressions made up of many symbols and operators evaluate to a single-valued result. The data type, collation, precision, and value of the resulting expression is determined by combining the component expressions, two at a time, until a final result is reached. The sequence in which the expressions are combined is defined by the precedence of the operators in the expression.
Two expressions can be combined by an operator if they both have data types supported by the operator and at least one of these conditions is true: • The expressions have the same data type. • The data type with the lower precedence can be implicitly converted to the data type with the higher data type precedence. If the expressions do not meet these conditions, the CAST or CONVERT functions can be used to explicitly convert the data type with the lower precedence to either the data type with the higher precedence or to an intermediate data type that can be implicitly converted to the data type with the higher precedence. If there is no supported implicit or explicit conversion, the two expressions cannot be combined. The collation of any expression that evaluates to a character string is set by following the rules of collation precedence. For more information, see Collation Precedence (Transact-SQL). In a programming language such as C or Microsoft Visual Basic, an expression always evaluates to a single result. Expressions in a Transact-SQL select list follow a variation on this rule: The expression is evaluated individually for each row in the result set. A single expression may have a different value in each row of the result set, but each row has only one value for the expression. For example, in the following SELECT statement both the reference to ProductID and the term 1+2 in the select list are expressions: USE AdventureWorks2008R2; SELECT ProductID, 1+2 FROM Production.Product; The expression 1+2 evaluates to 3 in each row in the result set. Although the expression ProductID generates a unique value in each result set row, each row only has one value for ProductID.
{"url":"http://msdn.microsoft.com/en-us/library/ms190286(v=sql.105).aspx","timestamp":"2014-04-16T22:43:53Z","content_type":null,"content_length":"54077","record_id":"<urn:uuid:a9cf6dd1-0f3b-419e-bead-4eb9a5693bb1>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
First moment of a spectrum (Al Bregman ) Subject: First moment of a spectrum From: Al Bregman <BREGMAN(at)HEBB.PSYCH.MCGILL.CA> Date: Sun, 11 Jun 2000 23:44:36 -0400 Dear List Members, I wish I knew more math, but I don't. So I have to ask this question. In relation to a measure of the central point in a spectrum, somebody wrote about the "first moment" of the spectrum. From context, it seems that the first moment is the frequency at which the sum of the positive deviations of frequencies above it, multiplied by their amplitudes, is equal to the sum of the negative deviations in frequency, of the frequencies below it, multiplied by their amplitudes. In other words, if the spectrum were inscribed on a piece of tin, and then cut out, the first moment would be the frequency at which it balanced. Is this a correct interpretation? Albert S. Bregman, Emeritus Professor Dept of Psychology, McGill University 1205 Docteur Penfield Avenue Montreal, QC, Canada H3A 1B1 Tel: +1 (514) 398-6103 Fax: +1 (514) 398-4896 This message came from the mail archive maintained by: DAn Ellis <dpwe@ee.columbia.edu> Electrical Engineering Dept., Columbia University
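[Note appended to the archived message, not part of the original post: the quantity described is the amplitude-weighted mean frequency, often called the spectral centroid. With component frequencies f_i and amplitudes a_i it is fbar = (sum_i f_i * a_i) / (sum_i a_i), and it satisfies sum_i (f_i - fbar) * a_i = 0, which is exactly the balance-point condition described above. Strictly speaking this is the first moment of the spectrum divided by the total amplitude (the zeroth moment).]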
{"url":"http://www.auditory.org/postings/2000/169.html","timestamp":"2014-04-19T12:01:45Z","content_type":null,"content_length":"1975","record_id":"<urn:uuid:1bbf23b7-d744-474e-aacb-26a3964b2391>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
Randomized Quicksort ;; Randomized Quicksort ;; Quicksort has very unfortunate performance characteristics. It's ;; very fast on randomized data, and uses very little extra memory, ;; but if there's too much order in its input, it becomes a quadratic ;; algorithm which blows the stack. ;; Such inputs are very rare, but very easy to find (there's glory for you!) ;; Luckily there's an easy fix, which is why Quicksort's a very popular sorting algorithm. ;; Here's Quicksort as we left it: (defn swap [v i1 i2] (assoc v i2 (v i1) i1 (v i2))) (defn ^:dynamic partition-by-pivot [v start end] (loop [v v i (inc start) j (inc start)] ;; this extra loop just sets up the indices i and j (if (> j end) [(dec i) (swap v start (dec i))] (if (< (v start) (v j)) (recur v i (inc j)) (recur (swap v i j) (inc i) (inc j)))))) (defn ^:dynamic qsort ([v start end] (if (> (+ 1 start) end) v ;; short array, nothing to do (let [[index newv] (partition-by-pivot v start end) ;; otherwise partition leftsorted (qsort newv start (dec index))] ;; and sort the left half (qsort leftsorted (inc index) end))))) (defn quicksort [v] (let [vec (into [] v)] (qsort vec 0 (dec (count vec))))) ;; Potentially a very fast sorting algorithm, which apart from its recursion stack ;; needs no extra memory. ;; But although it reliably looks good on random input: (time (first (quicksort (shuffle (range 2048))))) "Elapsed time: 435.670347 msecs" ;; If you feed it an array which is already in order: (time (first (quicksort (range 2048)))) "Elapsed time: 2413.521649 msecs" ;; performance drops off horribly (time (first (quicksort (range 4096)))) ;; and the stack blows. ;; The underlying problem is that when the data has a structure which makes the pivots ;; far from the medians of the arrays to be partitioned, the recursion tree unbalances, and ;; rather than doing roughly log 2048=11 recursions, quicksort does 2048 recursions. ;; If only we could get the usual performance characteristics of quicksort on random data ;; on the sort of almost-sorted data that we often encounter in practice. ;; One way to do this, of course, would be to shuffle the data before sorting it. ;; The chances of hitting a bad case with properly shuffled data are astronomically small. ;; It would work, but isn't that an odd thing to do? And shuffling is just as difficult as sorting. ;; Turns out we don't have to. If we pick the pivot at random, then we get the same performance. ;; There are still bad cases, but if the random number generator is uncorrelated with the data to ;; be sorted, we'll never hit one in a million years. ;; How shall we do that? Turns out it's dead easy to frig ;; partition-by-pivot. We just swap the first element of the array ;; with a randomly chosen one before we pick the pivot: (defn ^:dynamic partition-by-pivot [v start end] (let [randomel (+ start (rand-int (- end start))) v (swap v start randomel)] ;; swap first element with randomly chosen one (loop [v v i (inc start) j (inc start)] (if (> j end) [(dec i) (swap v start (dec i))] (if (< (v start) (v j)) (recur v i (inc j)) (recur (swap v i j) (inc i) (inc j))))))) (time (first (quicksort (range 2048)))) "Elapsed time: 666.069407 msecs" ;; The bad cases are still around. But they're very hard to find now. ;; It occurs to me that one way to blow up randomized quicksort might ;; be to use a pseudo-random number generator and a well chosen ;; algorithm to shuffle the data, and then use the same PRNG to ;; quicksort it. 
So if you've got a clever adversary who knows what ;; you mean by random he should be able to make your quicksorts ;; explode. ;; We're something like a factor of twenty away from clojure's built in sort. ;; That's really not bad for an unoptimized algorithm. (time (first (sort (shuffle (range 2048))))) "Elapsed time: 33.509778 msecs" ;; It should be possible to speed our quicksort up quite a bit (especially if ;; we restrict ourselves to arrays of integers). ;; I leave this as an exercise for the reader. 2 comments: 1. Hi, Just wanted to say thanks for the great Clojure blog and share with you my qsort. Sorry for the bad indentation... (defn qsort [s] (when-let [[x & xs] s] (let [{:keys [lt gt]} (group-by #(if (< % x) :lt :gt) xs)] (concat (qsort lt) (cons x (qsort gt))))))) It performs well on random data and badly on sorted data as expected. I don't think this implementation would lend itself easily to the optimization you describe in this post, however, I'd like to point out in reality most of the time you know enough about the data you're sorting to know whether quicksort or some other sort is most appropriate. 1. That's really pretty! Thank you!
{"url":"http://www.learningclojure.com/2013/07/randomized-quicksort.html","timestamp":"2014-04-20T05:54:11Z","content_type":null,"content_length":"64060","record_id":"<urn:uuid:0fee447e-d4c2-4fc6-b095-a9557d6bd682>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
ICS 163 - Graph Algorithms Homework 6 ICS 163 - Graph Algorithms Homework 6, 50 Points Due: Wednesday, March 12, 2003 1. 10 points. Use Kuratowski's Theorem to prove that the following graph, called Petersen's graph, is nonplanar. 2. 10 points. Show how to draw each of the following graphs on the surface of a torus (i.e., a "donut" with a hole in the middle) so that no two two edges cross. a. K[5] b. K[3,3] 3. 10 points. Draw a biconnected planar graph G with 10 vertices that has a separating cycle C with 6 vertices such that there are 5 pieces with respect to C that induce an interlacement graph with 6 edges. 4. 10 points. Give an example of a biconnected graph G and a separating cycle C in G such that the interlacement graph for the pieces of G with respect to C has at least cn^2 edges, for some constant c > 0. 5. 10 points. Let P be a piece of a biconnected graph with respect to a cycle C: a. Show that if P has at least one vertex, the number of edges of P is great than or equal to the number of attachments of P. b. Show that the graph obtained by adding P to C is biconnected.
{"url":"http://www.ics.uci.edu/~goodrich/teach/ics163/hw/hw6.html","timestamp":"2014-04-23T08:59:46Z","content_type":null,"content_length":"1881","record_id":"<urn:uuid:56214d77-8b67-4200-b6af-be65587789e3>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
American Mathematical Society Bulletin Notices AMS Sectional Meeting Program by Day Current as of Tuesday, April 12, 2005 15:09:41 Program | Deadlines | Registration/Housing/Etc. | Inquiries: meet@ams.org 1997 Fall Southeastern Sectional Meeting Atlanta, GA, October 17-19, 1997 Meeting #926 Associate secretaries: Robert J Daverman , AMS Sunday October 19, 1997 • Sunday October 19, 1997, 7:30 a.m.-11:00 a.m. Meeting Registration Room 255, Skiles Classroom Building • Sunday October 19, 1997, 7:30 a.m.-11:00 a.m. Exhibit and Book Sale Room 255, Skiles Classroom Building • Sunday October 19, 1997, 8:00 a.m.-10:50 a.m. Special Session on Set-Theoretic Techniques in Topology and Analysis, IV Room 168, Skiles Classroom Building Gary F. Gruenhage, Auburn University garyg@mail.auburn.edu Piotr Koszmider, Auburn University piotr@mail.auburn.edu • Sunday October 19, 1997, 8:30 a.m.-11:50 a.m. Special Session on Modern Banach Space Theory, IV Room 249, Skiles Classroom Building Stephen Dilworth, University of South Carolina dilworth@math.sc.edu Maria K. Girardi, University of South Carolina girardi@math.sc.edu • Sunday October 19, 1997, 8:30 a.m.-11:20 a.m. Special Session on Nonlinear Dynamics and Applications, IV Room 146, Skiles Classroom Building Wenxian Shen, Auburn University ws@math.auburn.edu Yingfei Yi, Georgia Institute of Technology yi@math.gatech.edu • Sunday October 19, 1997, 9:00 a.m.-10:50 a.m. Special Session on The Dynamics and Topology of Low Dimensional Flows, I Room 149, Skiles Classroom Building Michael C. Sullivan, Southern Illinois University at Carbondale mcs@math.nwu.edu Robert W. Ghrist, University of Texas at Austin ghrist@math.utexas.edu • Sunday October 19, 1997, 9:00 a.m.-11:20 a.m. Special Session on Computer Proofs in Set Theory and Logic, II Room 171, Skiles Classroom Building Johan G. F. Belinfante, Georgia Institute of Technology belinfan@math.gatech.edu • Sunday October 19, 1997, 9:00 a.m.-11:50 a.m. Special Session on Stochastic Inequalities and Their Applications, IV Room 202, Skiles Classroom Building Theodore P. Hill, Georgia Institute of Technology hill@math.gatech.edu Christian Houdr\'e, Georgia Institute of Technology houdre@math.gatech.edu • Sunday October 19, 1997, 9:00 a.m.-10:50 a.m. Special Session on Discrete Conformal Geometry, IV Room 243, Skiles Classroom Building Philip Lee Bowers, Florida State University bowers@math.fsu.edu □ 9:00 a.m. On symmetries of hyperbolic 3-manifolds Albert Marden*, □ 9:40 a.m. Ramanujan partition identities Hershel M. Farkas, The Hebrew University of Jerusalem Irwin Kra*, □ 10:20 a.m. Multi dimensional Theta identities Yaacov Kopeliovich*, • Sunday October 19, 1997, 9:00 a.m.-11:50 a.m. Special Session on Complex and Algebraic Dynamics and Applications, III Room 143, Skiles Classroom Building Marek R. Rychlik, University of Arizona rychlik@math.arizona.edu □ 9:00 a.m. Geometry of the boundary of Siegel disks Jacek Graczyk*, Michigan State University Peter Jones, □ 9:30 a.m. Measure-theoretic Properties of Analytic Maps of ${\Bbb P}^1$ and ${\Bbb P}^2$. Lorelei M Koss*, □ 10:00 a.m. A Classification of the Julia Sets of Hyperbolic Functions with Polynomial Schwartzian Derivative. Paul F Strack*, UNC - Chapel Hill □ 10:30 a.m. Holomorphic Dynamics in ${\Bbb C}^n$ Stefan M Heinemann*, □ 11:00 a.m. Discussion and Problem Session • Sunday October 19, 1997, 9:00 a.m.-10:45 a.m. 
Special Session on Applications of Symbolic Computation to Differential Equations, IV Room 270, Skiles Classroom Building James Herod, Georgia Tech herod@math.gatech.edu Maria Clara Nucci, University of Perugia, Italy nucci@unipg.it □ 9:00 a.m. Algorithms for classifying subalgebras of Lie algebras Pavel Winternitz*, Université de Montréal □ 10:00 a.m. Exact Solutions of Nonlinear Differential Equations through Symmetries Maria Clara Nucci*, Università di Perugia (Italy)
{"url":"http://ams.org/meetings/sectional/2015_program_sunday.html","timestamp":"2014-04-16T19:23:56Z","content_type":null,"content_length":"51953","record_id":"<urn:uuid:c3ceec62-ea35-440f-b021-9f12bfb5438c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
Graphing Worksheets for Practice Here is a graphic preview for all of the graphing worksheets. You can select different variables to customize these graphing worksheets for your needs. The graphing worksheets are randomly created and will never repeat, so you have an endless supply of quality graphing worksheets to use in the classroom or at home. We also produce blank Standard Graphing Paper, Coordinate Plane Graphing Paper, and Polar Coordinate Graphing Paper for your use. Our graphing worksheets are free to download, easy to use, and very flexible. These graphing worksheets are a great resource for children in Kindergarten, 1st Grade, 2nd Grade, 3rd Grade, 4th Grade, 5th Grade, and 6th Grade. Quick Link for All Graphing Worksheets Single Quadrant Ordered Pair Worksheets These graphing worksheets will produce a single quadrant coordinate grid and a set of questions on ordered pairs. Four Quadrant Ordered Pair Worksheets These graphing worksheets will produce a four quadrant coordinate grid and a set of questions on ordered pairs. Four Quadrant Graphing Puzzle Worksheets These graphing worksheets will produce a four quadrant coordinate grid and a set of ordered pairs that when correctly plotted and connected will produce a picture. Four Quadrant Graphing Characters Worksheets These graphing worksheets will produce a four quadrant coordinate grid and a set of ordered pairs that when correctly plotted and connected will produce different characters. Standard Graphing Paper Worksheets These graphing worksheets will produce a blank page of standard graph paper for various types of scales. Coordinate Plane Graph Paper Worksheets These graphing worksheets will produce a single or four quadrant coordinate grid for the students to use in coordinate graphing problems. Polar Coordinate Graph Paper Worksheets These graphing worksheets will produce a polar coordinate grid for the students to use in polar coordinate graphing problems.
{"url":"http://www.math-aids.com/Graphing/","timestamp":"2014-04-19T10:55:00Z","content_type":null,"content_length":"33505","record_id":"<urn:uuid:e843e71e-6657-40d3-a909-363c54c61d47>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
Faraday's Induction and capacitance circuits Hi again, So I managed to source a huge capacitor, 10F, but couldn't get anything to even show signs of life, so took to testing the generator straight onto a 2.5V LED but nothing! I tried it on three different LEDs, I doubled up the magnets, tried a different wire (in case the copper isn't insulated) and even built a container to stabilise the voltage - before, the magnets were on a rod being turned back and forth, the motion used to make fire with a stick, now it is spinning constantly. I'm using these powerful magnets: with the flat surfaces out, not down. The wire is wrapped perpendicular to rotating motion. Any ideas on what I'm doing so wrong? If the description isn't clear, I can attach photos. Thanks for any help,
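One way to sanity-check a rig like this is Faraday's law: the induced EMF is roughly N·dΦ/dt, with a peak of about N·B·A·2πf for a coil of N turns, field B through area A, spun at f turns per second. As a rough, hedged estimate (every number here is an assumption, not something taken from the post): with N = 200 turns, B = 0.3 T near the magnet face, A = 1 cm² = 1×10⁻⁴ m², and f = 5 rev/s by hand, the peak EMF is about 200 × 0.3 × 10⁻⁴ × 2π × 5 ≈ 0.2 V. That is well under the ~2 V forward drop of an LED, which would explain "no signs of life" even with everything wired correctly; more turns, tighter magnet–coil coupling, or a faster spin would be needed before the capacitor or LED shows anything.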
{"url":"http://www.physicsforums.com/showthread.php?t=345910","timestamp":"2014-04-18T15:44:38Z","content_type":null,"content_length":"74539","record_id":"<urn:uuid:383248da-6979-4577-b26e-54c7ac2decd6>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
Why is my xlabel cut off in my matplotlib plot?
up vote 16 down vote
I am plotting a dataset using matplotlib where I have an xlabel that is quite "tall" (it's a formula rendered in TeX that contains a fraction and therefore has the height equivalent of a couple of lines of text). In any case, the bottom of the formula is always cut off when I draw the figures. Changing the figure size doesn't seem to help, and I haven't been able to figure out how to shift the x-axis "up" to make room for the xlabel. Something like that would be a reasonable temporary solution, but what would be nice would be to have a way to make matplotlib recognize automatically that the label is cut off and resize accordingly. Here's an example of what I mean (the script begins with import matplotlib.pyplot as plt; the resulting figure is not reproduced here): you can see the entire ylabel, but the xlabel is cut off at the bottom. In case this is a machine-specific problem, I am running this on OSX 10.6.8 with matplotlib 1.0.0. python matplotlib
1 You could post on an image hosting site and link it here. – agf Jul 21 '11 at 9:39
1 It helps if you can post a minimalistic sample code that triggers this issue. This way, people can understand and reproduce your problem faster, and they will be more likely to help you. – Denilson Sá Jul 21 '11 at 9:40
Your code works just fine (displays the formula fully visible) on my machine (ubuntu 11.04 64bit). Maybe it is a machine-specific problem [like a font with wrong dimensional information being used in the image?]. You could perhaps specify the system you are using in your question. – mac Jul 21 '11 at 10:21
3 Answers
up vote 19 down vote plt.gcf().subplots_adjust(bottom=0.15) to make room for the label. Since I gave the answer, matplotlib has added the tight_layout function. So I suggest to use it: plt.tight_layout() should make room for the xlabel.
excellent. that did the trick. thanks very much. – Andrew Jul 21 '11 at 16:15
11 I find it pretty weird that one would need to make an extra call to make room for an essential part of a plot. What's the reasoning behind this? – a different ben Apr 9 '12 at 2
Just out of curiosity, why do you have gcf().subplots_adjust rather than plt.subplots_adjust? Is there a difference? – juniper- Jun 6 '13 at 14:25
No, there is no difference. – tillsten Jun 9 '13 at 19:48
2 What are gcf and gca? You neglected to explain! – Colonel Panic Mar 6 at 11:21
up vote 10 down vote An easy option is to configure matplotlib to automatically adjust the plot size. It works perfectly for me and I'm not sure why it's not activated by default. Method 1: Set this in your matplotlibrc file: figure.autolayout : True See here for more information on customizing the matplotlibrc file: http://matplotlib.org/users/customizing.html Method 2: Update the rcParams during runtime like this: from matplotlib import rcParams rcParams.update({'figure.autolayout': True}) The advantage of using this approach is that your code will produce the same graphs on differently-configured machines.
I had some problems with too large colorbars when using this option. One possible solution is discussed here: matplotlib.org/users/tight_layout_guide.html – PiQuer Jul 22 '13 at
up vote 2 down vote You can also set custom padding as defaults in your $HOME/.matplotlib/matplotlib_rc as follows. In the example below I have modified both the bottom and left out-of-the-box padding:
# The figure subplot parameters.
# All dimensions are a fraction of the figure width or height
figure.subplot.left : 0.1    # left side of the subplots of the figure
#figure.subplot.right : 0.9
figure.subplot.bottom : 0.15
Fantastic suggestion. I'm so tired of fighting with matplotlib on this particular point that methinks I'll set some huge buffers and just use pdfcrop to do the appropriate trimming. – Matthew G. Apr 10 '13 at 21:15
Not the answer you're looking for? Browse other questions tagged python matplotlib or ask your own question.
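Pulling the answers above together, here is a minimal self-contained sketch; the tall TeX label is made up for illustration and is not the original poster's formula:

import matplotlib
import matplotlib.pyplot as plt

# Option A (second answer): let matplotlib lay the figure out automatically.
matplotlib.rcParams.update({'figure.autolayout': True})

fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [0, 1, 4, 9])
ax.set_ylabel('response')
# A "tall" mathtext label with a fraction, similar in spirit to the question.
ax.set_xlabel(r'$\frac{\mathrm{distance}}{\mathrm{time}}$ (made-up example)')

# Option B (first answer): reserve space by hand, or call tight_layout once
# the artists exist. Either line below works on its own.
# plt.gcf().subplots_adjust(bottom=0.15)
plt.tight_layout()

plt.savefig('xlabel_demo.png')

Any one of the three approaches (autolayout, subplots_adjust, tight_layout) is enough by itself; they are shown together only so the result can be compared on a machine where the label was being clipped.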
{"url":"http://stackoverflow.com/questions/6774086/why-is-my-xlabel-cut-off-in-my-matplotlib-plot/17390833","timestamp":"2014-04-18T04:07:06Z","content_type":null,"content_length":"82691","record_id":"<urn:uuid:b0342514-c09a-4fc7-aa82-fb1c1edc3713>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
Approximation Reformulations Daniel S. Weld Although computers are widely used to simulate complex physical systems, crafting the underlying models that enable computer analysis remains difficult. When a model is created for one task, it is often impossible to reuse the model for another purpose because each task requires a different set of simplifying assumptions. By representing modeling assumptions explicitly as approximation reformulations, we have developed qualitative techniques for switching between models. We assume that automated reasoning proceeds in three phases: 1) model selection, 2) quantitative analysis using the model, and 3) validation that the assumptions underlying the model were appropriate for the task at hand. If validation discovers a serious discrepancy between predicted and observed behavior, a new model must be chosen. We present a domain independent method for performing this model shift when the models are related by an approximation reformulation and describe a Common Lisp implementation of the theory.
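The abstract describes a select–analyze–validate loop; the paper's own implementation was in Common Lisp and is not reproduced here. The following is only a schematic Python sketch of that three-phase loop, with every name invented for illustration:

def reason_with_models(task, models, tolerance=0.05):
    """Schematic select/analyze/validate loop (illustration only).

    `models` is assumed ordered from simplest to most detailed, with
    neighboring entries related by an approximation reformulation.
    """
    for model in models:                      # 1) model selection
        prediction = model.simulate(task)     # 2) quantitative analysis
        observed = task.observed_behavior()
        if abs(prediction - observed) <= tolerance * abs(observed):
            return model, prediction          # 3) validation succeeded
        # validation failed: shift to the next, less approximate model
    raise RuntimeError("no model in the sequence was adequate for this task")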
{"url":"http://www.aaai.org/Library/AAAI/1990/aaai90-062.php","timestamp":"2014-04-20T00:43:39Z","content_type":null,"content_length":"2837","record_id":"<urn:uuid:cea0c435-8ade-4906-86b5-d64d1adcdfc3>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
discussion from point of a topos Todd: How much does it matter? It matters if for example you want to say that points of $Sh(X)$, $X$ a sober space, are in bijection with points of $X$. Otherwise one can just refer back to equivalence of categories, unless you see a problem with that. Mike Shulman?: I would argue that “a point” of a topos really should mean “a geometric morphism from $Set$,” not “an isomorphism class of geometric morphisms from $Set$,” for the same reason that “a group” means, well, “a group” and not “an isomorphism class of groups.” Following from that, I would say that it’s not really correct to say that points of $Sh(X)$ (for $X$ sober) are in bijection with $X$, but rather that the category of points of $Sh(X)$ is equivalent to the category of points of $X$. Note that that’s actually a stronger statement than saying that their sets of isomorphism classes of objects are in bijection. Zoran Škoda: I want to use reconstruction theorems to get some geometric spaces; I need really to get points of underlying spaces without multiplicities! The equivalence is not satisfactory for my purposes, as I would like to use the (more general situations) in which one has some category $T$ of nice categories (e.g. abelian, topoi etc.) with a subcategory $A'$, where the morphisms are adjoint functors with possibly additional properties; possibly I want to pass to a comma category of the whole thing, for a specific object (the reasons for that are very specific and somewhat nontrivial, having to do with affinity of morphisms). Then I have an equivalence of categories between $A'$ and some category of local or test objects $NAff$, which is in my examples some category of noncommutative algebras. Then I look at categories in $T$ which are obtained from gluing objects in $A'$, where gluing is via descent using say localizations with some flatness properties; this way I get some bigger category $A''$. I do not assume that the localizations commute, i.e. the covers are more general than in the picture of Grothendieck topologies. Then I want to say that $A''$ are represented by some class of presheaves on $NAff$. For that I need to look at morphisms from objects in $A'$ to objects in $A''$ without spurious multiplicity. Of course I can look at 2-Yoneda and getting some presheaf of categories on $A'$ and then afterwards try to decategorify to get down to a presheaf of sets on $A$. I do not know what is the best approach. Any advice ? Mike Shulman?: It sounds to me like you want to prove that the resulting (pseudo) presheaves of categories are essentially discrete, and hence are equivalent to presheaves of sets. Urs Schreiber?: yes, I think, too, that this is what Zoran is talking about. I think effectively he has the setup discussed at notions of space only that there $(\infty,1)$-toposes are usesd in place where Zoran wants to use abelian categories, $A_\infty$-categories and eventually stable $\infty$-categories as formal duals of spaces. In that context, Mike: how do I see that the category $[Set,T]_{geom}$ of geometric topos morphisms with natural transformations between them is equivalent to a set? Mike Shulman?: It depends on what $T$ is. For an arbitrary topos $T$, of course $[Set,T]_{geom}$ will not be equivalent to a set. What sort of $T$ are you considering? Zoran Škoda: My main examples are not in topos theory, but I would like to see the way similar proofs work. Instead of 2cat of topoi I need to consider certain slice 2cat of abelian categories. 
More precisely, start with a 2cat $pCT$ whose objects are pairs $(a,O)$ where $a$ is an abelian category and $O$ an object in $a$; the morphisms are pairs of additive adjoint functors (no additional assumptions at start) together with maps $O'\to f_* O$. The slice category is over a category $k-Mod$ where $k$ is a fixed unital ring, commutative or not, it does not matter. This is a ground category. The subcategory $A'\subset pCT$ is given by the requirement that the pair of adjoint functors to the ground category is supposed to be affine (the right adjoint is faithful and has its own right adjoint). This forces the objects in sub2category $A'$ to be equivalent to $R-Mod$ for some $k$-ring $R$; the fact that we are in 2-category means that the triangles in slice category commute up to isomorphisms, this nontrivially forces that the maps between two different $R-Mod$ will not be general tensoring with a bimodule but really something coming from a ring map (affine morphisms satisfy such factorization conditions: similarly if $c$ is monadic over $a$ and $b$ over $a$ then $c$ is monadic over $b$ what is a special case of one of the adjoint lifting theorems; monadicity is weaker than affiness. In particular that means that in decategorified version (classes of geometric functors) the morphisms between categories of modules and underlying rings are the same (the Morita morphisms are excluded by the slice category trick). Now I glue such representable functors on $NAff_k = (k-Rings)^op$ like in gluing categories from localizations. I can assume that the cover is not only comonadic but in fact forms a noncommutative scheme of Rosenberg (plus that we work with choice of object $O$ not stated there, though automatic as inverse image of $R$ in $k-Mod$ via the grounding morphism). Now I want to use some decategorification theorem to state that instead of gluing categories $R-Mod$ I can glue representable presheaves $h_X$ with $X = R^op$; notice that localizations do not commute and the consecutive localizations do not form pullbacks, so we do not have stability axiom of Grothendieck topologies. I would like to be able to present all information on the glued category (noncommutative scheme) by a presheaf of sets on $NAff \cong A'$; or understand if I really need presheaf of cats on $NAff$. The strange locality given by localizations should give a subcategory of “sheaves” which is not a topos, but some subcategory of presheaves whose embedding into presheaves has weaker exactness conditions. Notice that while I glue representable presheaves on NAff, the consecutive (double) localizations where I compare them for gluing are NOT representable by objects in NAff, but only in the big ambient slice category of all abelian categories. In commutative case this may happen for nonsemiseparated schemes, but then we have still represent by the locally ringed spaces where we do not deal with 2-categories. Zoran (P.S.) Mike, the main question for you before was if $[Set,T]_{geom}$ is equivalent to a set when $T$ is a topos of sheaves over a topological space (the assertion is below in fact in a form of bijection which spurred the question). What or where is the proof ? (elephant?) P.S. 2 But I was asking all the time actually a different question, the domain is not Set but any of the members of a subcategory/family of local models. But I do not know good examples of such families in topoi (which have also decategorifications). P.S.3 Here is however an attempt for an example in Topoi but I am not sure if it is. 
Take the category Top of topological spaces. Then topological stacks are 1-stacks with some representability conditions; in particular they have an atlas by a usual topological space. Now I do not know, but I suppose that the category of sheaves on a topological stack is still an elementary topos, though maybe not a Grothendieck topos. Is it true that if I take $[Sh(X), Sh(Y)]_{geom}$, where $X$ is any topological space and $Y$ is a fixed topological stack, then this is equivalent to a set? P.S. 4 Here is a further intuition. While the points of topoi are geometric morphisms from Set, and Set is good enough to probe topological spaces, because they are made out of points, could there not be a more general statement: if one takes generalized S-points for S in some sub-2-category MODELS of Topoi which is equivalent to some 1-category, and if we look at topoi which are sheaves on some class of STACKS on MODELS possessing usual atlas conditions (I want this in the sense of gluing localizations, but to start with maybe gluing in a Grothendieck topology is a good starter), are the S-points for all S in MODELS enough in the 1-categorical sense? I said earlier topological stacks, now "possessing usual atlas conditions" and not just 1-stacks in the usual sense, because I need an atlas to make sense of the category of sheaves on the stack.
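For reference, the classical statement that the opening exchange leans on is, as I understand it (see Mac Lane–Moerdijk or the Elephant for the precise form): for a sober topological space $X$, the category of points of $Sh(X)$, that is $\mathrm{Geom}(\mathbf{Set}, Sh(X))$, is equivalent to the poset of points of $X$ under the specialization order. In particular it is a thin category rather than a discrete one in general, and its isomorphism classes of objects are in bijection with the points of $X$ — which is the sense in which "points of $Sh(X)$ are the points of $X$".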
{"url":"http://ncatlab.org/zoranskoda/show/discussion+from+point+of+a+topos","timestamp":"2014-04-17T10:16:28Z","content_type":null,"content_length":"30781","record_id":"<urn:uuid:59c38309-12d7-46e0-8d56-4a4e80d098b5>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
A certain swimming pool of 10000 liters has three pipes; Author Message A certain swimming pool of 10000 liters has three pipes; [#permalink] 06 Aug 2003, 09:25 A certain swimming pool of 10000 liters has three pipes; pipe A to fill and pipes B and C to pour out. A fills the pool at 1000 liters per hour; B pours out at 200 lph; C pours Joined: 03 Feb 2003 out at 100 lph. All the pipes start to work simultaneously. Unfortunately, due to some problem, A stops for 5 minutes after each hour of working. B and C stop for 5 minutes after each half an hour of working. However, all the pipes work until the pool is full. How long does it take them to fill the pool completely? Posts: 1619 Followers: 5 stolyar wrote: A certain swimming pool of 10000 liters has three pipes; pipe A to fill and pipes B and C to pour out. A fills the pool at 1000 liters per hour; B pours out at 200 lph; C pours out at 100 lph. All the pipes start to work simultaneously. Unfortunately, due to some problem, A stops for 5 minutes after each hour of working. B and C stop for 5 minutes Konstantin Lynov after each half an hour of working. However, all the pipes work until the pool is full. How long does it take them to fill the pool completely? Manager A admit I used a calculator because this challenge is not a 2,(027) min question. Joined: 24 Jun 2003 Answers are: Posts: 94 7h54min with faulty A,B,C, vaults and Location: Moscow 7h42min if A,B,C work constantly. Followers: 1 _________________ Konstantin Lynov The faulty valts answer is calculated on the conditions of a 5 min per hour break for all the vaults, where as the original Q states that B&C rest after every 30 min. Therefore, it will take some 10-15 min faster to fill the pool. My fault. Joined: 24 Jun 2003 Posts: 94 Location: Moscow Followers: 1 Manager Stolyar, my answer comes to 15.01 hours Joined: 22 Jul 2003 Posts: 63 Location: CA Followers: 1 Sorry guys, it's late in the evening here and I needed to eat some water melon to reconsider my answer. The exam date is approaching and I often catch myself thinking about it even in the most peculiar places. Unswer is 15 hours. Konstantin Lynov Logic is as follows: Manager A pours in 1000*55/60 liters an hour (l/h) Joined: 24 Jun 2003 B pours out 200*50/60 l/h Posts: 94 C pours out 100*50/60 l/h Location: Moscow To get a total amount of water that is added into the swimming pool every hour, substract B&C from A. It's 666,(6)liters. Followers: 1 Time=work/rate => 10000/666(6)=15 hours. We see that pipe A fills in 1000 gal in 65 minutes (1 hr filling and 5 min. rest). We see that B & C drains out 300 gal. in 70 minutes (1/2 hr draining, 5 min rest, 1/2 hr draining and another 5 min. rest) prakuda2000 So, I suppose the total time taken is 910x. Why? Because, the LCM of 65 and 70 is 910. I used a factor x with it because I do not know yet whether 910 is the time taken in minutes, YET. Now, A fills in 1000/65*910x gallons in 910x minutes Joined: 22 Jul 2003 or, A fills in 14000x gallons Posts: 63 B&C drains 300/70*910x gallons in 910x minutes or, B&C drains 3900x gallons Location: CA So, at the end of the total time (910x minutes) the pool will be filled. Followers: 1 => 14000x - 3900x = 10000 => x = 10000/10100 = 100/101 So, total time = 910x = 910 * 100/101 = 15.01 hours AkamaiBrah stolyar wrote: GMAT Instructor A certain swimming pool of 10000 liters has three pipes; pipe A to fill and pipes B and C to pour out. A fills the pool at 1000 liters per hour; B pours out at 200 lph; C pours out at 100 lph. 
All the pipes start to work simultaneously. Unfortunately, due to some problem, A stops for 5 minutes after each hour of working. B and C stop for 5 minutes Joined: 07 Jul 2003 after each half an hour of working. However, all the pipes work until the pool is full. How long does it take them to fill the pool completely? Posts: 771 Becareful the way you word/interpret the problems. The problem actually states that A stops working AFTER working for one hour, and pipes B and C stop working AFTER working for 30 minutes. This is entirely different than if A stops working once every hour (i.e., after working for 55 minutes) and B and C stop working once every half hour (i.e., after Location: New York NY working 25 minutes), which is the problem that everyone solved. Stolyar, what is your intention here? Schools: Haas, MFE; Anderson, MBA; USC, _________________ Followers: 9 Kudos [?]: 21 [0], Former Senior Instructor, Manhattan GMAT and VeritasPrep given: 0 Vice President, Midtown NYC Investment Bank, Structured Finance IT MFE, Haas School of Business, UC Berkeley, Class of 2005 MBA, Anderson School of Management, UCLA, Class of 1993 Manager AkamaiBrah, I don't agree. The problem states "A stops for 5 minutes after each hour of working. B and C stop for 5 minutes after each half an hour of working." Joined: 22 Jul 2003 If "A stops working AFTER working for one hour, and pipes B and C stop working AFTER working for 30 minutes" then the pool will never be filled. Posts: 63 Location: CA Followers: 1 prakuda2000 wrote: AkamaiBrah I don't agree. The problem states "A stops for 5 minutes after each hour of working. B and C stop for 5 minutes after each half an hour of working." GMAT Instructor If "A stops working AFTER working for one hour, and pipes B and C stop working AFTER working for 30 minutes" then the pool will never be filled. Joined: 07 Jul 2003 I'm sure he means 5 mnutes after Posts: 771 each Location: New York NY hour and half hour respectively, but your observation is besides my point. My point is that 5 minutes "after" each hour and half-hour is still different from 5 minutes "at the 10024 end" of each hour and half-hour respectively. The solution of 15 hours is the solution to the latter interpretation, which is not what Stolyar asked. The answer to the first interpretation is a few minutes less. Schools: Haas, MFE; Anderson, MBA; USC, The answers are not different by much, BUT THEY ARE STILL DIFFERENT. Followers: 9 Kudos [?]: 21 [0], given: 0 AkamaiBrah Former Senior Instructor, Manhattan GMAT and VeritasPrep Vice President, Midtown NYC Investment Bank, Structured Finance IT MFE, Haas School of Business, UC Berkeley, Class of 2005 MBA, Anderson School of Management, UCLA, Class of 1993 great problem... the way i modeled this in order to be able to do this in about two minutes was to say that pipe A filled at a rate of 1000 * 11/12 lph and i mentally combined pipes B and C into Joined: 10 Jun 2003 one pipe emptying at 300 * 5/6 lph. Posts: 213 Then I set up the equation: Location: Maryland X (11000/12 - 1500/6) = 10000 Followers: 2 By hand, X is about 15, a little more. Hope that is accurate enough for the GMAT!!! Kudos [?]: 3 [0], given: 0 Manager AkamaiBrah, Joined: 22 Jul 2003 Absolutely. You are correct. I see what you mean. Currently trying to formulate a way to see if I can solve that. Thanks Posts: 63 Location: CA Followers: 1 AkamaiBrah wrote: stolyar wrote: A certain swimming pool of 10000 liters has three pipes; pipe A to fill and pipes B and C to pour out. 
A fills the pool at 1000 liters per hour; B pours out at 200 lph; C pours out at 100 lph. All the pipes start to work simultaneously. Unfortunately, due to some problem, A stops for 5 minutes after each hour of working. B and C stop for 5 minutes after each half an hour of working. However, all the pipes work until the pool is full. How long does it take them to fill the pool completely?
Be careful the way you word/interpret the problems. The problem actually states that A stops working AFTER working for one hour, and pipes B and C stop working AFTER working for 30 minutes. This is entirely different than if A stops working once every hour (i.e., after working for 55 minutes) and B and C stop working once every half hour (i.e., after working 25 minutes), which is the problem that everyone solved. Stolyar, what is your intention here?
Sure you are correct! A works for a COMPLETE HOUR (60 minutes) then stops for 5 minutes. B and C work for HALF AN HOUR (30 min) and then stop for 5 min. And so on and so forth, until the pool is full. I invented the question just yesterday, so figures can be raw. A hint: re-consider rates. My solution is to go public soon.
My solution:
1. Unite the pouring pipes into one that works at the rate of 300 lph.
2. Translate nominal rates into actual ones. Since we deal with minutes, it is reasonable to get rid of hours:
the nominal R(fill) = 1000/60 lpm
the nominal R(pour) = 300/60 lpm
the actual R(fill) = 1000/65 = 200/13 lpm (because of a 5-minute stop after each 60 min)
the actual R(pour) = 150/35 = 30/7 lpm (because of a 5-minute stop after each 30 min)
10000 = (200/13 - 30/7)*X in minutes
X = 910000/1010 = 901 minutes approximately, or 15 hours and 1 minute.
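A quick way to check that figure is to simulate the intended schedule minute by minute. The sketch below follows stolyar's interpretation (A runs a full 60 minutes then rests 5; B and C run 30 then rest 5) and uses 1-minute steps, so the exact final minute is approximate:

POOL = 10000.0
fill_per_min = 1000.0 / 60.0    # pipe A while running
drain_per_min = 300.0 / 60.0    # pipes B and C combined while running

water = 0.0
minute = 0
while water < POOL:
    if minute % 65 < 60:        # A: 65-minute cycle = 60 on, 5 off
        water += fill_per_min
    if minute % 35 < 30:        # B and C: 35-minute cycle = 30 on, 5 off
        water -= drain_per_min
    minute += 1

print(minute, "minutes, i.e. about", minute // 60, "hours and", minute % 60, "minutes")
# This lands close to 15 hours, consistent with the thread's estimates.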
{"url":"http://gmatclub.com/forum/a-certain-swimming-pool-of-10000-liters-has-three-pipes-1887.html","timestamp":"2014-04-16T20:16:32Z","content_type":null,"content_length":"181676","record_id":"<urn:uuid:a9377d4f-d7d9-420c-8f44-014b126b7fc7>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
real life sample problems involving polynomial function Author Message Cbeedie Posted: Friday 16th of Jul 18:31 Hi, I need some immediate help on real life sample problems involving polynomial function. I've searched through various websites for topics like roots and point-slope but none could help me solve my doubt relating to real life sample problems involving polynomial function. I have an exam in a couple now and if I don't start working on my problem then I might just fail my exam. I've even tried calling a few of my peers, but they seem to be struggling as well. So guys, please help me. IlbendF Posted: Saturday 17th of Jul 08:24 Can you be a bit more detailed about real life sample problems involving polynomial function? I may perhaps be able to help you if I knew a few more details. A good quality software can help you solve your problem instead of paying for an algebra tutor. I have tried many algebra programs and guarantee that Algebra Buster is the best program that I have stumbled onto. This Algebra Buster will solve any algebra problem that you enter and it also clarifies every step of the solution – you can exactly write it down as your homework assignment. However, this Algebra Buster should also help you to learn algebra rather than only use it to copy answers. 3Di Posted: Monday 19th of Jul 09:21 It would really be nice if you could let us know about a tool that can provide both. If you could get us a home tutoring software that would give a step-by-step solution to our problem, it would really be nice. Please let us know the genuine links from where we can get the software. agxofy Posted: Tuesday 20th of Jul 08:21 I would like to give it a try. Where can I find the software? TihBoasten Posted: Wednesday 21st of Jul 13:52 A truly fine piece of algebra software is Algebra Buster. Even I faced similar difficulties while solving the quadratic formula, adding matrices and rational equations. Just by typing in the problem from the workbook and clicking on Solve – and a step by step solution to my algebra homework would be ready. I have used it through several algebra classes - Algebra 1, Remedial Algebra and College Algebra. I highly recommend the program. Noddzj99 Posted: Thursday 22nd of Jul 11:06 This one is actually quite unique. I am recommending it only after trying it myself. You can find the details about the software at http://www.algebra-online.com/algebra-testimonials.htm.
{"url":"http://www.algebra-online.com/algebra-homework-soft/multiplying-fractions/real-life-sample--problems.html","timestamp":"2014-04-21T12:08:27Z","content_type":null,"content_length":"30124","record_id":"<urn:uuid:2f272a9b-a827-4271-96b7-50fa68235988>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
Kirkland, WA Precalculus Tutor Find a Kirkland, WA Precalculus Tutor ...Or, if you came to this topic looking for help putting together a webpage with a database backend, we can work on that too. I'm a stickler for grammar and proper use of English. If you need help finessing your writing, or editing a thesis, or proofreading, I'm happy to help. 61 Subjects: including precalculus, English, writing, calculus ...If you don't understand something or can't solve a problem, I can simplify it until you get it and solve it all by yourself. I like to help high school students prepare SAT math, making sure that they understand everything and are well-prepared before the test. With my years of tutoring experience, I've helped many students improve their math scores. 13 Subjects: including precalculus, geometry, Chinese, algebra 1 ...The 6 years I taught in the classroom, I taught algebra every year and find it my most enjoyable subject to tutor. I have a master's in Teaching and teaching certification in 4th through 12th math and English. I formerly taught Algebra 2 in high schools here and abroad, and my favorite subject to teach is algebra. 39 Subjects: including precalculus, reading, writing, algebra 1 ...At the end of it there was a multiplication problem. I said 'take the first number. Draw that many circles. 17 Subjects: including precalculus, calculus, statistics, geometry ...I had some experience tutoring college level stats while working at Chaminade University as well. Since I started tutoring with WyzAnt, I have had many students taking stats at various colleges. You may read some of their responses. 20 Subjects: including precalculus, reading, calculus, geometry Related Kirkland, WA Tutors Kirkland, WA Accounting Tutors Kirkland, WA ACT Tutors Kirkland, WA Algebra Tutors Kirkland, WA Algebra 2 Tutors Kirkland, WA Calculus Tutors Kirkland, WA Geometry Tutors Kirkland, WA Math Tutors Kirkland, WA Prealgebra Tutors Kirkland, WA Precalculus Tutors Kirkland, WA SAT Tutors Kirkland, WA SAT Math Tutors Kirkland, WA Science Tutors Kirkland, WA Statistics Tutors Kirkland, WA Trigonometry Tutors
{"url":"http://www.purplemath.com/kirkland_wa_precalculus_tutors.php","timestamp":"2014-04-19T04:47:52Z","content_type":null,"content_length":"23984","record_id":"<urn:uuid:e6510f60-cc8c-4557-8458-0b890d3e6392>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about Ratios on Let's Play Math! [Feature photo above by Baluart.net.] Here’s a blast from the Let’s Play Math! blog archives: Seven years ago, one of my math club students was preparing for a speech contest. His mother emailed me to check some figures, which led to a couple of blog posts on solving proportion problems. I hope you enjoy… Putting Bill Gates in Proportion A friend gave me permission to turn our email discussion into an article… Can you help us figure out how to figure out this problem? I think we have all the information we need, but I’m not sure: The average household income in the United States is $60,000/year. And a man’s annual income is $56 billion. Is there a way to figure out what this man’s value of $1mil is, compared to the person who earns $60,000/year? In other words, I would like to say — $1,000,000 to us is like 10 cents to Bill Gates. Let the Reader Beware When I looked up Bill Gates at Wikipedia, I found out that $56 billion is his net worth, not his income. His salary is $966,667. Even assuming he has significant investment income, as he surely does, that is still a difference of several orders of magnitude. But I didn’t research the details before answering my email — and besides, it is a lot more fun to play with the really big numbers. Therefore, the following discussion will assume my friend’s data are accurate… [Click here to go read Putting Bill Gates in Proportion.] Bill Gates Proportions II Another look at the Bill Gates proportion… Even though I couldn’t find any data on his real income, I did discover that the median American family’s net worth was $93,100 in 2004 (most of that is home equity) and that the figure has gone up a bit since then. This gives me another chance to play around with proportions. So I wrote a sample problem for my Advanced Math Monsters workshop at the APACHE homeschool conference: The median American family has a net worth of about $100 thousand. Bill Gates has a net worth of $56 billion. If Average Jane Homeschooler spends $100 in the vendor hall, what would be the equivalent expense for Gates? Cool Fibonacci Conversion Trick Maria explains how to use the Fibonacci Numbers to convert distance measurements between miles and kilometers: P.S.: Congratulations to Maria for her Math Mammoth program being featured in the latest edition of Cathy Duffy’s 100 Top Picks for Homeschool Curriculum! And Home School Buyer’s Co-op has a sale on Cathy Duffy’s book through the end of July. Get all our new math tips and games: Subscribe in a reader, or get updates by Email. PUFM 1.5 Multiplication, Part 1 Photo by Song_sing via flickr. In this Homeschooling Math with Profound Understanding (PUFM) Series, we are studying Elementary Mathematics for Teachers and applying its lessons to home education. My apologies to those of you who dislike conflict. This week’s topic inevitably draws us into a simmering Internet controversy. Thinking my way through such disputes helps me to grow as a teacher, to re-think on a deeper level things I thought I understood. This is why I loved Liping Ma’s book when I first read it, and it’s why I thoroughly enjoyed Terezina Nunes and Peter Bryant’s book Children Doing Mathematics. Multiplication of whole numbers is defined as repeated addition. — Thomas H. Parker & Scott J. 
Baldridge Elementary Mathematics for Teachers Multiplication simply is not repeated addition, and telling young pupils it is inevitably leads to problems when they subsequently learn that it is not… Adding numbers tells you how many things (or parts of things) you have when you combine collections. Multiplication is useful if you want to know the result of scaling some quantity. — Keith Devlin It Ain’t No Repeated Addition Radiation Sanity Chart With news reports of radiation from Japan being found from California to Massachusetts — and now even in milk — math teachers need to help our students put it all in perspective. xkcd to the rescue! Pajamas Media offers a brief history of radiation, plus an analysis of our exposure in Banana Equivalent Doses: And the EPA offers a FAQ: [T]he levels being seen now are 25 times below the level that would be of concern even for infants, pregnant women or breastfeeding women, who are the most sensitive to radiation… At this time, there is no need to take extra precautions… Iodine-131 disappears relatively quickly in the environment. — Centers for Disease Control and Prevention (CDC) pages 4-5 of EPA FAQ [Hat tip: Why Homeschool.] Don’t miss any of “Let’s Play Math!”: Subscribe in a reader, or get updates by Email. Probability Issue: Hints and Answers Remember the Math Adventurer’s Rule: Figure it out for yourself! Whenever I give a problem in an Alexandria Jones story, I will try to post the answer soon afterward. But don’t peek! If I tell you the answer, you miss out on the fun of solving the puzzle. So if you haven’t worked these problems yet, go back to the original posts. If you’re stuck, read the hints. Then go back and try again. Figure them out for yourself — and then check the answers just to prove that you got them right. This post offers hints and answers to puzzles from these blog posts: Rate Puzzle: How Fast Does She Read? [Photo by Arwen Abendstern.] If a girl and a half can read a book and a half in a day and a half, then how many books can one girl read in the month of June? Kitten reads voraciously, but she decided to skip our library’s summer reading program this year. The Border’s Double-Dog Dare Program was a lot less hassle and had a better prize: a free book! Of course, it didn’t take her all summer to finish 10 books. How fast does Kitten read? Hobbit Math: Elementary Problem Solving 5th Grade [Photo by OliBac. Visit OliBac's photostream for more.] The elementary grades 1-4 laid the foundations, the basics of arithmetic: addition, subtraction, multiplication, division, and fractions. In grade 5, students are expected to master most aspects of fraction math and begin working with the rest of the Math Monsters: decimals, ratios, and percents (all of which are specialized fractions). Word problems grow ever more complex as well, and learning to explain (justify) multi-step solutions becomes a first step toward writing proofs. This installment of my elementary problem solving series is based on the Singapore Primary Mathematics, Level 5A. For your reading pleasure, I have translated the problems into the world of J.R.R. Tolkien’s classic, The Hobbit. [Note: No decimals or percents here. Those are in 5B, which will need an article of its own. But first I need to pick a book. I'm thinking maybe Naya Nuki...] Printable Worksheet In case you’d like to try your hand at the problems before reading my solutions, I’ve put together a printable worksheet: Can You Read the Flu Map? [Map as of early afternoon on May 4th, found at the NY Times.] 
Compare the dark circles (confirmed cases) for Mexico, New York and Nova Scotia in the top part, or Mexico and the U.S. in the lower part of the map. It’s easy to see which has more cases of the flu — but how many more? Which would you guess is the closest estimate: Mexico : New York : Nova Scotia • = 7:3:2 or 20:5:3 or 16:2:1? U.S. : Mexico • = 1:2 or 2:5 or 3:7? Review: Math Doesn’t Suck We’ve all heard the saying, Don’t judge a book by its cover, but I did it anyway. Well, not by the cover, exactly — I also flipped through the table of contents and read the short introduction. And I said to myself, “I don’t talk like this. I don’t let my kids talk like this. Why should I want to read a book that talks like this? I’ll leave it to the public school kids, who are surely used to Okay, I admit it: I’m a bit of a prude. And it caused me to miss out on a good book. But now Danica McKellar‘s second book is out, and the first one has been released in paperback. A friendly PR lady emailed to offer me a couple of review copies, so I gave Math Doesn’t Suck a second chance. I’m so glad I did. Christmas in July Math Problem [Photo by Reenie-Just Reenie.] In honor of my Google searchers, to demonstrate the power of bar diagrams to model ratio problems, and just because math is fun… Eccentric Aunt Ethel leaves her Christmas tree up year ’round, but she changes the decorations for each passing season. This July, Ethel wanted a patriotic theme of flowers, ribbons, and colored When she stretched out her three light strings (100 lights each) to check the bulbs, she discovered that several were broken or burned-out. Of the lights that still worked, the ratio of red bulbs to white ones was 7:3. She had half as many good blue bulbs as red ones. But overall, she had to throw away one out of every 10 bulbs. How many of each color light bulb did Ethel have? Before reading further, pull out some scratch paper. How would you solve this problem? How would you teach it to a middle school student? An Ancient Mathematical Crisis [When Alexandria Jones and her family visited an excavation in southern Italy, they learned several tidbits about the ancient school of mathematics and philosophy founded by Pythagoras. Here is Alex's favorite story.] It hit the Pythagorean Brotherhood like an earthquake, a crisis of faith which shook the foundations of their universe. Some say Pythagoras himself made the dread discovery, others blame Hippasus of Something certainly did happen with Hippasus. The Brotherhood sent him into exile for insubordination, or for breaking the rule of secrecy — or was it for proving the unthinkable? According to legend, Hippasus drowned at sea, but was it a mere shipwreck or the wrath of the gods? Some say the irate Pythagoreans threw him overboard… The Golden Christmas Tree Last time, Alexandria Jones and her family were on their way to Uncle William’s tree farm to find the perfect Christmas tree, and Dr. Jones taught us about the Golden Section: $The \; Golden \; Section \; ratio$ $A \; is \; to \; B \; as \; \left(A + B \right) \; is \; to \; A, \; or . . .$ $\frac{A}{B} = \frac{A + B}{A} = \: ?$ I gave you three algebra puzzles to solve. Did you try them? • What is the exact value of the Golden Section ratio? • If a 7-foot tree will fit in the Jones family’s living room, allowing for the tree stand and for a star on top, how wide will the tree be? • Approximately how much surface area will Alex and Leon have to fill with lights and ornaments? 
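For anyone who wants to check the first of those puzzles afterward, here is one way the value can be pinned down (a sketch, starting from the definition quoted above): writing $\varphi = \frac{A}{B}$, the proportion $\frac{A}{B} = \frac{A + B}{A}$ becomes $\varphi = 1 + \frac{1}{\varphi}$, so $\varphi^2 - \varphi - 1 = 0$, and the positive root is $\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618$.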
Math Adventurer’s Rule: Figure It Out for yourself Whenever I give a problem in an Alexandria Jones story, I will try to post the answer soon afterward. But don’t peek! If I tell you the answer, you miss out on the fun of solving the puzzle. So if you have not worked these problems yet, go back to the original post. Figure them out for yourself — and then check the answers just to prove that you got them right. A-Hunting They Will Go Alexandria Jones and her family piled into the car for a drive in the country. This year, they were determined to find an absolutely perfect Christmas tree at Uncle William Jones’s tree farm. “I want the tallest tree in Uncle Will’s field,” Alex said. “Hold it,” said her mother. “I refuse to cut a hole in the roof.” “But, Mom!” Leon whined. “The Peterkin Papers…” “Too bad. Our ceiling will stay a comfortable 8 feet high.” Reading to Learn Math [Photo by Betsssssy.] Do you ever take your kids’ math tests? It helps me remember what it is like to be a student. I push myself to work quickly, trying to finish in about 1/3 the allotted time, to mimic the pressure students feel. And whenever I do this, I find myself prone to the same stupid mistakes that students make. Even teachers are human. In this case, it was a multi-step word problem, a barrage of information to stumble through. In the middle of it all sat this statement: …and there were 3/4 as many dragons as gryphons… My eyes saw the words, but my mind heard it this way: …and 3/4 of them were dragons… What do you think — did I get the answer right? Of course not! Every little word in a math problem is important, and misreading even the smallest word can lead a student astray. My mental glitch encompassed several words, and my final tally of mythological creatures was correspondingly screwy. But here is the more important question: Can you explain the difference between these two statements? Trouble with Percents Can your students solve this problem? There are 20% more girls than boys in the senior class. What percent of the seniors are girls? This is from a discussion of the semantics of percent problems and why students have trouble with them, going on over at MathNotations. (Follow-up post here.) Our pre-algebra class just finished a chapter on percents, so I thought Chickenfoot might have a chance at this one. Nope! He leapt without thought to the conclusion that 60% of the class must be girls. After I explained the significance of the word “than”, he solved the follow-up problem just fine. Don’t miss any of “Let’s Play Math!”: Subscribe in a reader, or get updates by Email. Have more fun on Let’s Play Math! blog: How Old Are You, in Nanoseconds? Conversion factors are special fractions that contain problem-solving information. Why are they called conversion factors? “Conversion” means change, and conversion factors help you change the numbers and units in your problem. “Factors” are things you multiply with. So to use a conversion factor, you will multiply it by something. For instance, if I am driving an average of 60 mph on the highway, I can use that rate as a conversion factor. I may use the fraction $\frac{60 \: miles}{1 \: hour}$, or I may flip it over to make $\ frac{1 \: hour}{60 \: miles}$. It all depends on what problem I want to solve. After driving two hours, I have traveled: $\left(2 \: hours \right) \times \frac{60 \: miles}{1 \: hour} = 120$miles so far. 
But if I am planning to go 240 more miles, and I need to know when I will arrive: $\left(240 \: miles \right) \times \frac{1 \: hour}{60 \: miles} = 4$hours to go. Any rate can be used as a conversion factor. You can recognize them by their form: this per that. Miles per hour, dollars per gallon, cm per meter, and many, many more. Of course, you will need to use the rate that is relevant to the problem you are trying to solve. If I were trying to figure out how far a tank of gas would take me, it wouldn’t be any help to know that an M1A1 Abrams tank gets 1/3 mile per gallon. I won’t be driving one of those. Using Conversion Factors Is Like Multiplying by One If I am driving 65 mph on the interstate highway, then driving for one hour is exactly the same as driving 65 miles, and: $\frac{65 \: miles}{1 \: hour} = the \: same \: thing \: divided \: by \: itself = 1$ This may be easier to see if you think of kitchen measurements. Two cups of sour cream are exactly the same as one pint of sour cream, so: $\frac{2 \: cups}{1 \: pint} = \left(2 \: cups \right) \div \left(1 \:pint \right) = 1$ If I want to find out how many cups are in 3 pints of sour cream, I can multiply by the conversion factor: $\left(3 \: pints \right) \times \frac{2 \: cups}{1 \: pint} = 6 \: cups$ Multiplying by one does not change the original number. In the same way, multiplying by a conversion factor does not change the original amount of stuff. It only changes the units that you measure the stuff in. When I multiplied 3 pints times the conversion factor, I did not change how much sour cream I had, only the way I was measuring it. Conversion Factors Can Always Be Flipped Over If there are $\frac{60 \: minutes}{1 \: hour}$, then there must also be $\frac{1 \: hour}{60 \: minutes}$. If I draw house plans at a scale of $\frac{4 \: feet}{1 \: inch}$, that is the same as saying $\frac{1 \: inch}{4 \: feet}$. If there are $\frac{2\: cups}{1 \: pint}$, then there is $\frac{1\: pint}{2 \: cups} = 0.5 \: \frac{pints}{cup}$. Or if an airplane is burning fuel at $\frac{8\: gallons}{1 \: hour}$, then the pilot has only 1/8 hour left to fly for every gallon left in his tank. This is true for all conversion factors, and it is an important part of what makes them so useful in solving problems. You can choose whichever form of the conversion factor seems most helpful in the problem at hand. How can you know which form will help you solve the problem? Look at the units you have, and think about the units you need to end up with. In the sour cream measurement above, I started with pints and I wanted to end up with cups. That meant I needed a conversion factor with cups on top (so I would end up with that unit) and pints on bottom (to cancel out). You Can String Conversion Factors Together String several conversion factors together to solve more complicated problems. Just as numbers cancel out when the same number is on the top and bottom of a fraction (2/2 = 2 ÷ 2 = 1), so do units cancel out if you have the same unit in the numerator and denominator. In the following example, quarts/quarts = 1. How many cups of milk are there in a gallon jug? $\left(1\: gallon \right) \times \frac{4\: quarts}{1\: gallon} \times \frac{2\: pints}{1\: quart} \times \frac{2\: cups}{1\: pint} = 16\: cups$ As you write out your string of factors, you will want to draw a line through each unit as it cancels out, and then whatever is left will be the units of your answer. Notice that only the units cancel — not the numbers. 
Even after I canceled out the quarts, the 4 was still part of my calculation. Let’s Try One More The true power of conversion factors is their ability to change one piece of information into something that at first glance seems to be unrelated to the number with which you started. Suppose I drove for 45 minutes at 55 mph in a pickup truck that gets 18 miles to the gallon, and I wanted to know how much gas I used. To find out, I start with a plain number that I know (in this case, the 45 miles) and use conversion factors to cancel out units until I get the units I want for my answer (gallons of gas). How can I change minutes into gallons? I need a string of conversion $\left(45\: min. \right) \times \frac{1\: hour}{60\: min.} \times \frac{55\: miles}{1\: hour} \times \frac{1\: gallon}{18\: miles} = 2.3\: gallons$ How Old Are You, Anyway? If you want to find your exact age in nanoseconds, you need to know the exact moment at which you were born. But for a rough estimate, just knowing your birthday will do. First, find out how many days you have lived: $Days\: I\:have\: lived = \left(my\: age \right) \times \frac{365\: days}{year}$ $+ \left(number\: of\: leap\: years \right) \times \frac{1\: extra\: day}{leap\: year}$ $+ \left(days\: since\: my\: last\: birthday,\: inclusive \right)$ Once you know how many days you have lived, you can use conversion factors to find out how many nanoseconds that would be. You know how many hours are in a day, minutes in an hour, and seconds in a minute. And just in case you weren’t quite sure: $One\: nanosecond = \frac{1}{1,000,000,000} \: of\: a\: second$ Have fun playing around with conversion factors. You will be surprised how many problems these mathematical wonders can solve. [Note: This article is adapted from my out-of-print book, Master the Math Monsters.] Don’t miss any of “Let’s Play Math!”: Subscribe in a reader, or get updates by Email. Have more fun on Let’s Play Math! blog: Historical Tidbits: The Pharaoh’s Treasure [Read the story of the pharaoh's treasure here: Part 1, Part 2, and Part 3.] I confess: I lied — or rather, I helped to propagate a legend. Scholars tell us that the Egyptian rope stretchers did not use a 3-4-5 triangle for right-angled corners. They say it is a myth, like the corny old story of George Washington and the cherry tree, which bounces from one storyteller to the next — as I got it from a book I bought as a library discard. None of the Egyptian papyri that have been found show any indication that the Egyptians knew of the Pythagorean Theorem, one of the great theorems of mathematics, which is the basis for the 3-4-5 triangle. Unless a real archaeologist finds a rope like Alexandria Jones discovered in my story, or a papyrus describing how to use one, we must assume the 3-4-5 rope triangle is an unfounded rumor. Bill Gates Proportions II [Feature photo above by Remy Steinegger via Wikimedia Commons (CC BY 2.0).] Another look at the Bill Gates proportion… Even though I couldn’t find any data on his real income, I did discover that the median American family’s net worth was $93,100 in 2004 (most of that is home equity) and that the figure has gone up a bit since then. This gives me another chance to play around with proportions. So I wrote a sample problem for my Advanced Math Monsters workshop at the APACHE homeschool conference: The median American family has a net worth of about $100 thousand. Bill Gates has a net worth of $56 billion. 
If Average Jane Homeschooler spends $100 in the vendor hall, what would be the equivalent expense for Gates? Putting Bill Gates in Proportion [Feature photo above by Baluart.net.] A friend gave me permission to turn our email discussion into an article… Can you help us figure out how to figure out this problem? I think we have all the information we need, but I’m not sure: The average household income in the United States is $60,000/year. And a man’s annual income is $56 billion. Is there a way to figure out what this man’s value of $1mil is, compared to the person who earns $60,000/year? In other words, I would like to say — $1,000,000 to us is like 10 cents to Bill Gates. Percents: The Search for 100% [Rescued from my old blog.] Percents are one of the math monsters, the toughest topics of elementary and junior high school arithmetic. The most important step in solving any percent problem is to figure out what quantity is being treated as the basis, the whole thing that is 100%. The whole is whatever quantity to which the other things in the problem are being compared. Percents: Key Concepts and Connections [Rescued from my old blog.] Paraphrased from a homeschool math discussion forum: “I am really struggling with percents right now, and feel I am in way over my head!” Percents are one of the math monsters, the toughest topics of elementary and junior high school arithmetic. Here are a few tips to help you understand and teach percents.
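Picking up the workshop problem and the email question quoted in the posts above, the arithmetic is a single proportion each time (using only the numbers given there). For the workshop problem: $\frac{x}{\$56 \; billion} = \frac{\$100}{\$100 \; thousand}$, so $x = \$56 \; million$ — Jane's $100 in the vendor hall is, as a fraction of net worth, like Gates spending about $56 million. For the email question, taken on the friend's (inaccurate) figures: $\frac{y}{\$60,000} = \frac{\$1,000,000}{\$56 \; billion}$, so $y \approx \$1.07$ — a million dollars to that income is closer to a dollar than to ten cents on the household scale.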
{"url":"http://letsplaymath.net/tag/ratios/","timestamp":"2014-04-18T18:10:45Z","content_type":null,"content_length":"134973","record_id":"<urn:uuid:8fa47e92-6913-4b72-ab52-56eb714ae49f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
Web Resources FunBrain - Math Baseball For this interactive game, FUNBRAIN gives the student a math problem. The student enters the answer to the problem and hits the "Swing" button. If the answer is correct, the student will get a hit. FUNBRAIN will decide if the hit is a single, double, triple, or home run based on the difficulty of the problem. If the answer is wrong, the student will get an out. The game is over after three outs. One or 2 players may play and problems are available for addition, subtraction, multiplication, division, or all of the above. Divisibility Rules This is an interactive game requiring students to choose which integers are evenly divisible. Learning Activities FunBrain - Math Baseball For this interactive game, FUNBRAIN gives the student a math problem. The student enters the answer to the problem and hits the "Swing" button. If the answer is correct, the student will get a hit. FUNBRAIN will decide if the hit is a single, double, triple, or home run based on the difficulty of the problem. If the answer is wrong, the student will get an out. The game is over after three outs. One or 2 players may play and problems are available for addition, subtraction, multiplication, division, or all of the above.
{"url":"http://alex.state.al.us/weblinks_category.php?stdID=53724","timestamp":"2014-04-17T12:43:08Z","content_type":null,"content_length":"27370","record_id":"<urn:uuid:3add69cd-bd4c-40a4-bf0b-b831453ad204>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help
Posted by mathstudent on Thursday, February 21, 2008 at 5:25pm.
sigma is the standard deviation of a population of size N. S is the standard deviation of a sample of size n from within the population. What is the estimated value of S^2? If the population was infinitely large (size N = infinity), what would the estimated value of S^2 be?
• statistics - mathstudent, Thursday, February 21, 2008 at 5:25pm
I meant "expected" value, not "estimated" value. Sorry about that.
• statistics - Damon, Thursday, February 21, 2008 at 5:38pm
As far as I know (statistics is not really my specialty), the sigma of the sample depends only on the sample size, not the size of the population the sample is chosen from. Therefore in my ignorance I would say:
S^2 sample = sigma^2 of population / n
sigma sample = sigma population/sqrt(n)
• statistics - mathstudent, Thursday, February 21, 2008 at 5:46pm
Damon, that can't be right. As n approaches infinity, S^2 should approach sigma^2. Also, the wikipedia entry does use both sample size and population size in their formula, which is one reason that I wanted to see it derived.
• statistics - Damon, Thursday, February 21, 2008 at 7:06pm
You are right. I think I have that backwards and it is too simple anyway. I hope a statistics expert comes by here.
• statistics - Count Iblis, Thursday, February 21, 2008 at 7:36pm
You can formally write down everything in terms of the probability distribution function. Let's say the population consists of N elements and each element can be in some state denoted by a continuous variable x distributed according to the same probability density p(x). If all the variables are independent, the joint probability distribution factorizes:
p(x1, x2, ...,xN) = p(x1)p(x2)...p(xN)
If you measure S^2, you take n of the variables xi, say, x1, x2, ...xn and compute the standard deviation in the usual way:
S^2 = <x^2> - <x>^2
where <x> and <x^2> denote averages of the n numbers x1, x2, ..., xn, not an average using p(x), as S^2 depends on the actual numbers in the sample. So, we don't know what S^2 will be, but we can compute the probability distribution, expectation value etc. of S^2 in terms of the function p(x). The expectation value is given by:
Integral dx1 dx2...dxn p(x1)p(x2)...p(xn)[<x^2> - <x>^2].
Insert in here:
<x^2> = 1/n (x1^2 + x2^2 + ... + xn^2)
<x>^2 = 1/n^2 (x1 + x2 + ... + xn)^2 = 1/n^2 (x1^2 + x2^2 + ... + xn^2) + 1/n^2 (sum over xi xj for i not equal to j)
Now let's compute the integrations. Let's use the notation <<f(x)>> for an average relative to p(x). So <<x>> means Integral dx x p(x). Then you see that you only get terms like <<x>>^2 and <<x^2>>, and you just need to count how many of each and what the prefactors are. You should find:
<<S^2>> = (1-1/n) [<<x^2>> - <<x>>^2]
Now, the term in the square brackets is sigma^2 if the sample size is infinite (because if you sample over an infinite sample size you are computing the exact average, which is also given by the integral over the probability distribution). The factor (1-1/n) explains why, when estimating the standard deviation from a finite sample, you use a factor 1/(n-1) instead of 1/n under the square root. S^2 will, on average, be the true variance times (1-1/n), so you divide by this factor, i.e. multiply by n/(n-1).
• statistics - Damon, Thursday, February 21, 2008 at 7:55pm
Thank you!
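The (1 - 1/n) factor derived above is easy to confirm numerically. The following sketch (not part of the original thread) draws many samples of size n from a standard normal population, for which sigma^2 = 1, and averages the naive sample variance <x^2> - <x>^2 with NumPy:

import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000
samples = rng.standard_normal((trials, n))

# naive ("biased") sample variance <x^2> - <x>^2, computed per sample
s2 = samples.var(axis=1, ddof=0)

print(s2.mean())   # close to 1 - 1/n = 0.8
print(1 - 1/n)     # 0.8

Dividing by n-1 instead of n (ddof=1 in NumPy) removes the bias, which is exactly the n/(n-1) correction mentioned at the end of the derivation.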
{"url":"http://www.jiskha.com/display.cgi?id=1203632711","timestamp":"2014-04-18T09:36:57Z","content_type":null,"content_length":"12309","record_id":"<urn:uuid:0659da20-7083-46f1-ac8c-b39faf456d71>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
Two loudspeakers separated by 3.0 meters emit out-of-phase sound waves. Both speakers are playing a 686 Hz tone. If you are standing 4.0 meters directly in front of one of the loudspeakers, do you hear maximum sound intensity, minimum sound intensity, or something in between? Justify.
Let us suppose that loudspeaker A emits waves with initial phase zero, and loudspeaker B emits waves with initial phase `phi`, so the two speakers emit out-of-phase sounds. Call the listener's position point C; it lies a distance D = 4.0 m directly in front of speaker A, and the speakers are d = 3.0 m apart.
`Y_A(x_1,t) = A*sin(2*pi*(x_1/lambda - t/T))` (1)
`Y_B(x_2,t) = A*sin(2*pi*(x_2/lambda - t/T) + phi)` (2)
Because both loudspeakers emit the same tone `F = 686 Hz`, the period of both waves is `T = 1/F = 0.00146 s = 1.46 ms` and the wavelength is `lambda = v*T = 343*0.00146 = 0.5 m` (the speed of sound in air at 20 degrees Celsius is `v = 343 m/s`).
The path difference between the two waves coming from A and B is
`P = (x_1 - x_2) = sqrt(D^2 + d^2) - D = sqrt(16 + 9) - 4 = 5 - 4 = 1 m`
Therefore the difference in paths is an integer multiple of lambda: `P = 2*lambda`.
Now, looking at expressions (1) and (2), at the same moment in time there are three different cases:
a) if `phi = 2*k*pi` with `k` integer (including zero), then a person at point C will hear a maximum sound intensity;
b) if `phi = (2*k+1)*pi` with `k` integer (including zero), then a person at point C will hear a minimum sound intensity;
c) for any other phase difference between the two loudspeakers, the person at point C will hear an intermediate sound intensity between the maximum and the minimum.
Answer: Since the problem states that the speakers are out of phase (that is, `phi = pi`), case b) applies and the listener hears a minimum sound intensity; for any other phase offset that is not a multiple of `pi`, the intensity would be somewhere in between.
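A few lines of Python (not part of the original answer) reproduce the numbers above and the intensity factor for two equal-amplitude waves with a given total phase difference:

import numpy as np

v, f = 343.0, 686.0                 # speed of sound (m/s), tone frequency (Hz)
lam = v / f                         # wavelength = 0.5 m
D, d = 4.0, 3.0                     # distance to the near speaker, speaker separation (m)
path_diff = np.hypot(D, d) - D      # sqrt(4**2 + 3**2) - 4 = 1 m = 2*lam

phi = np.pi                         # speakers driven out of phase
total_phase = 2*np.pi*path_diff/lam + phi
relative_intensity = np.cos(total_phase/2)**2   # 1 = maximum, 0 = minimum
print(lam, path_diff, relative_intensity)       # 0.5  1.0  ~0, i.e. a minimum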
{"url":"http://www.enotes.com/homework-help/two-loudspeakers-separated-by-3-0-meters-emit-out-421977","timestamp":"2014-04-20T02:30:57Z","content_type":null,"content_length":"28335","record_id":"<urn:uuid:02bf21c5-32d8-424c-b33d-4ac6f32e88b3>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
Polynomials and L^p(R) up vote 3 down vote favorite As someone who mostly does symbolic computation, I've always been puzzled by the fascination mathematicians seem to have with L^p(R) (for p<∞)? To be more precise, there are no non-trivial polynomials in that space and, to me, polynomials are not only the simplest functions, they are the building blocks of most everything which can be (easily) manipulated algorithmically. And restricting to a compact support is really a non-answer, since one of the great things about polynomials is that they are global, analytic functions. To ask a more precise question: are there some spaces of (total, real-valued) functions which are both nice from a functional analysis point of view, and contain all the polynomials? ca.analysis-and-odes fa.functional-analysis polynomials 1 By any measure (ha!), characteristic functions of intervals have to be considered simpler than polynomials. – Qiaochu Yuan Feb 14 '10 at 20:49 add comment 3 Answers active oldest votes You are referring to $L^p(\mathbb{R}, \mathcal{B}, \mu)$ in the case that $\mathbb{R}$ is endowed with Lebesgue measure $\mu$. Consider instead the measure $\nu$ given by $d\nu = f d\ up vote 12 mu$, where $f$ is in the Schwartz space and $f$ does not take the value zero. Because the product of a polynomial with $f$ is also in the Schwartz space, and the Schwartz space is down vote contained in $L^p(\mathbb{R}, \mathcal{B}, \mu)$, it follows that polynomials are in $L^p(\mathbb{R}, \mathcal{B}, \nu)$. Wonderful answer. I guess the only drawback is that such measures are no longer translation-invariant? – Jacques Carette Feb 14 '10 at 16:45 They are not, but you can do lots of nice things with (e.g.) Gaussian measures: en.wikipedia.org/wiki/Gaussian_measure – Steve Huntsman Feb 14 '10 at 16:47 add comment If one wants to do algebra, or symbolic computation, then polynomials are indeed the simplest type of function. But if one wants to do analysis, or numerical computation, then actually the best functions are the bump functions - they are infinitely smooth, but also completely localised. (Gaussians are perhaps the best compromise, being extremely well behaved in both algebraic and analytic senses.) up vote 11 That said, I'm not sure what your question is really after. If you want a function space that contains the polynomials, you could just take ${\bf R}[x]$. Of course, this space does not down vote come equipped with a special norm, but polynomials, being algebraic objects rather than analytic ones, are not naturally equipped with any canonical notion of size. Due to their growth at infinity, any such notion of size would have to be mostly localised, as is the case with the weighted spaces and distribution spaces given in other answers. The question is really about trying to find links between the algebraic world where we have lots of exact algorithms and functional analysis. As you well know, lot of successful mathematics is about finding ways to transport theorems from one setting to another. But, before I encountered tempered distributions and the Schwartz space, I could not find links between the two that were 'natural'. And yes, I want a norm (or a metric), since that's a fundamental tool of functional analysis. – Jacques Carette Feb 17 '10 at 3:38 add comment Distributions (or tempered distributions). up vote 3 down Distributions are total functions? I always thought of them as equivalence classes, thus being essentially impossible to evaluate pointwise. 
Thanks for the pointer to tempered distributions, I had not encountered them before. – Jacques Carette Feb 14 '10 at 16:35 Hm. Well, I suppose you could look at distributions that are locally in some Sobolev space or something like that. – Akhil Mathew Feb 14 '10 at 16:55 Heh, tempered distributions are somewhat "dual" to the schwartz space mentioned in the answer above, Jacques. – Harry Gindi Feb 14 '10 at 22:09 add comment Not the answer you're looking for? Browse other questions tagged ca.analysis-and-odes fa.functional-analysis polynomials or ask your own question.
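The weighted-measure answer above is easy to see numerically. The snippet below is my own illustration, not from the thread: it checks that x**4 fails to be square-integrable for Lebesgue measure, but becomes square-integrable once the measure is weighted by a Gaussian density, which is a nowhere-vanishing Schwartz function.

import numpy as np
from scipy.integrate import quad

p = lambda x: x**4   # a polynomial that is clearly not in L^2(R, Lebesgue)

# Truncated Lebesgue integral of |p|^2: grows without bound as the cutoff grows
lebesgue_trunc, _ = quad(lambda x: p(x)**2, -50, 50)

# Same integrand against a Gaussian weight: finite (it equals 105*sqrt(pi)/16)
weighted, _ = quad(lambda x: p(x)**2 * np.exp(-x**2), -np.inf, np.inf)

print(lebesgue_trunc)   # roughly 4.3e14 already at cutoff 50
print(weighted)         # roughly 11.63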
{"url":"http://mathoverflow.net/questions/15265/polynomials-and-lpr/15267","timestamp":"2014-04-19T00:02:13Z","content_type":null,"content_length":"66034","record_id":"<urn:uuid:9ca9a9be-7acd-4e90-ac4d-ba355c7883f4>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
Oakland Gardens ACT Tutors ...I provide a comprehensive review of all the concepts tested on the GMAT, and focus on the specific subjects you remember least well, so that by the time Test Day rolls around you are familiar with what you need to know to do your best. In many ways the ISEE and SSAT are miniaturized versions of ... 18 Subjects: including ACT Math, geometry, GRE, algebra 1 ...I love mathematics and I love to help people learn mathematics. Alg. II and Trigonometry is a fairly advanced course that requires expertise and experience. 9 Subjects: including ACT Math, geometry, algebra 2, algebra 1 ...We review important concepts such as: the basics of functions including: graphing, finding their inverses and compositions of functions; graphing transformations of basic functions; complex numbers; the unit circle; maximizing word problems and other word problems. Allow me to use the experience... 21 Subjects: including ACT Math, calculus, economics, precalculus ...Along with proper review at timed intervals using various resources, my students and clients have been able to achieve long-term memory retention and improved problem solving skills. I will ensure you will receive a professional service you deserve just as I intend to serve my future patients wi... 17 Subjects: including ACT Math, chemistry, algebra 1, MCAT ...I have received A's and help others receive A's. I was named best instructor in 2008 for the Pathfinder club at the summer camp. I have help others to gain confidence in speaking in churches and in school which is a step forward in the right direction since it ranks first in critical areas as t... 27 Subjects: including ACT Math, Spanish, reading, chemistry
{"url":"http://www.algebrahelp.com/Oakland_Gardens_act_tutors.jsp","timestamp":"2014-04-20T03:22:35Z","content_type":null,"content_length":"24880","record_id":"<urn:uuid:bf0e10f6-8f9b-4c9b-9768-0a2f2e1d08f4>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
Skillman Prealgebra Tutor Find a Skillman Prealgebra Tutor ...I will try one way of explaining and if it does not work or you do not understand I will try something else. I try to make the learning and instruction fun so that I does not seem as bad as you think it is. I enjoy seeing the "light bulb" moment when students get what I am trying to teach them. 6 Subjects: including prealgebra, geometry, algebra 1, algebra 2 ...I tutored both of my children, and my daughter has just moved on to Algebra 1, so I am well acquainted with this material, also of use in everyday life. Precalculus is a fundamental prerequisite of use in all sciences and engineering, economics and finance, statistics. As a professor and physicist I continue to make daily contact with the subject. 12 Subjects: including prealgebra, English, physics, calculus ...My name is Will. I have a BSBA (Bachelor of Science in Business Administration) degree from Rider University. I have tutored elementary, junior high and high school students for nearly five 21 Subjects: including prealgebra, reading, accounting, algebra 1 ...I also have experience teaching to the SAT and ACT, and my scores on these exams reflect my understanding and ability to convey not only the test subjects, but test strategies as well. I scored 800 on the Math section of the SAT, and a 34 for my overall ACT score. I have also recently passed th... 22 Subjects: including prealgebra, reading, English, chemistry ...I have included that I can teach music theory and violin, as well; after teaching myself much of the material, I earned a 4 on the AP Music Theory Exam, and have also been playing violin for 14 years. While I am not part of a union, my professional credits include playing in the orchestra pit fo... 12 Subjects: including prealgebra, algebra 1, algebra 2, trigonometry
{"url":"http://www.purplemath.com/Skillman_prealgebra_tutors.php","timestamp":"2014-04-21T14:58:57Z","content_type":null,"content_length":"24071","record_id":"<urn:uuid:ecf6d73f-3db2-49aa-8591-4f649e7c2ae2>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
Laplace Transform help
Hey, not sure if this should go in the physics forum, but I suppose it really only applies to this: I'm only in AP Calc BC doing a year-end project, but I want to be able to do this by hand (no tables). How would I go about doing that? It's a circuit diagram with a lot of conclusions drawn from Kirchhoff's Voltage Law, voltage through each component, etc.
You want the entire thing? It's fairly long-winded and took up about one and a half pages written. Perhaps I can see if I can get hold of a scanner.
I ask because I find it improbable that the Laplace transform should be needed.
It sort of is. I have to relate this to Butterworth tables in order to find out my constants for L and C, inductance and capacitance respectively. At any rate, I did figure it out. Applying Laplace transforms to each individual function of t did the trick. Thanks for any effort put forth.
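The thread never shows the actual circuit, so the snippet below is only a generic illustration of the approach the poster describes (transform each time-domain term, solve the resulting algebra, invert), using SymPy and a made-up first-order RL example rather than the poster's Butterworth filter:

import sympy as sp

t, s = sp.symbols('t s', positive=True)
R, L, Is = sp.symbols('R L I_s', positive=True)

# Example ODE from Kirchhoff's voltage law for an RL branch with a unit-step source:
#   L*di/dt + R*i = 1,  i(0) = 0
# Transforming term by term turns it into algebra:  L*s*I(s) + R*I(s) = 1/s
i_of_s = sp.solve(sp.Eq(L*s*Is + R*Is, 1/s), Is)[0]
i_of_t = sp.inverse_laplace_transform(i_of_s, s, t)
print(sp.simplify(i_of_t))   # (1 - exp(-R*t/L))/R, times a unit-step (Heaviside) factor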
{"url":"http://mathhelpforum.com/calculus/143462-laplace-transform-help.html","timestamp":"2014-04-18T23:45:10Z","content_type":null,"content_length":"42051","record_id":"<urn:uuid:f72c1c1a-3147-4b24-b297-9cfd3dec03d6>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
Pi with Machin's formula (Java)
From LiteratePrograms
Other implementations: Erlang | Haskell | Java | Lisp | Python
Machin's formula
A simple way to compute the mathematical constant π ≈ 3.14159 with any desired precision is due to John Machin. In 1706, he found the formula
$\frac{\pi}{4} = 4 \, \arccot \,5 - \arccot \,239$
which he used along with the Taylor series expansion of the arc cotangent function,
$\arccot x = \frac{1}{x} - \frac{1}{3 x^3} + \frac{1}{5 x^5} - \frac{1}{7 x^7} + \dots$
to calculate 100 decimals by hand. The formula is well suited for computer implementation, both to compute π with little coding effort (adding up the series term by term) and using more advanced strategies (such as binary splitting) for better speed. In order to obtain n digits, we will use fixed-point arithmetic to compute π × 10^n as a Java BigDecimal.
Implementation
import java.math.BigDecimal;
import java.math.RoundingMode;

public final class Pi {
    private static final BigDecimal TWO = new BigDecimal("2");
    private static final BigDecimal FOUR = new BigDecimal("4");
    private static final BigDecimal FIVE = new BigDecimal("5");
    private static final BigDecimal TWO_THIRTY_NINE = new BigDecimal("239");

    private Pi() {}

    public static BigDecimal pi(int numDigits) {
        int calcDigits = numDigits + 10;
        return FOUR.multiply((FOUR.multiply(arccot(FIVE, calcDigits)))
                .subtract(arccot(TWO_THIRTY_NINE, calcDigits)))
                .setScale(numDigits, RoundingMode.DOWN);
    }

    private static BigDecimal arccot(BigDecimal x, int numDigits) {
        BigDecimal unity = BigDecimal.ONE.setScale(numDigits, RoundingMode.DOWN);
        BigDecimal sum = unity.divide(x, RoundingMode.DOWN);
        BigDecimal xpower = new BigDecimal(sum.toString());
        BigDecimal term = null;
        boolean add = false;
        for (BigDecimal n = new BigDecimal("3");
                term == null || !term.equals(BigDecimal.ZERO);
                n = n.add(TWO)) {
            xpower = xpower.divide(x.pow(2), RoundingMode.DOWN);
            term = xpower.divide(n, RoundingMode.DOWN);
            sum = add ? sum.add(term) : sum.subtract(term);
            add = !add;
        }
        return sum;
    }
}
Applying Machin's formula
The method pi above uses Machin's formula to compute π to the necessary level of precision, and it in turn invokes the private arccot method for the arc cotangent computation. To avoid rounding errors in the result, we set the scale of the BigDecimal to 10 more than the requested number of digits, and later round the return value.
High-precision arccot computation
To calculate arccot of an argument value, we start by setting the scale of the number 1 to a value which will provide sufficient precision and avoid rounding errors (the number of digits requested + 10) and use this as our unity value. This unity value is divided by x to obtain the first term. Note that all BigDecimal division uses the DOWN rounding mode to emulate integer division. We then repeatedly divide by x^2 and a counter value that runs over 3, 5, 7, ..., to obtain each next term. The summation is stopped at the first zero term, which in this fixed-point representation corresponds to a real value less than 10^-n.
Example usage:
The program can be used to compute tens of thousands of digits in just a few seconds on a modern computer. (More sophisticated techniques are necessary to calculate millions or more digits in reasonable time, although in principle this program will also work.)
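The page does not spell out the call itself; with the public API above it is simply Pi.pi(n). As a quick cross-check of the algorithm (my own sketch, not part of the LiteratePrograms article), the same fixed-point Machin computation fits in a few lines of Python using plain integers, and its output can be compared digit-for-digit with the Java class:

def arccot(x, unity):
    # integer fixed-point arccot(1/x); unity plays the role of 10**(digits + guard)
    total = term = unity // x
    n, sign, xsq = 3, -1, x*x
    while term:
        term //= xsq
        total += sign * (term // n)
        n += 2
        sign = -sign
    return total

def machin_pi(digits, guard=10):
    unity = 10**(digits + guard)
    pi_fixed = 4 * (4*arccot(5, unity) - arccot(239, unity))
    return pi_fixed // 10**guard      # drop the guard digits

print(machin_pi(30))   # 3141592653589793238462643383279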
{"url":"http://en.literateprograms.org/Pi_with_Machin's_formula_(Java)","timestamp":"2014-04-20T06:27:13Z","content_type":null,"content_length":"31733","record_id":"<urn:uuid:c9c24b71-151f-4120-a982-1da642dc78d1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about Math on Measure of Doubt
People talk about how Bayes’ Rule is so central to rationality, and I agree. But given that I don’t go around plugging numbers into the equation in my daily life, how does Bayes actually affect my thinking?
A short answer, in my new video below:
(This is basically what the title of this blog was meant to convey — quantifying your uncertainty.)
{"url":"http://measureofdoubt.com/category/math/","timestamp":"2014-04-17T12:32:59Z","content_type":null,"content_length":"108652","record_id":"<urn:uuid:773c6c2c-0ea2-4b30-a66d-bf5eb6501518>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
Why do we integrate a function to find the area under it? Why, when finding the area by definite integral, we have to find the indefinite integral first? You don't have to. If you have, in general, infinite time at your disposal. As I understand, to find the area of under the curve, all we need is the equation of the curve. On the other hand, the indefinite integral helps us to find the original function from its derivative. So what does this have to do with finding the area? That is the truly beautiful insight in the fundamental theorem of calculus: To sum up the area beneath some curve, essentially an INFINITE process, can trivially be done by finding an anti-derivative to the defining curve.
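A quick numerical illustration of that last point (not part of the original thread): a fine Riemann sum, which is the "infinite" summation process made finite, lands on essentially the same number as evaluating an antiderivative at the two endpoints.

import numpy as np

f = lambda x: x**2        # the curve
F = lambda x: x**3 / 3    # an antiderivative of f

a, b, n = 0.0, 2.0, 1_000_000
x = np.linspace(a, b, n, endpoint=False)     # left endpoints of n thin strips
riemann_sum = np.sum(f(x)) * (b - a) / n     # total area of the strips

print(riemann_sum)   # about 2.666665, just under the exact value
print(F(b) - F(a))   # 8/3 = 2.666666...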
{"url":"http://www.physicsforums.com/showthread.php?t=372159","timestamp":"2014-04-17T12:34:40Z","content_type":null,"content_length":"50133","record_id":"<urn:uuid:bbd4b69b-bfca-40e2-aa55-76e56f45424b>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
Rockrimmon, CO Algebra Tutor Find a Rockrimmon, CO Algebra Tutor ...My previous experience with tutoring started when I was asked to tutor a few special needs children. Then during the later few years of my college career, I was employed by the college as a professor's aide. During this time I graded entry and mid-level engineering math and engineering courses. 15 Subjects: including algebra 2, algebra 1, calculus, trumpet ...In return, I will be completely committed and do everything I possibly can to help you achieve your goals.The study of physics encompasses the fundamental laws that govern our universe! What's not interesting about that? It's all in the name. 16 Subjects: including algebra 1, algebra 2, physics, calculus I have worked with computers (doing software development and testing) for about 10 years. When I was in my 30's I attended community college and Cal Poly, to earn a BS in Computer Science. During community college I earned A's in 3 semesters of calculus. 9 Subjects: including algebra 1, algebra 2, calculus, trigonometry ...Yes, this is the dreaded geometrical proof. It's really easy to get overwhelmed by a proof because it feels you're trying to drive somewhere using a blank map or a navigation unit that can't access the satellites. Being successful with geometrical proofs or any other part of geometry requires a balance between stepping back to see the big picture and looking closer at the details. 14 Subjects: including algebra 1, physics, Microsoft Excel, general computer ...I also have been an amatuer writer for many years. I recently published a juvenile fantasy book based in Colorado. I have worked with students with special needs for over five years. 57 Subjects: including algebra 2, reading, algebra 1, writing Related Rockrimmon, CO Tutors Rockrimmon, CO Accounting Tutors Rockrimmon, CO ACT Tutors Rockrimmon, CO Algebra Tutors Rockrimmon, CO Algebra 2 Tutors Rockrimmon, CO Calculus Tutors Rockrimmon, CO Geometry Tutors Rockrimmon, CO Math Tutors Rockrimmon, CO Prealgebra Tutors Rockrimmon, CO Precalculus Tutors Rockrimmon, CO SAT Tutors Rockrimmon, CO SAT Math Tutors Rockrimmon, CO Science Tutors Rockrimmon, CO Statistics Tutors Rockrimmon, CO Trigonometry Tutors Nearby Cities With algebra Tutor Buckskin Joe, CO algebra Tutors Crystal Hills, CO algebra Tutors Deckers, CO algebra Tutors Edison, CO algebra Tutors Elkton, CO algebra Tutors Ellicott, CO algebra Tutors Fair View, CO algebra Tutors Goldfield, CO algebra Tutors Ilse, CO algebra Tutors Parkdale, CO algebra Tutors Penitentiary, CO algebra Tutors Stratmoor Hills, CO algebra Tutors Tarryall, CO algebra Tutors Truckton, CO algebra Tutors Twin Rock, CO algebra Tutors
{"url":"http://www.purplemath.com/Rockrimmon_CO_Algebra_tutors.php","timestamp":"2014-04-16T18:58:12Z","content_type":null,"content_length":"24167","record_id":"<urn:uuid:6b3aaba3-2c3a-4a05-968f-09fd1e8e8047>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
Encyclopaedia Index Free-surface flows involve the interaction of two or more distinctly different fluids, separated by sharply defined interfaces. The position of the interface is not known a priori. The mathematical model will need to: • Locate the unknown inter-fluid boundaries, at which discontinuities exist in one or more flow quantities. • Satisfy the field equations governing conservation of mass, momentum, energy, etc. • Be consistent with the boundary conditions. Two special techniques for treating free surface flows are available in PHOENICS. The Scalar-Equation Method deduces the interface position from the solution of the conservation equation for a scalar "fluid-marker" variable. This works well, but requires special techniques to combat numerical diffusion. The Height-of-Liquid Method, when it is applicable, i.e. when the interface is not convoluted, needs no anti dispersion device, and is simple, effective and economical. The two methods are similar, in that a "fluid-marker" is used to determine the physical properties. They differ in the method used to determine the distribution of the marker variable. This lecture concentrates on the Scalar-Equation Method. Other ways in which a free surface can be represented in PHOENICS include: • IPSA with donor-acceptor differencing. • Surface tracking with moving particles. • Moving porosities (with a blocked region adapting to the shape of a liquid surface). • Moving body-fitted coordinates. • Shallow wave theory, using density to represent depth. These methods suffer from various drawbacks, such as smearing of the interface, computational expense, or providing solutions on only one side of the interface. The Scalar Equation (and Height of Liquid) methods are both essentially single phase treatments. The two fluids separated by the interface are treated as both belonging to a single phase, i.e. there is only one value of each velocity component, temperature, concentration, etc., for each computational cell. The relevant governing equations need to be solved, accounting for the discontinuities in the physical properties at fluid interfaces. The location of the interface is deduced from some marker variable. We will now look at the set of equations that need to be solved. The single phase continuity equation can be written as: dr/dt + d(ru[i] )/ dx[i] = 0 This can also be written as: D(lnr)/Dt + du[i]/dx[i] = 0 Assuming incompressible flow: du[i]/dx[i] = 0 This implies that knowledge of the density is immaterial for the solution of the continuity equation if the flow is incompressible. The continuity condition in terms of volumetric conservation is valid even when the density changes from point to point at the interface. This is embodied in GALA (Gas And Liquid Algorithm), available in PHOENICS. The conservation equations for other variables (i.e. momentum, energy, etc.) are solved in the conventional manner. However, density and viscosity fields which are dependent on the local fluid type are required for the solution of these equations. The remainder of this lecture concentrates on the Scalar Equation Method, which is a means of determining the property fields. This method deduces the fluid interfaces from the solution of a conservation equation for a scalar "fluid-marker" variable. The local physical properties such as density and viscosity are set to those of the appropriate fluid according to the value of the scalar marker variable. 
The transient convection of a scalar variable is described by the following equation: • dF/dt + div(Fvel) = 0 (1) transient convection The scalar variable F is used as a marker according to which the fluid properties are set: F = 0.0 - fluid 1 F = 1.0 - fluid 2 The governing hydrodynamic equations are solved in tandem with the transport equation for the scalar-marker variable. The marker variable is used to update the fluid properties. It is well known that the numerical discretisation of Equation (1) creates unphysical smearing of the discontinuity at the interface, known as numerical diffusion. Several remedies exist to reduce this undesirable effect. Special care in the specification of the properties: At the interface, the properties are assumed to vary linearly from those of one fluid to those of the other. The gradient of the property interface can be varied by limiting the range of PHI within which properties can vary. Van Leer discretisation of the scalar-convection terms. The "fully implicit upwind" (PHOENICS default) scheme considers the transport of a variable, F, across a cell face (e.g. the east face) according to the following formula: F[e] = F[P] for u[e] > 0 and F[e] = F[E] for u[e] < 0 where F[P] and F[E] are the values of F at the grid nodes at the end of the time step. Figure 2: Higher-Order Differencing Template The van Leer approach is an explicit finite-difference scheme which modifies the first order upwind formulation using the theory of characteristics: F[e]= F[P] + (dF/dx)[P] (dx - u[e]dt )/2 for u[e]>0 F[e]= F[E] - (dF/dx)[E] (dx + u[e]dt )/2 for u[e]<0 where F[P] and F[E] are the values of F at the end of the time step. The dF/dx is based on values at the start of the time step, which implies that the scheme is explicit in nature. This gradient is specified such that the scheme reduces to the purely upwind form in the presence of local extrema. This Total Variation Diminishing (TVD) approach offers the advantages of a higher-order scheme while avoiding under- or overshoots. Due to the explicit formulation, the Courant criterion places a maximum limit on the time increment for the stability of the solution: dt = min ( dx/u, dy/v, dz/w ) where this minimum is with respect to every cell in the grid, not just at the interface. The SEM is applicable to: • Unsteady flows; • Two- or three-dimensional flows; • Cartesian, polar or curvilinear coordinate; • Multiple, convoluted and overturning interfaces. Examples of applications: The method cannot be used to produce steady-state solutions directly. However, steady-state conditions can be achieved asymptotically from a suitable transient simulation. Surface tension effects at the interface are not included in the model. This is however a feasible extension to the model. Numerical diffusion, despite treatments for its limitation, still imposes a restriction on minimum grid requirements. Due to the explicit nature of the van Leer scheme, restrictions on the size of the time step increment can make long transient processes computationally expensive. The Scalar Equation method has been implemented in the subroutine GXSURF which is available with PHOENICS. It can be activated from PHOENICS-VR, by selecting Scalar Equation Method from the Free Surface Models menu of the Models panel of the Main menu. 
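As a rough illustration of why the marker equation needs a scheme like van Leer's rather than plain upwinding, the sketch below (my own toy example, not PHOENICS or GXSURF) advects a 0/1 scalar marker in one dimension with both schemes. It uses a standard flux-limiter form of the van Leer scheme, an explicit update, and a Courant number below 1, as the criterion above requires; the exact cell counts will vary, but upwinding spreads the interface over many more cells.

import numpy as np

nx = 200
dx = 1.0 / nx
u = 1.0                      # constant advection velocity
c = 0.5                      # Courant number u*dt/dx (must stay below 1 for stability)
steps = 200                  # moves the slug by steps*c = 100 cells

x = (np.arange(nx) + 0.5) * dx
marker0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # slug of "heavy" fluid

def step_upwind(f):
    # first-order upwind: simple but numerically diffusive
    return f - c * (f - np.roll(f, 1))

def step_vanleer(f):
    # second-order upwind flux, limited with the van Leer limiter (TVD)
    df = np.roll(f, -1) - f          # F[i+1] - F[i]
    df_up = f - np.roll(f, 1)        # F[i]   - F[i-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        r = np.where(np.abs(df) > 1e-12, df_up / df, 0.0)
    phi = (r + np.abs(r)) / (1.0 + np.abs(r))
    flux = f + 0.5 * phi * (1.0 - c) * df    # face value at i+1/2 for u > 0
    return f - c * (flux - np.roll(flux, 1))

f_up = marker0.copy()
f_vl = marker0.copy()
for _ in range(steps):
    f_up = step_upwind(f_up)
    f_vl = step_vanleer(f_vl)

# how many cells ended up partly "mixed" (marker far from both 0 and 1)?
print(np.sum((f_up > 0.05) & (f_up < 0.95)))   # many: the interface is smeared
print(np.sum((f_vl > 0.05) & (f_vl < 0.95)))   # few: the interface stays sharp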
Should the user wish to activate SEM 'by hand' from the Q1, the following commands are required: In Group 7 STORE ( DEN1 , PRPS ) SOLVE ( SURN , VFOL ) SURN stores the scalar-marker variable, the value of which is unity in cells completely filled with the heavier fluid. VFOL stores the value of SURN at the old time step and provides information on the inflow volumetric fluxes. Note that STORE(VFOL) will suffice if there is no inflow of fluid (i.e. for a sealed container). If inflows occur, VFOL should be SOLVED as it is required for the correct setting of the inflow boundary conditions. In Group 8 GALA = T Activates a solution procedure based on volume continuity. TERMS(SURN,N,N,N,N,P,P); TERMS(VFOL,N,N,N,N,P,P) TERMS ensures that the solution of SURN and VFOL is performed completely in GXSURF. This is required because the equation has no density in it. RLOLIM is the value of SURN below which the cell will be regarded as being full of the lighter fluid. RUPLIM is the value of SURN above which the cell will be regarded as being full of the heavier These two parameters are used to sharpen the fluid interface. Typical values might be RLOLIM = 0.4 and RUPLIM = 0.6 In Group 9 The flow properties are calculated in GXPRPS, based on the local property marker, PRPS. PRPS is updated from the VFOL field at the start of every time step. No specific settings are needed in Group 9 - the PRPS values for the heavy and light fluids are set in Group 19. In Group 11 Conditions are required for SURN, VFOL, DEN1 and PRPS to determine the initial position of the interface. FIINIT, PATCH and INIT commands can be used to set the initial distributions of the above variables. Typical settings to place a volume of water in surrounding air would be: FIINIT(SURN)=0; FIINIT(VFOL)=0 FIINIT(PRPS)=0; FIINIT(DEN1) = 1.189 PATCH (WATER, INIVAL, IXF,IXL, IYF,IYL,IZF,IZL, 1,1) INIT (WATER, SURN, 0, 1) INIT (WATER, VFOL, 0, 1) INIT (WATER, PRPS, 0, 67) INIT (WATER, DEN1, 0, 1000.5) In the VR-Editor, the material for a Blockage object can be set to Light fluid or Heavy fluid. In Group 13 PATCH ( NAME , CELL ,1 ,NX ,1 ,NY ,1 ,NZ ,1 ,LSTEP) COVAL ( NAME , SURN , GRND , GRND ) This source reintroduces the transient term into the equation for SURN which was removed by the TERMS command. Inflow conditions INLET ( NAME , AREA , IF, IL, JF, JL, KF, KL, TF, TL) VALUE ( NAME , P1 , VEL * RHOM ) VALUE ( NAME , U1 , UIN) VALUE ( NAME , V1 , VIN ) VALUE ( NAME , W1 , WIN) VALUE ( NAME , VFOL , 1. / RHOM ) COVAL ( NAME , SURN , FIXFLU , VEL * SURNin ) RHOM is the incoming fluid mixture density, and SURNin is the scalar value carried by the fluid. The setting of an inflow value for VFOL is a mechanism which will satisfy the continuity equation, as in GALA the default source for inflows in the overall continuity equation is: Sv=mass inflow/(cell density) If the cell density is different from that of the incoming flow, the volumetric source will not be correct. However GALA will recognise the inlet value of VFOL and set the source in the overall continuity equation as: S[v]= mass flow × incoming VFOL = (RHOM*Vel) ×(1./RHOM) = Velocity Outflow Conditions PATCH (NAME, AREA , IF, IL, JF, JL, KF, KL, TF, TL) COVAL (NAME, P1, GRND1, Pext) COVAL (NAME, SURN, FIXFLU, GRND1) Fixed pressure conditions are generally applied. If the interface crosses the boundary, the above settings ensure that the continuity and scalar marker equations are treated correctly. In Group 16 Ensure solver does only one iteration for SURN. 
In Group 18 VARMIN(SURN)=0; VARMAX(SURN)=1 Ensure SURN does not go outside physical limits. In Group 19 SURF = T This activates the special ground subroutine. IPRPSA is the index selecting the heavier fluid in the PROPS file. IPRPSB is the index selecting the lighter fluid in the PROPS file. For example, if the heavier fluid is water, and the lighter fluid is air at constant density, IPRPSA should be set to 67, and IPRPSB to 0.0. An account has been given of the Scalar Equation Method, and its implementation in PHOENICS. The method is provided in the subroutine GXSURF. The method has been applied successfully to a variety of free-surface flows, especially those involving multiple and overturning interfaces such as mould filling. Limitations on the application of the method have been indicated. The PHOENICS Encyclopedia contains further details under the headings: Free-Surface-Flow; Scalar-Equation Method Library examples can be found in the multi-Phase flow library, which can be accessed from the library option of the top menu, or from command mode by SEELIB(P). Active demonstrations can be found in Active Demos, under: Extra multi phase features, Free surface flow About the Scalar Equation method: Jun L, Spalding DB 1988 "Numerical simulation of flows with moving interfaces." PHOENICS Journal of Computational Fluid Dynamics Vol 10, No 5/6, pp 625-637 1988. Published by Pergamon Press Hamill IS, Jun L, Waterson N 1991 "A model for the simulation of three-dimensional mould-filling processes with complex geometries." Presented at the International Conference on Mathematical Modelling of Materials Processing, Bristol, 23-25 September 1991 Spalding DB, Jun L 1988 "Numerical simulation of flows with moving interfaces." Published in PCH PhysioChemical Hydrodynamics Vol 10, No. 5/6, pp 625-637 1988
{"url":"http://www.cham.co.uk/phoenics/d_polis/d_lecs/semlec.htm","timestamp":"2014-04-21T12:24:50Z","content_type":null,"content_length":"18641","record_id":"<urn:uuid:5e275588-b20c-4b54-90a9-79dfa055bdb1>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00008-ip-10-147-4-33.ec2.internal.warc.gz"}
• Lehrstuhl für Volkswirtschaftslehre, insbesondere Wirtschaftspolitik (6) (remove) 6 search hits Agent-based models for economic policy design : two illustrative examples (2012) Frank H. Westerhoff Reiner Franke Structural stochastic volatility in asset pricing dynamics: Estimation and model contest (2012) Reiner Franke Frank Westerhoff Why a Simple Herding Model May Generate the Stylized Facts of Daily Returns: Explanation and Estimation (2012) Reiner Franke Frank Westerhoff The paper proposes an elementary agent-based asset pricing model that, invoking the two trader types of fundamentalists and chartists, comprises four features: (i) price determination by excess demand; (ii) a herding mechanism that gives rise to a macroscopic adjustment equation for the market fractions of the two groups; (iii) a rush towards fundamentalism when the price misalignment becomes too large; and (iv) a stronger noise component in the demand per chartist trader than in the demand per fundamentalist trader, which implies a structural stochastic volatility in the returns. Combining analytical and numerical methods, the interaction between these elements is studied in the phase plane of the price and a majority index. In addition, the model is estimated by the method of simulated moments, where the choice of the moments reflects the basic stylized facts of the daily returns of a stock market index. A (parametric) bootstrap procedure serves to set up an econometric test to evaluate the model’s goodness-of-fit, which proves to be highly satisfactory. The bootstrap also makes sure that the estimated structural parameters are well identified. Converse trading strategies, intrinsic noise and the stylized facts of financial markets (2012) Frank Westerhoff Reiner Franke Structural Stochastic Volatility in Asset Pricing Dynamics: Estimation and Model Contest (2013) Reiner Franke Frank Westerhoff In the framework of small-scale agent-based financial market models, the paper starts out from the concept of structural stochastic volatility, which derives from different noise levels in the demand of fundamentalists and chartists and the time-varying market shares of the two groups. It advances several different specifications of the endogenous switching between the trading strategies and then estimates these models by the method of simulated moments (MSM), where the choice of the moments reflects the basic stylized facts of the daily returns of a stock market index. In addition to the standard version of MSM with a quadratic loss function, we also take into account how often a great number of Monte Carlo simulation runs happen to yield moments that are all contained within their empirical confidence intervals. The model contest along these lines reveals a strong role for a (tamed) herding component. The quantitative performance of the winner model is so good that it may provide a standard for future research. Agent-based models for economic policy design : two illustrative examples (2013) Frank H. Westerhoff Reiner Franke
{"url":"http://opus4.kobv.de/opus4-bamberg/solrsearch/index/search/searchtype/authorsearch/author/%22Reiner+Franke%22/start/0/rows/10/institutefq/Lehrstuhl+f%C3%BCr+Volkswirtschaftslehre%2C+insbesondere+Wirtschaftspolitik","timestamp":"2014-04-18T00:16:32Z","content_type":null,"content_length":"35108","record_id":"<urn:uuid:a625fa6b-8a28-4afb-8853-fad72da3cb23>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about modeling on Hindered Settling Grain settling is one of the most important problems in sedimentology (and therefore sedimentary geology), as neither sediment transport nor deposition can be understood and modeled without knowing what is the settling velocity of a particle of a certain grain size. Very small grains, when submerged in water, have a mass small enough that they reach a terminal velocity before any turbulence develops. This is true for clay- and silt-sized particles settling in water, and for these grain size classes Stokes’ Law can be used to calculate the settling velocity: where R = specific submerged gravity (the density difference between the particle and fluid, normalized by fluid density), g = gravitational acceleration, D is the particle diameter, C1 is a constant with a theoretical value of 18, and the greek letter nu is the kinematic viscosity. For grain sizes coarser than silt, a category that clearly includes a lot of sediment and rock types of great interest to geologists, things get more complicated. The reason for this is the development of a separation wake behind the falling grain; the appearance of this wake results in turbulence and large pressure differences between the front and back of the particle. For large grains – pebbles, cobbles – this effect is so strong that viscous forces become small compared to pressure forces and turbulent drag dominates; the settling velocity can be estimated using the empirical equation The important point is that, for larger grains, the settling velocity increases more slowly, with the square root of the grain size, as opposed to the square of particle diameter, as in Stokes’ Law. Sand grains are small enough that viscous forces still play an important role in their subaqueous settling behavior, but large enough that the departure from Stokes’ Law is significant and wake turbulence cannot be ignored. There are several empirical – and fairly complicated – equations that try to bridge this gap; here I focus on the simplest one, published in 2004 in the Journal of Sedimentary Research (Ferguson and Church, 2004): At small values of D, the left term in the denominator is much larger than the one containing the third power of D, and the equation is equivalent of Stokes’ Law. At large values of D, the second term dominates and the settling velocity converges to the solution of the turbulent drag equation. But the point of this blog post is not to give a summary of the Ferguson and Church paper; what I am interested in is to write some simple code and plot settling velocity against grain size to better understand these relationships through exploring them graphically. So what follows is a series of Python code snippets, directly followed by the plots that you can generate if you run the code yourself. I have done this using the IPyhton notebook, a very nice tool that allows and promotes note taking, coding, and plotting within one document. I am not going to get into details of Python programming and the usage of IPyhton notebook, but you can check them out here. 
First we have to implement the three equations as Python functions:

import numpy as np
import matplotlib.pyplot as plt

rop = 2650.0        # density of particle in kg/m3
rof = 1000.0        # density of water in kg/m3
visc = 1.002*1E-3   # dynamic viscosity in Pa*s at 20 C
C1 = 18             # constant in Ferguson-Church equation
C2 = 1              # constant in Ferguson-Church equation

def v_stokes(rop,rof,d,visc,C1):
    R = (rop-rof)/rof # submerged specific gravity
    w = R*9.81*(d**2)/(C1*visc/rof)
    return w

def v_turbulent(rop,rof,d,visc,C2):
    R = (rop-rof)/rof
    w = (4*R*9.81*d/(3*C2))**0.5
    return w

def v_ferg(rop,rof,d,visc,C1,C2):
    R = (rop-rof)/rof
    w = ((R*9.81*d**2)/(C1*visc/rof+(0.75*C2*R*9.81*d**3)**0.5))
    return w

Let’s plot these equations for a range of particle diameters:

d = np.arange(0,0.0005,0.000001)
ws = v_stokes(rop,rof,d,visc,C1)
wt = v_turbulent(rop,rof,d,visc,C2)
wf = v_ferg(rop,rof,d,visc,C1,C2)
plot([0.25, 0.25],[0, 0.15],'k--')
plot([0.25/2, 0.25/2],[0, 0.15],'k--')
plot([0.25/4, 0.25/4],[0, 0.15],'k--')
text(0.36, 0.11, 'medium sand', fontsize=13)
text(0.16, 0.11, 'fine sand', fontsize=13)
text(0.075, 0.11, 'v. fine', fontsize=13)
text(0.08, 0.105, 'sand', fontsize=13)
text(0.01, 0.11, 'silt and', fontsize=13)
text(0.019, 0.105, 'clay', fontsize=13)
xlabel('grain diameter (mm)',fontsize=15)
ylabel('settling velocity (m/s)',fontsize=15)
D = [0.068, 0.081, 0.096, 0.115, 0.136, 0.273, 0.386, 0.55, 0.77, 1.09, 2.18, 4.36]
w = [0.00425, 0.0060, 0.0075, 0.0110, 0.0139, 0.0388, 0.0551, 0.0729, 0.0930, 0.141, 0.209, 0.307]
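Before looking at the plots, a quick spot check with the functions above (my own addition, using a 0.25 mm grain, roughly the fine/medium sand boundary): Stokes' Law badly overpredicts the settling velocity at this size, while the Ferguson-Church expression stays close to the measured river-sand values listed in D and w.

d_grain = 0.25e-3   # grain diameter in meters (0.25 mm)
print(v_stokes(rop, rof, d_grain, visc, C1))     # ~0.056 m/s, clearly too fast
print(v_ferg(rop, rof, d_grain, visc, C1, C2))   # ~0.032 m/s, in line with the data above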
That is great, but you would have to generate a new plot – and potentially do a new experiment – if you wanted to look at the behavior of particles in some other fluid than water. A more general treatment of the problem involves dimensionless variables; in this case these variables are the Reynolds number and the drag coefficient. The classic diagram for flow past a sphere is a plot of the drag coefficient against the Reynolds number. I will try to reproduce this plot, using settling velocities that come from the three equations At terminal settling velocity, the drag force equals the gravitational force acting on the grain: We also know that the gravitational force is given by the submerged weight of the grain: The drag coefficient is essentially a dimensionless version of the drag force: At terminal settling velocity, the particle Reynolds number is Using these relationships it is possible to generate the plot of drag coefficient vs. Reynolds number: d = np.arange(0.000001,0.3,0.00001) C2 = 0.4 # this constant is 0.4 for spheres, 1 for natural grains ws = v_stokes(rop,rof,d,visc,C1) wt = v_turbulent(rop,rof,d,visc,C2) wf = v_ferg(rop,rof,d,visc,C1,C2) Fd = (rop-rof)*4/3*pi*((d/2)**3)*9.81 # drag force Cds = Fd/(rof*ws**2*pi*(d**2)/8) # drag coefficient Cdt = Fd/(rof*wt**2*pi*(d**2)/8) Cdf = Fd/(rof*wf**2*pi*(d**2)/8) Res = rof*ws*d/visc # particle Reynolds number Ret = rof*wt*d/visc Ref = rof*wf*d/visc loglog(Res,Cds,linewidth=3, label='Stokes') loglog(Ret,Cdt,linewidth=3, label='Turbulent') loglog(Ref,Cdf,linewidth=3, label='Ferguson-Church') # data digitized from Southard textbook, figure 2-2: Re_exp = [0.04857,0.10055,0.12383,0.15332,0.25681,0.3343,0.62599,0.77049,0.94788,1.05956, Cd_exp = [479.30811,247.18175,199.24072,170.60068,112.62481,80.21341,45.37168,39.89885,34.56996, loglog(Re_exp, Cd_exp, 'o', markerfacecolor = [0.6, 0.6, 0.6], markersize=8) # Reynolds number for golf ball: rof_air = 1.2041 # density of air at 20 degrees C u = 50 # velocity of golf ball (m/s) d = 0.043 # diameter of golf ball (m) visc_air = 1.983e-5 # dynamic viscosity of air at 20 degrees C Re = rof_air*u*d/visc_air loglog([Re, Re], [0.4, 2], 'k--') text(3e4,2.5,'$Re$ for golf ball',fontsize=13) xlabel('particle Reynolds number ($Re$)', fontsize=15) ylabel('drag coefficient ($C_d$)', fontsize=15); The grey dots are experimental data points digitized from the excellent textbook by John Southard, available through MIT Open Courseware. As turbulence becomes dominant at larger Reynolds numbers, the drag coefficient converges to a constant value (which is equal to C2 in the equations above). Note however the departure of the experimental data from this ideal horizontal line: at high Reynolds numbers there is a sudden drop in drag coefficient as the laminar boundary layer becomes turbulent and the flow separation around the particle is delayed, that is, pushed toward the back; the separation wake becomes smaller and the turbulent drag decreases. Golf balls are not big enough to reach this point without some additional ‘help’; this help comes from the dimples on the surface of the ball that make the boundary layer turbulent and reduce the wake. You can view and download the IPython notebook version of this post from the IPython notebook viewer site. Ferguson, R. and Church, M. (2004) A simple universal equation for grain settling velocity. Journal of Sedimentary Research 74, 933–937. Salt and sediment: A brief history of ideas Salty weirdness Salt is a weird kind of rock. 
At first sight, it behaves like most other rocks: if you pick up a piece, it is hard, it is heavy, and it breaks if hit with a hammer. But put it under stress for thousands of years, and salt will behave like a fluid: relatively small forces can cause it to flow toward less stressful surroundings. This often means it will try to find its way to the surface. When deposited, sand and mud have lots of pore space filled with water and have relatively low density. However, as they get buried by more sediment, much pore space is lost, both through compaction and cementation. Sediments turn into sedimentary rocks, become harder, and their density increases. In contrast, salt doesn’t have much pore space to begin with; its density will stay the same, regardless of depth of burial. As both salt and sediment are buried to greater depths, an unstable condition develops: lighter salt lying under denser material. In addition, the location of the salt layer in the sediment column is not entirely random: it is in the nature of sedimentary basins to initially place salt at the bottom of the sediment pile. Extensive salt layers usually form early in a basin’s lifetime, when seawaters invade for the first time shallow depressions on a continent that is about to split into two along a rift zone. The Dead Sea is an obvious example that comes to Layering salt and sediment in this unstable order is a recipe for a spectacular geological show. As salt is trying to find its way to the surface, it forms drop-shaped blobs called diapirs; but also ridges, walls, and salt sheets. Several sheets can connect laterally into a huge salt canopy, a new salt layer that is entirely out-of-place or allochtonous. Salt can also act as a lubricating layer at the base of a thick sequence of sedimentary rocks. But I am rushing ahead a little bit; salt tectonics is such a new – but rapidly growing – science that salt canopies, despite their widespread presence in the subsurface Gulf of Mexico, were not recognized and described until the 1980s. Tectonics vs. buoyancy, Europe vs. America Before the beginning of the twentieth century, even with the role that salt played in human history, little was known about how salt domes formed. This was an age of rampant speculation; surface data was scarce because salt does not last very long after exposed as it quickly gets dissolved and washed away by precipitation. Many geologists thought that formation of salt domes didn’t require any significant salt deformation or displacement. But things have changed dramatically in 1901, with the discovery of the Spindletop oil field on top of a salt dome in southeastern Texas. The recognition that oil is often found on top of and around salt domes created a much stronger interest in understanding how exactly salt formations are put in place. European geologists thought that the main driving force was compression, the force that causes folding and thrusting and builds mountains. In Romania, where the Eastern Carpathians take a sharp turn toward the southwest, salt was found in the cores of oil-bearing anticlines. The contacts with the surrounding rocks were clearly discordant. These are the structures that prompted Ludovic Mrazec, professor of geology at University of Bucharest, to coin the term “diapir” in 1907. Mrazec’s explanation of how salt diapirs form. From Barton (1925). Salt in Germany and Poland also seemed to occur invariably in a compressional setting, in the cores of folds, next to folds that had no salt associated. 
It seemed obvious that salt was ‘pushed up’ by tectonic forces, and it appeared unlikely that the rise of salt itself was causing the folding. But the discovery of a multitude of salt diapirs in the Gulf of Mexico made it clear that they can occur far away from any mountains and compressive tectonic forces. The much simpler setting and relative lack of deformation in the Gulf proved informative. “The Roumanian salt-dome geologist possibly may have more to learn from the American salt domes than the American salt-dome geologist has to learn from the Roumanian domes. The occurrence of the American domes in a region of tectonic quiescence suggests that tectonic thrust cannot have the importance postulated by Mrazec” – wrote Donald Barton in 1925.
This was also the time when the density difference between salt and sediment came into discussion. Gravity measurements in the Gulf of Mexico showed anomalies above salt domes that were due to the lower density of salt. It was increasingly recognized that density inversion must play an important role in diapirism, especially where compressive tectonic forces were absent. In addition, by the 1930s geologists had reached a consensus that salt diapirs must somehow punch through the overlying sediment. They seemed to ignore the fact that, as Wade (1931) put it, you cannot drive a putty nail through a wooden board. As mentioned before, salt does behave like a fluid over geological time scales. But how can it penetrate thick layers of hardened sedimentary rock?
A brilliant idea: downbuilding
The solution to this problem came in 1933, from the same Donald Barton who was discussing the differences between European and American salt domes in 1925. He suggested that diapirs can form without much piercement of the sediment above. Instead, once a small dome is initiated, it can simply stay in place, always at or close to the surface, while sediment is deposited around it and the source salt layer subsides: “it is the sediments which move, and not the salt core. The energy requirement (…) is very much less than if there were actual upward movement of the salt.”
The evolution of salt diapirs through ‘downbuilding’. Salt domes are always close to the surface and diapirism goes hand-in-hand with sedimentation. From Barton (1933).
This was a key insight: it got rid of the “room problem”, the need for moving huge volumes of hard rock out of the way of the rising salt. It also highlighted that salt movement can happen at the same time as sedimentation, a fact that became abundantly obvious later as high-quality seismic data became available. But the concept of ‘downbuilding’ was ignored for the next fifty years.
The beauty of instabilities
The main reason for conveniently forgetting Barton’s idea was that density inversion between two fluids could be nicely studied in the lab and described with elegant equations. In one of the papers that kicked off this fascination with Rayleigh-Taylor instabilities, Nettleton (1934) used corn syrup and less dense crude oil to visualize diapir-like blobs of fluid in a transparent cylinder and to show that gravity alone, without any help from contractional forces, was enough to generate structures similar to salt domes.
Less dense crude oil (black) forming diapir-like blobs as it rises through higher-density corn syrup (yellow). Redrawn from Nettleton (1934).
One problem with this approach was that oil and syrup can be photographed during deformation, but the transient structures could not be carefully dissected and analyzed later.
Materials of higher viscosity were needed for that; however, increasing the viscosity resulted in a density difference too small to get the fluids moving in the first place. The trick was to place the whole experiment in a centrifuge and use the centrifugal force to imitate a larger-than-normal gravitational force. This approach formed the basis of a productive line of research on gravity tectonics in the laboratory of the Norwegian-Swedish geologist Hans Ramberg. The results are probably more relevant to what is happening deeper in the Earth, at higher temperatures and pressures, where most rocks become more similar in behavior to salt.
Modern salt tectonics
By the late 1980s it had become quite obvious that kilometer-thick piles of sedimentary rock cannot be treated as fluids, and that salt-sediment interaction is more similar to placing and deforming slabs of brittle material on top of a viscous fluid. Seismic data from salt-bearing sedimentary basins suggested that the histories of salt movement and sedimentation were highly interconnected and that Barton’s downbuilding concept was strongly relevant. Three-dimensional seismic data also showed the variety and complexity of allochthonous salt bodies in salt-rich sedimentary basins. Sandbox experiments with more realistic material properties and ongoing sedimentation during deformation were performed and the results beautifully visualized. The behavior of turbidity currents flowing over complex salt-related submarine topography was investigated. Hundreds of scientific papers were written on salt tectonics, both by industry geoscientists and researchers in academia.
N-S cross section in the Gulf of Mexico. Large volumes of the Jurassic Louann salt have been displaced and squeezed into a salt canopy surrounded by much younger sediments. From Pilcher et al., 2011.
And there is quite a bit left to explore and understand.
References and further reading
Barton, D. C. (1926) The American Salt-Dome Problems in the Light of the Roumanian and German Salt Domes, AAPG Bulletin, v. 9, p. 1227–1268.
Barton, D. C. (1933) Mechanics of Formation of Salt Domes with Special Reference to Gulf Coast Salt Domes of Texas and Louisiana, AAPG Bulletin, v. 17, p. 1025–1083.
Hudec, M., & Jackson, M. (2007) Terra infirma: Understanding salt tectonics. Earth Science Reviews, 82(1-2), 1–28.
Jackson, M. (1996) Retrospective salt tectonics, in M.P.A. Jackson, D.G. Roberts, and S. Snelson, eds., Salt tectonics: a global perspective: AAPG Memoir 65, p. 1–28. [great summary of the history of salt tectonics]
Mrazec, L. (1907) Despre cute cu sȋmbure de străpungere [On folds with piercing cores]: Bul. Soc. Stiint., Romania, v. 16, p. 6–8.
Nettleton, L. L. (1934) Fluid Mechanics of Salt Domes, AAPG Bulletin, v. 18, p. 1–30.
Pilcher, R. S., Kilsdonk, B., & Trude, J. (2011) Primary basins and their boundaries in the deep-water northern Gulf of Mexico: Origin, trap types, and petroleum system implications. AAPG Bulletin, v. 95(2), p. 219–240.
Wade, A. (1931) Intrusive salt bodies in coastal Asir, south western Arabia: Institute of Petroleum Technologists Journal, v. 17, p. 321–330, 357–361.
The complexity of sinuous channel deposits in three dimensions
The beauty of the shapes and patterns created by meandering rivers has long attracted the attention of many geomorphologists, civil engineers, and sedimentologists. Unless they are fairly steep or have highly stable and unerodible banks, rivers do not like to follow a straight course and tend to develop a sinuous plan-view pattern.
The description and mathematical modeling of these curves is a fascinating subject, but that is not what I want to talk about here and now. It is hard enough to understand the plan-view evolution of rivers, especially if one is interested in the long-term results – when cutoffs become important – but things get really complicated when it comes to the three-dimensional structure of the deposits that meandering rivers leave behind. The same can be said about sinuous channels on the seafloor, created and maintained by dirty mixtures of water and sediment (called turbidity currents). An ever-increasing number of seafloor and seismic images show that highly sinuous submarine channels are almost as common as their subaerial counterparts, but much remains to be learned about the geometries of their deposits that accumulate through geological time. Using simple modeling of how channel surfaces migrate through time, two recent papers attempt to illustrate the three-dimensional structure of sinuous fluvial and submarine channel deposits.
In the Journal of Sedimentary Research, Willis and Tang (2010) show how slightly different patterns of fluvial meander migration result in different deposit geometries and different distributions of grain size, porosity and permeability. [These properties are important because they determine how fluids flow - or don't flow - through the pores of the sediment.] River meanders can either grow in a direction perpendicular to the overall downslope orientation, or they can keep the same width and migrate downstream through translation. In the latter case – which is often characteristic of rivers incising into older sediments – deposits forming on the downstream, concave bank of point bars will be preferentially preserved. These deposits tend to be finer grained than the typical convex-bank point bar sediments. In addition to building a range of models and analyzing their geometries, Willis and Tang also ran simulations of how oil would be displaced by water in them. One of their findings is that sinuous rivers that keep adding sediment in the same area over time (in other words, rivers that aggrade) tend to form better connected sand bodies than rivers which keep snaking around roughly in the same horizontal plane, without aggradation.
Map of deposits forming as river meanders grow (from Willis and Tang, 2010).
Cross sections through the deposits of two meander bends (locations shown in figure above). Colors represent permeability, red being highly permeable and blue impermeable sediment. From Willis and Tang, 2010.
Check out the paper itself for more images like these, plus discussions of concave-bank deposition, cutoff formation, and filling of abandoned channels.
The second paper (Sylvester, Pirmez, and Cantelli, 2010; and yes, one of the authors is also the author of this blog post, so don't expect any constructive criticism here) focuses on submarine channels and their overbank deposits, but the starting point and the modeling techniques are similar: take a bunch of sinuous channel centerlines and generate surfaces around them that reflect the topography of the system at every time step. However, we know much less about submarine channels than fluvial ones, because it is much more difficult to collect data at and from the bottom of the ocean than it is from the river in your backyard.
The result is that some of the simplifications in our model are controversial; to many sedimentary geologists, submarine channels and their deposits are fundamentally different from rivers and point bars, and there is not much use in even comparing the two. Part of the problem is that not all submarine channels are made equal, and, when looking at an outcrop, it is not easy – or outright impossible – to tell what kind of geomorphology produced the stratigraphy. In fact, the number of exposures that represent highly sinuous submarine channels, as observed on the seafloor and numerous seismic images, is probably fairly limited. One thing is quite clear, however: many submarine channels show plan-view migration patterns that are very similar to those of rivers, and this large-scale structure imposes some significant constraints on the geometry of the deposits as well. That being said, nobody denies that there are plenty of significant differences between real and submarine ‘rivers’ [note quotation marks]. A very important one is the amount of overbank – or levee – deposition: turbidity currents often overflow their channel banks as thick muddy clouds and form much thicker deposits than the overbank sediment layers typical of rivers. When these high rates of levee deposition combine with the strong three-dimensionality of channel migration, complex geometries result that are quite tricky to understand just by looking at a single cross section.
Cross section and chronostratigraphic diagram through a submarine channel system with inner and outer levees (from Sylvester et al., 2010).
One of the consequences of the channel migration is the formation of erosional surfaces that develop through a relatively long time and do not correspond to a geomorphologic surface at all (see the red erosional zones in the Wheeler diagram above). This difference between stratigraphic and geomorphologic surfaces is essential, yet often downplayed or even ignored in stratigraphy. In terms of geomorphology, the combination of channel movement in both horizontal and vertical directions and the extensive levee deposition results in a wide valley with scalloped margins and numerous terraces.
Three-dimensional view of an incising channel-levee system (from Sylvester et al., 2010).
This second paper is part of a nice collection focusing on submarine sedimentary systems that is going to be published as a special issue of Marine and Petroleum Geology, a collection that originated from a great conference held in 2009 in Torres del Paine National Park, Southern Chile.
PS. As I am typing this, I see that Brian over at Clastic Detritus is also thinking about submarine channels and subaerial rivers… Those channels formed by saline density currents on the slope of the Black Sea are fascinating.
Willis, B., & Tang, H. (2010). Three-Dimensional Connectivity of Point-Bar Deposits. Journal of Sedimentary Research, 80(5), 440–454. DOI: 10.2110/jsr.2010.046
Sylvester, Z., Pirmez, C., & Cantelli, A. (2010). A model of submarine channel-levee evolution based on channel trajectories: Implications for stratigraphic architecture. Marine and Petroleum Geology. DOI: 10.1016/j.marpetgeo.2010.05.012
Hillslope diffusion
Modeling erosion and deposition of sediment using the diffusion equation is among the important subjects that are usually omitted from sedimentary geology textbooks.
Part of the reason for this is that ‘conventional’ sedimentary geology tended to only pay lip service to earth surface processes and was more interested in describing the stratigraphic record than in figuring out how it relates to geomorphology. Nowadays, a good discussion of stratigraphy and sedimentology can no longer ignore what geomorphologists have learned about landscape evolution. (One textbook that clearly recognizes this is this one.) But let's get back to the subject of this post.
Hillslope evolution can be modeled with the diffusion equation, one of the most common differential equations in science, applied for example to describe how differences in temperature are eliminated through heat conduction. In the case of heat, the heat flux is proportional to the rate of spatial temperature change; on hillslopes, the sediment flux is proportional to the spatial rate of change in elevation. This last quantity of course is the slope itself. In other words,
q = -k*dh/dx, that is, q = -k*slope,
where q is the volumetric sediment flux per unit length, k is a constant called diffusivity, h is the elevation, and x is the horizontal coordinate. We also know that sediment does not disappear into thin air: considering a small area of the hillslope, the amount of sediment entering and leaving this area will determine how large the change in elevation will be:
dh/dt = -dq/dx,
in other words, deposition or erosion at any location is determined by the change in sediment flux. Combining this equation with the previous one, we arrive at the diffusion equation:
dh/dt = k*d^2h/dx^2.
Note that the quantity on the right side is the second derivative (or curvature) of the slope profile. Large negative curvatures result in rapid erosion; places with large positive curvature have high rates of deposition. Through time, the bumps and troughs of the hillslope are smoothed out through erosion and deposition.
The simplest possible case is the diffusion of a fault scarp. The animation below illustrates how a 1 m high fault scarp gets smoothed out through time; the evolution of slope and curvature is also shown. The dashed line indicates the original topography, at time 0. [The plots were generated using Ramon Arrowsmith's Matlab code]. More complicated slope profiles can be modeled as well; here is an example with two fault scarps. Note how both erosion and deposition get much slower as the gradients become more uniform.
The simplicity of the diffusion equation makes it an attractive tool in modeling landscape evolution. In addition to hillslopes and fault scarps, it has been successfully applied in modeling – for example – river terraces, deltaic clinoforms, cinder cones, fluvial systems, and foreland basin stratigraphy. However, it is important to know when and where the assumptions behind it become invalid. For example, steep slopes often have a non-linear relationship between sediment flux and slope, as mass movements dramatically increase sediment flux above a critical slope value. Also, the models shown here would fail to reproduce the topography of a system where not all sediment is deposited at the toe of the steeper slope, but a significant part is carried away by a river. And that brings us closer to advection, a subject that I might take notes about at another time.
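If you want to experiment with this yourself, here is a minimal Python version of the same idea (the figures in this post were made with Ramon Arrowsmith's Matlab code; the sketch below is my own, and the diffusivity, grid size, and number of time steps are arbitrary choices):

import numpy as np

# explicit finite-difference solution of dh/dt = k*d2h/dx2 for a 1 m high fault scarp
k = 1.0                          # diffusivity (length^2 per time; arbitrary value)
dx = 1.0                         # grid spacing (m)
dt = 0.2 * dx**2 / k             # time step small enough for numerical stability
x = np.arange(0, 100, dx)
h = np.where(x < 50, 1.0, 0.0)   # initial topography: a 1 m high scarp at x = 50

for step in range(2000):
    curv = (h[2:] - 2*h[1:-1] + h[:-2]) / dx**2   # second derivative (curvature)
    h[1:-1] += k * dt * curv                      # erode where curvature is negative, deposit where positive
# h now holds the smoothed scarp profile; the slope is np.gradient(h, dx)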
Further reading:
1) The book "Quantitative Modeling of Earth Surface Processes" by Jon Pelletier has a chapter with lots of details about the diffusion equation.
2) Analog and numerical modeling of hillslope diffusion – a nice lab exercise.
Climbing Ripples I
Ripples, dunes, cross bedding and cross lamination have always been some of the sexiest subjects in sedimentary geology. They are certainly responsible (in part) for my choice of a certain walk of life that consists of studying dirt. You might say that everything has already been said about ripples and dunes, and you clearly get that feeling if you read some of J.R.L. Allen's work on the subject (and that can be a lot of reading, by the way) or look at the fantastic multimedia material that David Rubin at the USGS put together. [Of course, there are numerous other authors who have written great papers on the subject, but it is not my purpose here to write a history of bedform sedimentology. Although that would be an interesting subject, if somebody had the time for it.]
However, little of this material gets into the standard sedimentology and stratigraphy textbooks. Maybe rightly so: after all, textbooks are not supposed to include all the details about any particular subject. And maybe there are higher-priority issues out there, like whether we should call something a turbidite or a debrite. [Sorry, I could not refrain from typing that].
Take for example climbing ripples. They form when several trains of ripples are superimposed on each other and they seem to ‘climb’, by generating stratigraphic surfaces that are tilted in an upcurrent direction. [Note however that these surfaces are *not* topographic - or time - surfaces; more on that later]. Numerous textbooks and many papers mention climbing ripple cross lamination, but often the explanation is something like "they indicate high rates of deposition", or "the steepness of the climb and stoss-side preservation are a function of the ratio between suspended-load and bedload". The question is, what do we *exactly* mean by 'high rates of deposition'? If we cannot put numbers on it, it is not that informative. Also, by 'suspended load', do we mean suspended load concentration? Or deposition from suspended load and bedload, respectively? Those statements are not necessarily wrong, but they do not do justice to the models that were published many years ago, models that actually have some numbers and equations behind the "conclusion" section.
The key paper that I am talking about is "A quantitative model of climbing ripples and their cross-laminated deposit", by J.R.L. Allen, published in 1970 in the journal Sedimentology. The most important relationship that Allen derived links the angle of climb ζ (see the sketch below) to the rate of deposition M (measured in units of mass over unit time and area), the rate of bedload sediment transport j, and the ripple height H:
tanζ = MH/(2j)
This is simply based on decomposing the sediment flux to and through the bed into vertical and horizontal components (plus a relationship between the horizontal sediment transport rate in ripples and the horizontal migration rate of the bedforms). Note that the quantity j refers to the sediment mass that moves through a cross section perpendicular to the general current direction, and does this by being part of the ripples themselves. In other words, there is no direct equivalence between M and suspended load deposition, and j and bedload deposition. Although it is possible that in general suspended load contributes more to M than deposition from bedload, nowhere does it say that grains transported within the bedload cannot be deposited on the stoss side of the ripples and thus contribute to the vertical growth of the bed.
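To make the algebra concrete, here is a tiny Python sketch of Allen's relationship; the numbers are made-up illustrative values, not data from the 1970 paper:

import numpy as np

M = 0.5     # rate of deposition (mass per unit area and time)
H = 0.02    # ripple height (m)
j = 0.1     # bedload transport rate through the ripples (mass per unit width and time)

zeta = np.degrees(np.arctan(M * H / (2 * j)))   # angle of climb, in degrees
stoss_dip = 10.0                                # assumed dip of the stoss side (degrees)
if zeta > stoss_dip:
    print(round(zeta, 2), 'degrees: supercritical climb, stoss sides preserved')
else:
    print(round(zeta, 2), 'degrees: subcritical climb, no stoss-side preservation')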
Obviously, if the angle of climb is smaller than the dip of the stoss side, there will be no stoss-side preservation and the resulting cross lamination will look like the sketch below (which, by the way, was quite an effort to generate in Matlab; you can easily do this and much, much more with David Rubin's Matlab code, but I wanted to understand things a little better by coding something simple myself). This is often called 'A-type' (or subcritical) climbing ripple cross lamination, but everybody knows what you are talking about if you "simply" call it climbing ripple cross lamination with no stoss-side preservation.
In contrast, aggradation is much more prominent if the angle of climb is larger than the slope of the stoss side, and in this case deposition takes place on the stoss sides as well, resulting in 'S-type' (or supercritical) lamination. Of course, nowhere does it say that the rate of deposition M or the bedload transport rate j must stay constant through time. If the ratio of these quantities changes, the angle of climb will change as well. This sketch shows an example where the rate of deposition M increases through time.
One of the main points of the paper is that there is a fundamental difference between the rate of deposition M and the bedload sediment transport rate j. A rate of deposition larger than zero means that the sediment transport rate within the flow must decrease from an upcurrent position to a downcurrent position; a simple mass balance tells us that this change in the sediment transport rate has to equal the rate of deposition. In other words, the rate of deposition M is a derivative of the sediment transport rate, and as such, does not belong in the same drawer of physical quantities as the bedload transport rate. Along the same line of thought, Allen emphasizes that climbing ripple lamination says something about flow uniformity and steadiness. A uniform and steady flow can only form a single train of ripples; either non-uniformity or unsteadiness is needed to have climbing-ripple deposition.
That's it for now; to be continued. It's time to do my taxes.
Further reading: Brian has a Friday Field Photo and a Geopuzzle on climbing ripples. Here are some pictures and a movie of climbing ripples generated by a turbidity current in a flume.
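PS: for anyone who wants to play with this, here is a small Python sketch in the same spirit as my Matlab experiment (it is a quick approximation of my own, not David Rubin's code; the ripple shape, wavelength, and migration/aggradation rates below are arbitrary choices):

import numpy as np
import matplotlib.pyplot as plt

# successive ripple profiles migrate downstream and aggrade;
# each older surface is clipped by all younger ones, leaving climbing-ripple lamination
x = np.linspace(0, 1.0, 1000)          # distance (m)
L, H = 0.15, 0.02                      # ripple wavelength and height (m)

def ripple(x, shift):
    # asymmetric periodic profile: gentle stoss side, steeper lee side
    phase = ((x - shift) % L) / L
    return H * np.where(phase < 0.8, phase / 0.8, (1 - phase) / 0.2)

n_steps = 40
dx_step = 0.01                         # downstream migration per step (m)
dz_step = 0.001                        # aggradation per step (m); sets the angle of climb
surfs = [ripple(x, i * dx_step) + i * dz_step for i in range(n_steps)]

for i, s in enumerate(surfs):
    preserved = np.minimum.reduce([s] + surfs[i + 1:])   # keep only what younger surfaces do not erode
    plt.plot(x, preserved, 'k', linewidth=0.5)
plt.gca().set_aspect(5)                # vertical exaggeration
plt.xlabel('distance (m)')
plt.ylabel('elevation (m)')
plt.show()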
Srīnivāsa Aiyangār Rāmānujan FRS, better known as Srinivasa Iyengar Ramanujan (22 December 1887 – 26 April 1920), was an Indian mathematician and autodidact who, with almost no formal training in pure mathematics, made extraordinary contributions to mathematical analysis, number theory, infinite series and continued fractions. Ramanujan's talent was said by the English mathematician G. H. Hardy to be in the same league as that of legendary mathematicians such as Gauss, Euler, Cauchy, Newton and Archimedes, and he is widely regarded as one of the towering geniuses in mathematics. Born in Erode, Tamil Nadu, India, to a poor Brahmin family, Ramanujan first encountered formal mathematics at age 10. He demonstrated a natural ability, and was given books on advanced trigonometry written by S. L. Loney.
He mastered them by age 12, and even discovered theorems of his own, including independently re-discovering Euler's identity. He demonstrated unusual mathematical skills at school, winning accolades and awards. By 17, Ramanujan conducted his own mathematical research on Bernoulli numbers and the Euler–Mascheroni constant. He received a scholarship to study at Government College in Kumbakonam, but lost it when he failed his non-mathematical coursework. He joined another college to pursue independent mathematical research, working as a clerk in the Accountant-General's office at the Madras Port Trust Office to support himself. In 1912–1913, he sent samples of his theorems to three academics at the University of Cambridge. Only Hardy recognised the brilliance of his work, subsequently inviting Ramanujan to visit and work with him at Cambridge. He became a Fellow of the Royal Society and a Fellow of Trinity College, Cambridge, dying of illness, malnutrition and possibly liver infection in 1920 at the age of 32. During his short lifetime, Ramanujan independently compiled nearly 3900 results (mostly identities and equations). Although a small number of these results were actually false and some were already known, most of his claims have now been proven correct. He stated results that were both original and highly unconventional, such as the Ramanujan prime and the Ramanujan theta function, and these have inspired a vast amount of further research.
However, the mathematical mainstream has been rather slow in absorbing some of his major discoveries. The Ramanujan Journal, an international publication, was launched to publish work in all areas of mathematics influenced by his work.
Early life
Ramanujan was born on 22 December 1887 in the city of Erode, Tamil Nadu, India, at the residence of his maternal grandparents. His father, K. Srinivasa Iyengar, worked as a clerk in a sari shop and hailed from the district of Thanjavur. His mother, Komalatammal, was a housewife and also sang at a local temple. They lived in Sarangapani Street in a traditional home in the town of Kumbakonam. The family home is now a museum. When Ramanujan was a year and a half old, his mother gave birth to a son named Sadagopan, who died less than three months later. In December 1889, Ramanujan had smallpox and recovered, unlike thousands in the Thanjavur district who died from the disease that year. He moved with his mother to her parents' house in Kanchipuram, near Madras (now Chennai). In November 1891, and again in 1894, his mother gave birth, but both children died in infancy. On 1 October 1892, Ramanujan was enrolled at the local school. In March 1894, he was moved to a Telugu medium school. After his maternal grandfather lost his job as a court official in Kanchipuram, Ramanujan and his mother moved back to Kumbakonam and he was enrolled in the Kangayan Primary School.
After his paternal grandfather died, he was sent back to his maternal grandparents, who were now living in Madras. He did not like school in Madras, and he tried to avoid attending. His family enlisted a local constable to make sure he attended school. Within six months, Ramanujan was back in Kumbakonam. Since Ramanujan's father was at work most of the day, his mother took care of him as a child. He had a close relationship with her. From her, he learned about tradition and the puranas. He learned to sing religious songs, to attend pujas at the temple, and to keep particular eating habits – all of which are part of Brahmin culture. At the Kangayan Primary School, Ramanujan performed well. Just before the age of 10, in November 1897, he passed his primary examinations in English, Tamil, geography and arithmetic. With his scores, he finished first in the district. That year, Ramanujan entered Town Higher Secondary School, where he encountered formal mathematics for the first time. By age 11, he had exhausted the mathematical knowledge of two college students who were lodgers at his home. He was later lent a book on advanced trigonometry written by S. L. Loney. He completely mastered this book by the age of 13 and discovered sophisticated theorems on his own. By 14, he was receiving merit certificates and academic awards, which continued throughout his school career; he also assisted the school in the logistics of assigning its 1200 students (each with their own needs) to its 35-odd teachers. He completed mathematical exams in half the allotted time, and showed a familiarity with infinite series. In 1903, when he was 16, Ramanujan obtained a library-loaned copy of a book by G. S. Carr from a friend. The book was titled A Synopsis of Elementary Results in Pure and Applied Mathematics and was a collection of 5000 theorems. Ramanujan reportedly studied the contents of the book in detail. The book is generally acknowledged as a key element in awakening the genius of Ramanujan.
The next year, he had independently developed and investigated the Bernoulli numbers and had calculated Euler's constant up to 15 decimal places. His peers at the time commented that they "rarely understood him" and "stood in respectful awe" of him. When he graduated from Town Higher Secondary School in 1904, Ramanujan was awarded the K. Ranganatha Rao prize for mathematics by the school's headmaster, Krishnaswami Iyer. Iyer introduced Ramanujan as an outstanding student who deserved scores higher than the maximum possible marks. He received a scholarship to study at Government Arts College, Kumbakonam. However, Ramanujan was so intent on studying mathematics that he could not focus on any other subjects and failed most of them, losing his scholarship in the process. In August 1905, he ran away from home, heading towards Visakhapatnam. He later enrolled at Pachaiyappa's College in Madras. He again excelled in mathematics but performed poorly in other subjects such as physiology. Ramanujan failed his Fine Arts degree exam in December 1906 and again a year later. Without a degree, he left college and continued to pursue independent research in mathematics. At this point in his life, he lived in extreme poverty and was often on the brink of starvation.
Adulthood in India
On 14 July 1909, Ramanujan was married to a nine-year-old bride, Janaki Ammal. In the branch of Hinduism to which Ramanujan belonged, marriage was a formal engagement that was consummated only after the bride turned 17 or 18, as per the traditional calendar. After the marriage, Ramanujan developed a hydrocele testis, an abnormal swelling of the tunica vaginalis, an internal membrane in the testicle.
The condition could be treated with a routine surgical operation that would release the blocked fluid in the scrotal sac. His family did not have the money for the operation, but in January 1910, a doctor volunteered to do the surgery for free. After his successful surgery, Ramanujan searched for a job. He stayed at friends' houses while he went door to door around the city of Madras (now Chennai) looking for a clerical position. To make some money, he tutored some students at Presidency College who were preparing for their F.A. exam. In late 1910, Ramanujan was sick again, possibly as a result of the surgery earlier in the year. He feared for his health, and even told his friend, R. Radakrishna Iyer, to "hand these [my mathematical notebooks] over to Professor Singaravelu Mudaliar [mathematics professor at Pachaiyappa's College] or to the British professor Edward B. Ross, of the Madras Christian College." After Ramanujan recovered and got back his notebooks from Iyer, he took a northbound train from Kumbakonam to Villupuram, a coastal city under French control.
Attention from mathematicians
He met deputy collector V. Ramaswamy Aiyer, who had recently founded the Indian Mathematical Society. Ramanujan, wishing for a job at the revenue department where Ramaswamy Aiyer worked, showed him his mathematics notebooks. As Ramaswamy Aiyer later recalled: "I was struck by the extraordinary mathematical results contained in it [the notebooks]. I had no mind to smother his genius by an appointment in the lowest rungs of the revenue department." Ramaswamy Aiyer sent Ramanujan, with letters of introduction, to his mathematician friends in Madras. Some of these friends looked at his work and gave him letters of introduction to R. Ramachandra Rao, the district collector for Nellore and the secretary of the Indian Mathematical Society. Ramachandra Rao was impressed by Ramanujan's research but doubted that it was actually his own work.
Ramanujan mentioned a correspondence he had with Professor Saldhana, a notable Mumbai mathematician, in which Saldhana expressed a lack of understanding for his work but concluded that he was not a phony. Ramanujan's friend, C. V. Rajagopalachari, persisted with Ramachandra Rao and tried to quell any doubts over Ramanujan's academic integrity. Rao agreed to give him another chance, and he listened as Ramanujan discussed elliptic integrals, hypergeometric series, and his theory of divergent series, which Rao said ultimately "converted" him to a belief in Ramanujan's mathematical brilliance. When Rao asked him what he wanted, Ramanujan replied that he needed some work and financial support. Rao consented and sent him to Madras. He continued his mathematical research with Rao's financial aid taking care of his daily needs. Ramanujan, with the help of Ramaswamy Aiyer, had his work published in the Journal of the Indian Mathematical Society. One of the first problems he posed in the journal was to evaluate the infinitely nested radical
$\sqrt{1 + 2\sqrt{1 + 3\sqrt{1 + 4\sqrt{1 + \cdots}}}}.$
He waited for a solution to be offered in three issues, over six months, but failed to receive any. At the end, Ramanujan supplied the solution to the problem himself. On page 105 of his first notebook, he formulated an equation that could be used to solve the infinitely nested radicals problem. Using this equation, the answer to the question posed in the Journal was simply 3.
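One standard way to see why the value is 3 (this is a sketch of the idea, not necessarily the argument as written in Ramanujan's notebook) is to note that $n + 1 = \sqrt{1 + n(n+2)}$ and to apply this identity repeatedly: $3 = \sqrt{1 + 2\cdot4} = \sqrt{1 + 2\sqrt{1 + 3\cdot5}} = \sqrt{1 + 2\sqrt{1 + 3\sqrt{1 + 4\cdot6}}} = \cdots$, which, granting convergence, yields the nested radical above.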
Ramanujan wrote his first formal paper for the Journal on the properties of Bernoulli numbers. One property he discovered was that the denominators of the fractions of Bernoulli numbers were always divisible by six. He also devised a method of calculating Bn based on previous Bernoulli numbers. One of these methods went as follows: it will be observed that if n is even but not equal to zero, (i) Bn is a fraction and the numerator of Bn/n in its lowest terms is a prime number, and (ii) the denominator of Bn contains each of the factors 2 and 3 once and only once. In his 17-page paper, "Some Properties of Bernoulli's Numbers", Ramanujan gave three proofs, two corollaries and three conjectures. Ramanujan's writing initially had many flaws. As editor M. T. Narayana Iyengar noted: "Mr. Ramanujan's methods were so terse and novel and his presentation so lacking in clearness and precision, that the ordinary [mathematical reader], unaccustomed to such intellectual gymnastics, could hardly follow him." Ramanujan later wrote another paper and also continued to provide problems in the Journal. In early 1912, he got a temporary job in the Madras Accountant General's office, with a salary of 20 rupees per month. He lasted for only a few weeks. Toward the end of that assignment he applied for a position under the Chief Accountant of the Madras Port Trust. In a letter dated 9 February 1912, Ramanujan wrote:
"I understand there is a clerkship vacant in your office, and I beg to apply for the same. I have passed the Matriculation Examination and studied up to the F.A. but was prevented from pursuing my studies further owing to several untoward circumstances. I have, however, been devoting all my time to Mathematics and developing the subject. I can say I am quite confident I can do justice to my work if I am appointed to the post. I therefore beg to request that you will be good enough to confer the appointment on me."
Attached to his application was a recommendation from E. W. Middlemast, a mathematics professor at the Presidency College, who wrote that Ramanujan was "a young man of quite exceptional capacity in Mathematics". Three weeks after he had applied, on 1 March, Ramanujan learned that he had been accepted as a Class III, Grade IV accounting clerk, making 30 rupees per month. At his office, Ramanujan easily and quickly completed the work he was given, so he spent his spare time doing mathematical research. Ramanujan's boss, Sir Francis Spring, and S. Narayana Iyer, a colleague who was also treasurer of the Indian Mathematical Society, encouraged Ramanujan in his mathematical pursuits.
Contacting English mathematicians
In the spring of 1913, Narayana Iyer, Ramachandra Rao and E. W. Middlemast tried to present Ramanujan's work to British mathematicians. One mathematician, M. J. M. Hill of University College London, commented that Ramanujan's papers were riddled with holes. He said that although Ramanujan had "a taste for mathematics, and some ability", he lacked the educational background and foundation needed to be accepted by mathematicians. Although Hill did not offer to take Ramanujan on as a student, he did give thorough and serious professional advice on his work. With the help of friends, Ramanujan drafted letters to leading mathematicians at Cambridge University. The first two professors, H. F. Baker and E. W. Hobson, returned Ramanujan's papers without comment.
On 16 January 1913, Ramanujan wrote to G. H. Hardy. Coming from an unknown mathematician, the nine pages of mathematical wonder made Hardy initially view Ramanujan's manuscripts as a possible "fraud". Hardy recognised some of Ramanujan's formulae but others "seemed scarcely possible to believe". One of the theorems Hardy found so incredible was found on the bottom of page three (valid for 0 < a < b + 1/2). Hardy was also impressed by some of Ramanujan's other work relating to infinite series. The first result had already been determined by a mathematician named Bauer. The second one was new to Hardy, and was derived from a class of functions called hypergeometric series, which had first been researched by Leonhard Euler and Carl Friedrich Gauss. Compared to Ramanujan's work on integrals, Hardy found these results "much more intriguing". After he saw Ramanujan's theorems on continued fractions on the last page of the manuscripts, Hardy commented that the "[theorems] defeated me completely; I had never seen anything in the least like them before". He figured that Ramanujan's theorems "must be true, because, if they were not true, no one would have the imagination to invent them". Hardy asked a colleague, J. E. Littlewood, to take a look at the papers. Littlewood was amazed by the mathematical genius of Ramanujan. After discussing the papers with Littlewood, Hardy concluded that the letters were "certainly the most remarkable I have received" and commented that Ramanujan was "a mathematician of the highest quality, a man of altogether exceptional originality and power". One colleague, E. H. Neville, later commented that "not one [theorem] could have been set in the most advanced mathematical examination in the world". On 8 February 1913, Hardy wrote a letter to Ramanujan, expressing his interest in his work. Hardy also added that it was "essential that I should see proofs of some of your assertions".
Before his letter arrived in Madras during the third week of February, Hardy contacted the India Office to plan for Ramanujan's trip to Cambridge. Secretary Arthur Davies of the Advisory Committee for Indian Students met with Ramanujan to discuss the overseas trip. In accordance with his Brahmin upbringing, Ramanujan refused to leave his country to "go to a foreign land". Meanwhile, Ramanujan sent a letter packed with theorems to Hardy, writing, "I have found a friend in you who views my labour sympathetically." To supplement Hardy's endorsement, Gilbert Walker, a former mathematical lecturer at Trinity College, Cambridge, looked at Ramanujan's work and expressed amazement, urging him to spend time at Cambridge. As a result of Walker's endorsement, B. Hanumantha Rao, a mathematics professor at an engineering college, invited Ramanujan's colleague Narayana Iyer to a meeting of the Board of Studies in Mathematics to discuss "what we can do for S. Ramanujan". The board agreed to grant Ramanujan a research scholarship of 75 rupees per month for the next two years at the University of Madras. While he was engaged as a research student, Ramanujan continued to submit papers to the Journal of the Indian Mathematical Society. In one instance, Narayana Iyer submitted some theorems of Ramanujan on summation of series to the above mathematical journal, adding “The following theorem is due to S. Ramanujan, the mathematics student of Madras University”. Later in November, British Professor Edward B. Ross of Madras Christian College, whom Ramanujan had met a few years earlier, stormed into his class one day with his eyes glowing, asking his students, “Does Ramanujan know Polish?” The reason was that in one paper, Ramanujan had anticipated the work of a Polish mathematician whose paper had just arrived by the day's mail. In his quarterly papers, Ramanujan drew up theorems to make definite integrals more easily solvable. Working off Giuliano Frullani's 1821 integral theorem, Ramanujan formulated generalisations that could be made to evaluate formerly unyielding integrals. Hardy's correspondence with Ramanujan soured after Ramanujan refused to come to England. Hardy enlisted a colleague lecturing in Madras, E. H. Neville, to mentor and bring Ramanujan to England. Neville asked Ramanujan why he would not go to Cambridge. Ramanujan apparently had now accepted the proposal; as Neville put it, "Ramanujan needed no converting and that his parents' opposition had been withdrawn".
Apparently, Ramanujan's mother had a vivid dream in which the family Goddess Namagiri commanded her "to stand no longer between her son and the fulfilment of his life's purpose".
Life in England
Ramanujan boarded the S.S. Nevasa on 17 March 1914, and at 10 o'clock in the morning, the ship departed from Madras. He arrived in London on 14 April, with E. H. Neville waiting for him with a car. Four days later, Neville took him to his house on Chesterton Road in Cambridge. Ramanujan immediately began his work with Littlewood and Hardy. After six weeks, Ramanujan moved out of Neville's house and took up residence on Whewell's Court, just a five-minute walk from Hardy's room. Hardy and Ramanujan began to take a look at Ramanujan's notebooks. Hardy had already received 120 theorems from Ramanujan in the first two letters, but there were many more results and theorems to be found in the notebooks. Hardy saw that some were wrong, some had already been discovered, while the rest were new breakthroughs. Ramanujan left a deep impression on Hardy and Littlewood. Littlewood commented, "I can believe that he's at least a Jacobi", while Hardy said he "can compare him only with [Leonhard] Euler or Jacobi." Ramanujan spent nearly five years in Cambridge collaborating with Hardy and Littlewood and published a part of his findings there. Hardy and Ramanujan had highly contrasting personalities. Their collaboration was a clash of different cultures, beliefs and working styles. Hardy was an atheist and an apostle of proof and mathematical rigour, whereas Ramanujan was a deeply religious man and relied very strongly on his intuition. While in England, Hardy tried his best to fill the gaps in Ramanujan's education without interrupting his spell of inspiration. Ramanujan was awarded a B.A. degree by research (this degree was later renamed PhD) in March 1916 for his work on highly composite numbers, which was published as a paper in the Journal of the London Mathematical Society. The paper was over 50 pages with different properties of such numbers proven. Hardy remarked that this was one of the most unusual papers seen in mathematical research at that time and that Ramanujan showed extraordinary ingenuity in handling it. On 6 December 1917, he was elected to the London Mathematical Society. He became a Fellow of the Royal Society in 1918, becoming the second Indian to do so, following Ardaseer Cursetjee in 1841, and he was one of the youngest Fellows in the history of the Royal Society. 
He was elected "for his investigation in Elliptic functions and the Theory of Numbers." On 13 October 1918, he became the first Indian to be elected a Fellow of Trinity College, Cambridge.
Illness and return to India
Plagued by health problems throughout his life, living in a country far away from home, and obsessively involved with his mathematics, Ramanujan's health worsened in England, perhaps exacerbated by stress and by the scarcity of vegetarian food during the First World War. He was diagnosed with tuberculosis and a severe vitamin deficiency and was confined to a sanatorium. Ramanujan returned to Kumbakonam, India in 1919 and died soon thereafter at the age of 32. His widow, S. Janaki Ammal, lived in Chennai (formerly Madras) until her death in 1994. A 1994 analysis of Ramanujan's medical records and symptoms by Dr. D.A.B. Young concluded that it was much more likely he had hepatic amoebiasis, a parasitic infection of the liver widespread in Madras, where Ramanujan had spent time. He had two episodes of dysentery before he left India. When not properly treated, dysentery can lie dormant for years and lead to hepatic amoebiasis, a difficult disease to diagnose, but once diagnosed readily cured.
Personality and spiritual life
Ramanujan has been described as a person with a somewhat shy and quiet disposition, a dignified man with pleasant manners. He lived a rather Spartan life while at Cambridge. Ramanujan's first Indian biographers describe him as rigorously orthodox. Ramanujan credited his acumen to his family Goddess, Namagiri. 
He looked to her for inspiration in his work, and claimed to dream of blood drops that symbolised her male consort, Narasimha, after which he would receive visions of scrolls of complex mathematical content unfolding before his eyes. He often said, "An equation for me has no meaning, unless it represents a thought of God." Hardy cites Ramanujan as remarking that all religions seemed equally true to him. Hardy further argued that Ramanujan's religiousness had been romanticised by Westerners and overstated—in reference to his belief, not practice—by Indian biographers. At the same time, he remarked on Ramanujan's strict observance of vegetarianism.
Mathematical achievements
In mathematics, there is a distinction between having an insight and having a proof. Ramanujan's talent suggested a plethora of formulae that could then be investigated in depth later. It is said that Ramanujan's discoveries are unusually rich and that there is often more to them than initially meets the eye. As a by-product, new directions of research were opened up. Examples of the most interesting of these formulae include the intriguing infinite series for π, one of which is given below:
\frac{1}{\pi} = \frac{2\sqrt{2}}{9801}\sum_{k=0}^{\infty}\frac{(4k)!\,(1103+26390k)}{(k!)^{4}\,396^{4k}}.
This result is based on the negative fundamental discriminant d = −4×58 = −232 with class number h(d) = 2 (note that 5×7×13×58 = 26390 and that 9801 = 99×99; 396 = 4×99) and is related to the fact that e^{\pi\sqrt{58}} is extremely close to an integer. Compare to Heegner numbers, which have class number 1 and yield similar formulae. Ramanujan's series for π converges extraordinarily rapidly (exponentially) and forms the basis of some of the fastest algorithms currently used to calculate π. Truncating the sum to the first term also gives the approximation 9801\sqrt{2}/4412 for π, which is correct to six decimal places. One of his remarkable capabilities was the rapid solution of problems. He was sharing a room with P. C. Mahalanobis who had a problem, "Imagine that you are on a street with houses marked 1 through n. There is a house in between (x) such that the sum of the house numbers to left of it equals the sum of the house numbers to its right. If n is between 50 and 500, what are n and x?" This is a bivariate problem with multiple solutions. 
Ramanujan thought about it and gave the answer with a twist: he gave a continued fraction. The unusual part was that it was the solution to the whole class of problems. Mahalanobis was astounded and asked how he did it. "It is simple. The minute I heard the problem, I knew that the answer was a continued fraction. Which continued fraction, I asked myself. Then the answer came to my mind", Ramanujan replied. His intuition also led him to derive some previously unknown identities, such as one involving the gamma function that holds for all values of θ; expanding it into series of powers and equating coefficients of like powers of θ yields further deep identities. In 1918, Hardy and Ramanujan studied the partition function p(n) extensively and gave a non-convergent asymptotic series that permits exact computation of the number of partitions of an integer. Hans Rademacher, in 1937, was able to refine their formula to find an exact convergent series solution to this problem. Ramanujan and Hardy's work in this area gave rise to a powerful new method for finding asymptotic formulae, called the circle method. He discovered mock theta functions in the last year of his life. For many years these functions were a mystery, but they are now known to be the holomorphic parts of harmonic weak Maass forms.
The Ramanujan conjecture
Although there are numerous statements that could bear the name Ramanujan conjecture, there is one statement that was very influential on later work. In particular, the connection of this conjecture with conjectures of André Weil in algebraic geometry opened up new areas of research. That Ramanujan conjecture is an assertion on the size of the tau function τ(n), which has as generating function the discriminant modular form Δ(q), a typical cusp form in the theory of modular forms. 
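Both of the computational claims above are easy to check numerically. The two sketches below are in Python and are added purely for illustration (they are not part of the original article): the first sums Ramanujan's 1/π series quoted above, the second brute-forces Mahalanobis's house problem straight from its statement.

```python
from math import factorial, sqrt, pi

def ramanujan_pi(n_terms):
    """Approximate pi from the first n_terms of Ramanujan's 1/pi series."""
    s = sum(factorial(4 * k) * (1103 + 26390 * k) /
            (factorial(k) ** 4 * 396 ** (4 * k))
            for k in range(n_terms))
    return 1.0 / (2.0 * sqrt(2.0) / 9801.0 * s)

for n in (1, 2, 3):
    approx = ramanujan_pi(n)
    # each term adds roughly eight correct digits (until float precision saturates)
    print(n, approx, abs(approx - pi))
```

```python
# Mahalanobis's house problem: houses 1..n, find x so that the numbers to the
# left of x sum to the same total as the numbers to its right, with 50 <= n <= 500.
solutions = [(n, x)
             for n in range(50, 501)
             for x in range(1, n + 1)
             if x * (x - 1) // 2 == n * (n + 1) // 2 - x * (x + 1) // 2]
print(solutions)   # [(288, 204)] is the only solution in the stated range
```

The first sketch saturates double precision after two or three terms, which is the "extraordinarily rapid" convergence mentioned above; the second confirms that (n, x) = (288, 204) is the single solution with n between 50 and 500, while Ramanujan's continued fraction captured the whole infinite family of solutions at once.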
It was finally proven in 1973, as a consequence of Pierre Deligne - See also :* Deligne conjecture* Deligne–Mumford moduli space of curves* Deligne–Mumford stacks* Deligne cohomology* Fourier–Deligne transform* Langlands–Deligne local constant- External links :... 's proof of the Weil conjectures In mathematics, the Weil conjectures were some highly-influential proposals by on the generating functions derived from counting the number of points on algebraic varieties over finite fields.... . The reduction step involved is complicated. Deligne won a Fields Medal The Fields Medal, officially known as International Medal for Outstanding Discoveries in Mathematics, is a prize awarded to two, three, or four mathematicians not over 40 years of age at each International Congress of the International Mathematical Union , a meeting that takes place every four... in 1978 for his work on Weil conjectures. Ramanujan's notebooks While still in India, Ramanujan recorded the bulk of his results in four notebooks of loose leaf The term loose leaf is used in the United States, Canada, and some other countries to describe a piece of notebook paper which is not actually fixed in a spiral notebook... paper. These results were mostly written up without any derivations. This is probably the origin of the misperception that Ramanujan was unable to prove his results and simply thought up the final result directly. Mathematician Bruce C. Berndt Bruce Carl Berndt is an American mathematician. He attended college at Albion College, graduating in 1961, where he also ran track.... , in his review of these notebooks and Ramanujan's work, says that Ramanujan most certainly was able to make the proofs of most of his results, but chose not to. This style of working may have been for several reasons. Since paper was very expensive, Ramanujan would do most of his work and perhaps his proofs on A writing slate is a piece of flat material used as a medium for writing.In the 19th century, writing slates were made of slate, which is more durable than paper and was cheap at the time when paper was expensive. It was used to allow children to practice writing... , and then transfer just the results to paper. Using a slate was common for mathematics students in India at the time. He was also quite likely to have been influenced by the style of G. S. Carr George Shoobridge Carr wrote Synopsis of Pure Mathematics . This book, first published in England in 1880, was read and studied closely by Srinivasa Aiyangar Ramanujan when he was a teenager.... 's book, which stated results without proofs. Finally, it is possible that Ramanujan considered his workings to be for his personal interest alone; and therefore only recorded the results. The first notebook has 351 pages with 16 somewhat organized chapters and some unorganized material. The second notebook has 256 pages in 21 chapters and 100 unorganised pages, with the third notebook containing 33 unorganised pages. The results in his notebooks inspired numerous papers by later mathematicians trying to prove what he had found. Hardy himself created papers exploring material from Ramanujan's work as did G. N. Watson Neville Watson was an English mathematician, a noted master in the application of complex analysis to the theory of special functions. His collaboration on the 1915 second edition of E. T. Whittaker's A Course of Modern Analysis produced the classic “Whittaker & Watson” text... , B. M. Wilson, and Bruce Berndt. 
A fourth notebook with 87 unorganised pages, the so-called "lost notebook" Srinivasa Ramanujan's lost notebook is the manuscript in which Ramanujan, the great Indian mathematician from Cambridge University, recorded the mathematical discoveries of the last year of his life. It was rediscovered by George Andrews in 1976, in a box of effects of G. N. Watson stored at the... , was rediscovered in 1976 by George Andrews. Ramanujan–Hardy number 1729 A common anecdote about Ramanujan relates to the number 1729. Hardy arrived at Ramanujan's residence in a cab numbered 1729. Hardy commented that the number 1729 seemed to be uninteresting. Ramanujan is said to have stated on the spot that it was actually a very interesting number mathematically, being the smallest natural number representable in two different ways as a sum of two cubes: Generalizations of this idea have created the notion of " taxicab number In mathematics, the nth taxicab number, typically denoted Ta or Taxicab, is defined as the smallest number that can be expressed as a sum of two positive algebraic cubes in n distinct ways. The concept was first mentioned in 1657 by Bernard Frénicle de Bessy, and was made famous in the early 20th... s". Coincidentally, 1729 is also a Carmichael Number Other mathematicians' views of Ramanujan Hardy said : "The limitations of his knowledge were as startling as its profundity. Here was a man who could work out modular equation In mathematics, a modular equation is an algebraic equation satisfied by moduli, in the sense of moduli problem. That is, given a number of functions on a moduli space, a modular equation is an equation holding between them, or in other words an identity for moduli.The most frequent use of the term... s and theorems... to orders unheard of, whose mastery of continued fractions was... beyond that of any mathematician in the world, who had found for himself the functional equation of the zeta function and the dominant terms of many of the most famous problems in the analytic theory of numbers; and yet he had never heard of a doubly periodic function or of Cauchy's theorem In mathematics, the Cauchy integral theorem in complex analysis, named after Augustin-Louis Cauchy, is an important statement about line integrals for holomorphic functions in the complex plane... , and had indeed but the vaguest idea of what a function of a complex variable was...". When asked about the methods employed by Ramanujan to arrive at his solutions, Hardy said that they were "arrived at by a process of mingled argument, intuition, and induction, of which he was entirely unable to give any coherent account." He also stated that he had "never met his equal, and can compare him only with Euler or Jacobi." Quoting K. Srinivasa Rao, "As for his place in the world of Mathematics, we quote Bruce C. Berndt: ' Paul Erdős Paul Erdős was a Hungarian mathematician. Erdős published more papers than any other mathematician in history, working with hundreds of collaborators. He worked on problems in combinatorics, graph theory, number theory, classical analysis, approximation theory, set theory, and probability theory... has passed on to us Hardy's personal ratings of mathematicians. Suppose that we rate mathematicians on the basis of pure talent on a scale from 0 to 100, Hardy gave himself a score of 25, J.E. Littlewood 30, David Hilbert David Hilbert was a German mathematician. He is recognized as one of the most influential and universal mathematicians of the 19th and early 20th centuries. 
Hilbert discovered and developed a broad range of fundamental ideas in many areas, including invariant theory and the axiomatization of... 80 and Ramanujan 100.'" In his book Scientific Edge , noted physicist Jayant Narlikar spoke of "Srinivasa Ramanujan, discovered by the Cambridge mathematician Hardy, whose great mathematical findings were beginning to be appreciated from 1915 to 1919. His achievements were to be fully understood much later, well after his untimely death in 1920. For example, his work on the highly composite numbers (numbers with a large number of factors) started a whole new line of investigations in the theory of such numbers." During his lifelong mission in educating and propagating mathematics among the school children in India, Nigeria and elsewhere, P.K. Srinivasan has continually introduced Ramanujan's mathematical Ramanujan's home state of Tamil Nadu Tamil Nadu is one of the 28 states of India. Its capital and largest city is Chennai. Tamil Nadu lies in the southernmost part of the Indian Peninsula and is bordered by the union territory of Pondicherry, and the states of Kerala, Karnataka, and Andhra Pradesh... celebrates 22 December (Ramanujan's birthday) as 'State IT Day', memorializing both the man and his achievements, as a native of Tamil Nadu. A stamp picturing Ramanujan was released by the Government of India The Government of India, officially known as the Union Government, and also known as the Central Government, was established by the Constitution of India, and is the governing authority of the union of 28 states and seven union territories, collectively called the Republic of India... in 1962 – the 75th anniversary of Ramanujan's birth – commemorating his achievements in the field of number theory. Since the Centennial year of Srinivasa Ramanujan,every year 22 Dec, is celebrated as Ramanujan Day by the Government Arts College, Kumbakonam The Government Arts College, previously known as the Government Arts College for Men, is an arts college based in the town of Kumbakonam in Tamil Nadu, India. It is one of the oldest and prestigious educational institutions in the Madras Presidency of British India.- History :The Government Arts... where he had studied and later dropped out. It is celebrated by the Department Of Mathematics by organising one-, two-, or three-day seminar by inviting eminent scholars from universities/colleges, and participants are mainly students of Mathematics, research scholars, and professors from local colleges. It has been planned to celebrate the 125-th birthday in a grand manner by inviting the foreign Eminent Mathematical scholars of this century viz., G E Andrews. and Bruce C Berndt, who are very familiar with the contributions and works of Ramanujan. Every year, in Chennai (formerly Madras), the Indian Institute of Technology (IIT) The Indian Institute of Technology Madras is an engineering and technology school in Chennai in southern India. It is recognized as an Institute of National Importance by the Government of India... , Ramanujan's work and life are celebrated on 22 December. The Department of Mathematics celebrates this day by organising a National Symposium On Mathematical Methods and Applications (NSMMA) for one day by inviting Eminent scholars from India and foreign countries. 
A prize for young mathematicians from developing countries has been created in the name of Ramanujan by the International Centre for Theoretical Physics The Abdus Salam International Centre for Theoretical Physics was founded in 1964 by Pakistani scientist and Nobel Laureate Abdus Salam after consulting with Munir Ahmad Khan. It operates under a tripartite agreement among the Italian Government, UNESCO, and International Atomic Energy Agency... (ICTP), in cooperation with the International Mathematical Union The International Mathematical Union is an international non-governmental organisation devoted to international cooperation in the field of mathematics across the world. It is a member of the International Council for Science and supports the International Congress of Mathematicians... , who nominate members of the prize committee. The Shanmugha Arts, Science, Technology & Research Academy The Shanmugha Arts, Science, Technology & Research Academy, known as SASTRA University, is a deemed university in the town of Thirumalaisamudram, Thanjavur district, Tamil Nadu, India. Undergraduate and postgraduate engineering courses are its focus.... (SASTRA), based in the state of Tamil Nadu in South India, has instituted the SASTRA Ramanujan Prize The SASTRA Ramanujan Prize, founded by Shanmugha Arts, Science, Technology & Research Academy University in Kumbakonam, India, Srinivasa Ramanujan's hometown, is awarded every year to a young mathematician judged to have done outstanding work in Ramanujan's fields of interest... of $10,000 to be given annually to a mathematician not exceeding the age of 32 for outstanding contributions in an area of mathematics influenced by Ramanujan. The age limit refers to the years Ramanujan lived, having nevertheless still achieved many accomplishments. This prize has been awarded annually since 2005, at an international conference conducted by SASTRA in Kumbakonam, Ramanujan's hometown, around Ramanujan's birthday, 22 December. In popular culture • An international feature film on Ramanujan's life was announced in 2006 as due to begin shooting in 2007. It was to be shot in Tamil Nadu state and Cambridge and be produced by an Indo-British collaboration and co-directed by Stephen Fry Stephen John Fry is an English actor, screenwriter, author, playwright, journalist, poet, comedian, television presenter and film director, and a director of Norwich City Football Club. He first came to attention in the 1981 Cambridge Footlights Revue presentation "The Cellar Tapes", which also... and Dev Benegal Dev Benegal is an Indian director and screenwriter, most known for his debut film English, August , which won the 1995 National Film Award for Best Feature Film in English.... . A play, First Class Man by Alter Ego Productions, was based on David Freeman's First Class Man. The play is centred around Ramanujan and his complex and dysfunctional relationship with Hardy. • Another film, based on the book The Man Who Knew Infinity: A Life of the Genius Ramanujan by Robert Kanigel, is being made by Edward Pressman and Matthew Brown. • In the film Good Will Hunting Good Will Hunting is a 1997 drama film directed by Gus Van Sant and starring Matt Damon, Robin Williams, Ben Affleck, Minnie Driver, and Stellan Skarsgård... , the eponymous character is compared to Ramanujan. • "Gomez", a short story by Cyril Kornbluth, describes the conflicted life of an untutored mathematical genius, clearly based on Ramanujan. 
• A Disappearing Number A Disappearing Number is a 2007 play co-written and devised by the Théâtre de Complicité company and directed and conceived by English playwright Simon McBurney. It was inspired by the collaboration during the 1910s between two of the most remarkable pure mathematicians of the twentieth century,... is a recent British stage production by the company Complicite that explores the relationship between Hardy and Ramanujan. • The character Amita Ramanujan Amita Ramanujan is a fictional character from the TV series Numb3rs. Over the course of the series, she has become a professor at CalSci and has since become romantically involved with her former thesis advisor, Dr. Charlie Eppes . First introduced in "Pilot", the character of Amita has received... on the television show Numb3rs Numb3rs is an American television drama which premiered on CBS on January 23, 2005, and concluded on March 12, 2010. The series was created by Nicolas Falacci and Cheryl Heuton, and follows FBI Special Agent Don Eppes and his mathematical genius brother, Charlie Eppes , who helps Don solve crimes... is named after Ramanujan. • The novel The Indian Clerk The Indian Clerk is a novel by David Leavitt, published in 2007. It is inspired by the career of the self-taught mathematical genius Srinivasa Ramanujan, as seen mainly through the eyes of his mentor and collaborator G.H. Hardy, a British mathematics professor at Cambridge University... by David Leavitt David Leavitt is an American novelist.-Biography:Born in Pittsburgh, Pennsylvania, Leavitt is a graduate of Yale University. and a professor at the University of Florida... explores in fiction the events following Ramanujan's letter to Hardy. • On 22 March 1988, the PBS Series Nova aired a documentary about Ramanujan, "The Man Who Loved Numbers" (Season 15, Episode 9). • On 16 October 2011 it is announced that Roger Spottiswoode, best known for his James Bond film Tomorrow Never Dies, is working on a movie on mathematical genius Srinivasa Ramanujan starring Rang De Basanti actor Siddharth. Titled The First Class Man, the film's scripting has been completed and shooting is being planned from 2012. See also • Ramanujan–Petersson conjecture • Landau–Ramanujan constant • Ramanujan–Soldner constant • Ramanujan summation Ramanujan summation is a technique invented by the mathematician Srinivasa Ramanujan for assigning a sum to infinite divergent series. Although the Ramanujan summation of a divergent series is not a sum in the traditional sense, it has properties which make it mathematically useful in the study of... • Ramanujan theta function In mathematics, particularly q-analog theory, the Ramanujan theta function generalizes the form of the Jacobi theta functions, while capturing their general properties. In particular, the Jacobi triple product takes on a particularly elegant form when written in terms of the Ramanujan theta... • Ramanujan graph A Ramanujan graph, named after Srinivasa Ramanujan, is a regular graph whose spectral gap is almost as large as possible . Such graphs are excellent spectral expanders.... • Ramanujan's tau function • Rogers–Ramanujan identities • Ramanujan prime In mathematics, a Ramanujan prime is a prime number that satisfies a result proven by Srinivasa Ramanujan relating to the prime-counting function.-Origins and definition:... • Ramanujan's constant Selected publications by Ramanujan This book was originally published in 1927 after Ramanujan's death. 
It contains the 37 papers published in professional journals by Ramanujan during his lifetime. The third re-print contains additional commentary by Bruce C. Berndt. These books contain photo copies of the original notebooks as written by Ramanujan. This book contains photo copies of the pages of the "Lost Notebook". Selected publications about Ramanujan and his work • Berndt, Bruce C. "An Overview of Ramanujan's Notebooks." Charlemagne and His Heritage: 1200 Years of Civilization and Science in Europe. Ed. P. L. Butzer, W. Oberschelp, and H. Th. Jongen. Turnhout, Belgium: Brepols, 1998. 119–146. • Berndt, Bruce C., and George E. Andrews. Ramanujan's Lost Notebook, Part I. New York: Springer, 2005. ISBN 0-387-25529-X. • Berndt, Bruce C., and George E. Andrews. Ramanujan's Lost Notebook, Part II. New York: Springer, 2008. ISBN 978-0-387-77765-8 • Berndt, Bruce C., and Robert A. Rankin. Ramanujan: Letters and Commentary. Vol. 9. Providence, Rhode Island: American Mathematical Society, 1995. ISBN 0-8218-0287-9. • Berndt, Bruce C., and Robert A. Rankin. Ramanujan: Essays and Surveys. Vol. 22. Providence, Rhode Island: American Mathematical Society, 2001. ISBN 0-8218-2624-7. • Berndt, Bruce C. Number Theory in the Spirit of Ramanujan. Providence, Rhode Island: American Mathematical Society The American Mathematical Society is an association of professional mathematicians dedicated to the interests of mathematical research and scholarship, which it does with various publications and conferences as well as annual monetary awards and prizes to mathematicians.The society is one of the... , 2006. ISBN 0-8218-4178-5. • Berndt, Bruce C. Ramanujan's Notebooks, Part I. New York: Springer, 1985. ISBN 0-387-96110-0. • Berndt, Bruce C. Ramanujan's Notebooks, Part II. New York: Springer, 1999. ISBN 0-387-96794-X. • Berndt, Bruce C. Ramanujan's Notebooks, Part III. New York: Springer, 2004. ISBN 0-387-97503-9. • Berndt, Bruce C. Ramanujan's Notebooks, Part IV. New York: Springer, 1993. ISBN 0-387-94109-6. • Berndt, Bruce C. Ramanujan's Notebooks, Part V. New York: Springer, 2005. ISBN 0-387-94941-0. • Hardy, G. H. Ramanujan. New York, Chelsea Pub. Co., 1978. ISBN 0-8284-0136-5 • Hardy, G. H. Ramanujan: Twelve Lectures on Subjects Suggested by His Life and Work. Providence, Rhode Island: American Mathematical Society, 1999. ISBN 0-8218-2023-0. • Henderson, Harry. Modern Mathematicians. New York: Facts on File Inc., 1995. ISBN 0-8160-3235-1. • Kanigel, Robert. The Man Who Knew Infinity: a Life of the Genius Ramanujan. New York: Charles Scribner's Sons Charles Scribner's Sons, or simply Scribner, is an American publisher based in New York City, known for publishing a number of American authors including Ernest Hemingway, F. Scott Fitzgerald, Kurt Vonnegut, Stephen King, Robert A. Heinlein, Thomas Wolfe, George Santayana, John Clellon... , 1991. ISBN 0-684-19259-4. • Kolata, Gina Gina Bari Kolata is a science journalist for The New York Times. Her sister was environmental activist Judi Bari, and her mother was mathematician Ruth Aaronson Bari.... . "Remembering a 'Magical Genius'", Science, New Series, Vol. 236, No. 4808 (19 Jun. 1987), pp. 1519–1521, American Association for the Advancement of Science. • Leavitt, David David Leavitt is an American novelist.-Biography:Born in Pittsburgh, Pennsylvania, Leavitt is a graduate of Yale University. and a professor at the University of Florida... . The Indian Clerk. London: Bloomsbury, 2007. ISBN 978-0-7475-9370-6 (paperback). • Narlikar, Jayant V. 
Scientific Edge: the Indian Scientist From Vedic to Modern Times. New Delhi, India: Penguin Books, 2003. ISBN 0-14-303028-0. • T. M. Sankaran. "Srinivasa Ramanujan: Ganitha Lokathile Mahaprathibha" (in Malayalam). Kochi: Kerala Sastra Sahithya Parishath, 2005.
External links
Media links
• P.B.S. Nova Series: "The Man Who Loved Numbers" (1988)
{"url":"http://www.absoluteastronomy.com/topics/Srinivasa_Ramanujan","timestamp":"2014-04-19T03:15:05Z","content_type":null,"content_length":"135832","record_id":"<urn:uuid:97353ca9-b9b4-43e0-ae43-0b214a8380e9>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: A football is kicked at a 50 degree angle to the horizontal and travels a horizontal distance of 20 m before hitting the ground. What is the initial speed? Please tell me how you would do this. (Wind resistance is not a factor in this question, as we haven't done that yet.)
Best response: Let u be the initial velocity and divide it into two components, x and y. The angle is 50 degrees, so the component of u in the x direction is u cos50 and in the y direction is u sin50. The time of flight is t = (2 u sin50)/g, and the horizontal distance is s = u cos50 * t (the acceleration in the x direction is zero). Putting the value of t into the previous equation gives s = u cos50 * (2 u sin50)/g, so 20 g = 2 u^2 cos50 sin50. From here you can get the value of u, which is the initial speed.
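A quick numerical check of the answer above, as a short Python sketch (assuming g = 9.8 m/s^2, which the answer left symbolic, and using the range relation R = u^2 sin(2*theta)/g that follows from the equations given):

```python
from math import radians, sin, sqrt

theta = radians(50)   # launch angle
R = 20.0              # horizontal range in metres
g = 9.8               # assumed gravitational acceleration, m/s^2

# From 20*g = 2*u^2*cos(50)*sin(50) = u^2*sin(100 degrees):
u = sqrt(R * g / sin(2 * theta))
print(f"initial speed u = {u:.2f} m/s")   # about 14.1 m/s
```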
{"url":"http://openstudy.com/updates/4e6eac530b8beaebb297c12a","timestamp":"2014-04-21T02:40:45Z","content_type":null,"content_length":"30562","record_id":"<urn:uuid:8009e8de-6afc-419b-9406-89b24e016952>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: June 2013 [00002] [Date Index] [Thread Index] [Author Index] Mathematica numerics and... Re: Applying Mathematica to practical problems • To: mathgroup at smc.vnet.net • Subject: [mg130987] Mathematica numerics and... Re: Applying Mathematica to practical problems • From: Daniel Lichtblau <danl at wolfram.com> • Date: Sat, 1 Jun 2013 06:26:48 -0400 (EDT) • Delivered-to: l-mathgroup@mail-archive0.wolfram.com • Delivered-to: l-mathgroup@wolfram.com • Delivered-to: mathgroup-outx@smc.vnet.net • Delivered-to: mathgroup-newsendx@smc.vnet.net • References: <kmngb2$3rv$1@smc.vnet.net> <20130519095011.606CD6A14@smc.vnet.net> Others have commented on most issues raised in this subthread. I wanted to touch on just a few details. On May 31, 2:15 am, Richard Fateman <fate... at cs.berkeley.edu> wrote: > On 5/30/2013 3:09 AM, John Doty wrote: > > Changing the topic here. > > On Tuesday, May 28, 2013 1:49:00 AM UTC-6, Richard Fateman wrote: > >> Learning Mathematica (only) exposes a student to a singularly > >> erroneous model of computation, I assume you (as ever) refer to significance arithmetic. If so, while it is many things, "erroneous" is not one of them. It operates as designed. You have expressed a couple of reasons why you find that design not to your liking. The two that most come to mind: bad behavior in iterations that "should" converge, and fuzzy equality that you find to be unintuitive 9mostly this arises at low precision). > > A personal, subjective judgement. However, I would agree that > > exposing the student to *any* single model of computation, to the > > exclusion of others, is destructive. > Still, there are ones that are well-recognized as standard, a common > basis for software libraries, shared development environments, etc. > Others have been found lacking by refereed published articles > and have failed to gain adherents outside the originators. (Distrust > of significance arithmetic ala Mathematica is not a personal > subjective opinion only.). No, but nor is it one that appears to be widely shared. As best I can tell, most people in the field either do not write about it, or else make the observation, correctly, that it is simply a first-order approximation to interval arithmetic. As such, it has most of the qualities of interval arithmetic. Among these are the issue that results with "large" intervals are sometimes not very useful. Knowing that one ended up with a large interval, can be useful: it tells one that either the problem is not well conditioned, or the method of solving it was not (and specifically, it may have been a bad idea to use intervals to assess error bounds). Significance arithmetic brings a few advantages. One is that computations are generally faster than their interval counterparts. Another is that it is "significantly" easier to extend to functions for which interval methods are out of reach (reason: one can compute derivatives but cannot always find extrema for every function on every segment). A third is that it is much easier to extend to complex values. A drawback, relative to interval arithmetic, is that as a first-order approximation, in terms of error propagation, significance arithmetic breaks down at low precision where higher-order terms can cause the estimates to be off. I will add that, at that point, intervals are going to give results that are also not terribly useful. I'm not sure what are the refereed articles referred to above. I suspect they do not disagree in substantial ways with what I wrote though. 
That is to say, error estimates may be too conservative, and at low precision results may not be useful. Fixed precision has its own advantages and disadvantages in terms of speed, error estimation, and the like. We certainly use it in places, even if significance arithmetic is the default behavior of bignum arithmetic in top-level Mathematic. For implementation purposes we use both modes. > > Mathematica applied to real problems is pretty good here. > Maybe, but is "pretty good" the goal, and the occasional identified > errors be ignored? We do try to fix many bugs that are brought to our attention. We have a better track record in some areas than others. I have no reason to believe that numerics issues have received short shrift though. Daniel Lichtblau Wolfram Research
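To make the "first-order approximation to interval arithmetic" point concrete, here is a toy illustration in Python (not Mathematica, and not how Mathematica implements significance arithmetic internally; just the arithmetic idea, with f(x) = x^2 as the example function):

```python
def first_order_band(x, dx, f, fprime):
    """Propagate error to first order, dy ~ |f'(x)| * dx, as significance
    arithmetic does."""
    y = f(x)
    spread = abs(fprime(x)) * dx
    return (y - spread, y + spread)

def interval_square(lo, hi):
    """Exact image of [lo, hi] under squaring, as interval arithmetic would give."""
    candidates = [lo * lo, hi * hi]
    if lo <= 0.0 <= hi:
        candidates.append(0.0)
    return (min(candidates), max(candidates))

x, dx = 0.01, 0.02   # "low precision": the uncertainty exceeds the value itself
print(first_order_band(x, dx, lambda t: t * t, lambda t: 2 * t))  # roughly (-0.0003, 0.0005)
print(interval_square(x - dx, x + dx))                            # roughly (0.0, 0.0009)
```

When dx is small relative to x the two agree closely; in the low-precision regime shown, the first-order band both includes impossible negative values and misses part of the true range, which is exactly the breakdown of the linear estimate described above.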
{"url":"http://forums.wolfram.com/mathgroup/archive/2013/Jun/msg00002.html","timestamp":"2014-04-18T00:28:37Z","content_type":null,"content_length":"29493","record_id":"<urn:uuid:02d2282a-58f3-4cae-ac07-cf66688bd545>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
Application of Tomas-Stein restriction theorem for Strichartz estimates
The initial value problem for the one dimensional Schrödinger equation is $$iu_{t}+u_{xx}=0,$$ $$u(x, 0)= f(x),$$ where $u:\mathbb R \times \mathbb R \rightarrow \mathbb C$ is a complex valued function. Lemma. For any $f\in S$, where $S$ is the space of Schwartz class functions on $\mathbb R$, there is a unique solution $u\in C^{\infty} (\mathbb R; S)$ of the above IVP. The spatial Fourier transform of the solution is given by $\hat{u}(k, t)= e^{-it |k|^{2}}\hat{f}(k)$ and $$u(x, t)= \frac {1}{(4\pi i t)^{\frac {1}{2}}}\int_{\mathbb R} e^{\frac {-i|x-y|^{2}}{4t}} f(y)\, dy .$$ My question is: (a) What is the Tomas-Stein restriction theorem (inequality) for the paraboloid? (b) Using this, how does one conclude that $$\| u \|_{L^{6} (\mathbb R \times \mathbb R)}\leq C\| f\| _{L^{2}(\mathbb R)}?$$ Tags: ap.analysis-of-pdes, probability-distributions, harmonic-analysis
{"url":"http://mathoverflow.net/questions/139060/application-of-toms-stein-restriction-theorem-for-strichartz-estimates","timestamp":"2014-04-18T03:04:07Z","content_type":null,"content_length":"52062","record_id":"<urn:uuid:aa1e8884-7a72-45ca-9e30-4d1f2a492c7f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Random House, Inc. Academic Resources | Mathematics Mathematics Made Simple Sixth Edition Written by Thomas Cusick Format: Trade Paperback, 288 pages Three Rivers Press On Sale: August 19, 2003 978-0-7679-1538-0 (0-7679-1538-0) Featuring several overviews of a multitude of mathematical concepts, as well as detailed learning plans, Mathematics Made Simple presents the necessary information in clear, concise lessons that make math accessible. Easy-to-use features include: * complete coverage of fractions, decimals, percents, algebra, linear equations, graphs, probability, geometry, and trigonometry; * step-by-step solutions to every... Read more >
{"url":"http://www.randomhouse.com/acmart/subjects.pperl?cat=495601884","timestamp":"2014-04-19T09:49:57Z","content_type":null,"content_length":"132976","record_id":"<urn:uuid:bf961cca-ac72-4356-a1d0-7ab45ed9a47f>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
Which year was jonathan rhys-meyers born in? You asked: Which year was jonathan rhys-meyers born in? Say hello to Evi Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site. Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
{"url":"http://www.evi.com/q/which_year_was_jonathan_rhys-meyers_born_in","timestamp":"2014-04-19T22:56:05Z","content_type":null,"content_length":"52225","record_id":"<urn:uuid:968be2b1-a719-48a9-94bd-ed563a026de9>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
Measurement unit conversion: minute ›› Measurement unit: minute Full name: minute Plural form: minutes Symbol: min Alternate spelling: mins Category type: time Scale factor: 60 ›› Similar units ›› SI unit: second The SI base unit for time is the second. 1 second is equal to 0.0166666666667 minute. Valid units must be of the time type. You can use this form to select from known units: I'm feeling lucky, show me some random units ›› Definition: Minute A minute is: * a unit of time equal to 1/60th of an hour and to 60 seconds. (Some rare minutes have 59 or 61 seconds; see leap second.) ›› Sample conversions: minute minute to century minute to day minute to millennium minute to millisecond minute to second minute to decade minute to shake minute to year minute to week minute to nanosecond
{"url":"http://www.convertunits.com/info/minute","timestamp":"2014-04-20T03:12:07Z","content_type":null,"content_length":"19699","record_id":"<urn:uuid:c2c72da8-b8ea-447c-9013-b1fcc9d73b2b>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
Back to basics 7 - Equal NaNs MATLAB and Simulink resources for Arduino, LEGO, and Raspberry Pi Covering some basic topics I haven't seen elsewhere on Cody. Given 2 input variables, output true if they are equal, false otherwise. Note that the function should assume NaN's are equal to each other.
{"url":"http://www.mathworks.com/matlabcentral/cody/problems/350-back-to-basics-7-equal-nans","timestamp":"2014-04-19T00:33:05Z","content_type":null,"content_length":"48776","record_id":"<urn:uuid:7dc96d8a-441c-4a60-a3f7-338d274e755b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
I have roads stored as an array of rectangles, How to detect collision between car and the roads up vote 0 down vote favorite The roads don't have any elevation, the y coordinate is 0. The car is 3d, but for collision detection it can be taken as 2d rectangle. My structure : struct rectangle // 4 coordinates of the rectangle float x1_left, y1_left; float x1_right, y1_right; float x2_left, y2_left; float x2_right, y2_right; double thetaSlope; I have array of all these rectangles that make up the road, initially car is inside the first rectangle. I searched collision detection and found - simple 2d collision detection between 2 rectangles, but how to determine if my car lies in a particular rectangle, also car should be able to move from one rectangle to other easily, but not come out of the sides of the rectangle. I am looking for an fairly simple solution. c++ collision-detection Have a look at quadtrees. Also a simpler, brute force, less efficient way is to implement point-in-rectangle checks, then build it up to rectangle-in-rectangle checks. – Peter Wood Mar 14 '13 at add comment 1 Answer active oldest votes Create Point2d class and define rectangle as container of points. For example: struct rectangle Point2d p1; Point2d p2; Point2d p3; Point2d p4; double thetaSlope; up vote 0 down vote accepted struct rectangle Point2d points[4]; double thetaSlope; With this abstraction code is simpler. You should mark sides of rectangles as penetrable or not penetrable. In collision detection take part only not penetrable sides. I think using polygons instead of rectangles is simpler solution. You can define whole map as one big polygon and do not treat movement from one rectangle to another. add comment Not the answer you're looking for? Browse other questions tagged c++ collision-detection or ask your own question.
{"url":"http://stackoverflow.com/questions/15402697/i-have-roads-stored-as-an-array-of-rectangles-how-to-detect-collision-between-c","timestamp":"2014-04-18T22:21:04Z","content_type":null,"content_length":"65083","record_id":"<urn:uuid:fbf601f8-f4c1-45d0-9cf3-f3520fd016d4>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
Polynomial Long Division 09-02-2007 #1 Registered User Join Date Aug 2007 Polynomial Long Division Hello, I'm trying to write a program for doing long division with polynomials, and I am having a very difficult time trying to get the program to (properly) determine the coefficients of the quotient's terms (but it is determining the degree of the quotient properly, a plus). Using this program, an example would be a polynomial to be divided of 5x^5+9x^3+-18x (would be put in as (and manipulated) as 5x^5+0x^4+9x^3+0x^1-18x+0), with a divisor polynomial of x^3+3x (NOTE: the caret (^) is to raise to the power). The quotient should be 5x^2-6 (displayed in the program as 5x^ 2+0x-6), but is instead displaying 0x^2+00x+0. Here is the problem code: {//This section is (supposed to) do the actual math. for (; wsh_counter<=wsh_count_to; wsh_counter++, dr_counter++){ if (dr[dr_counter]!=0){ else if (dr[dr_counter]==0){ if (wsh_counter==wsh_count_to&&wsh_count_to<=(dd_degree+1)){ }//This is the end of the section meant to do the math. However, if need be, here is the whole program's code: #include <iostream> int dd_degree, dr_degree, qt_degree; //Degree of divided, divider, and quotient. int counter, counter_a, counter_b, dd_counter, dr_counter, qt_counter; //Various counters that might be needed. using namespace std; int main(){ cout<<"Welcome to the easy-to-use polynomial long-division program."; cout<<"\nTo begin, please enter the degree of the polynomial to divide: "; float dd[dd_degree+1]; //The array for the divided polynomial's coefficients. //This section will take the input for the polynomial to be divided's coefficients. for (;counter<=dd_degree;counter++){ if (counter<dd_degree-1){ cout<<"\nPlease enter the coefficient of the x^"<<dd_degree-counter<<" term: "; if (counter==dd_degree-1){ cout<<"\nPlease enter the coefficient of the x term: "; if (counter==dd_degree){ cout<<"\nPlease enter the constant: "; cout<<"\nTo continue, please enter the degree of the polynomial to divide by: "; float dr[dr_degree+1]; //The array for the divsor's coefficients. //Same as previous for loop, but for the divisor's coefficients. for (;counter<=dr_degree;counter++){ if (counter<dr_degree-1){ cout<<"\nPlease enter the coefficient of the x^"<<dr_degree-counter<<" term: "; if (counter==dr_degree-1){ cout<<"\nPlease enter the coefficient of the x term: "; if (counter==dr_degree){ cout<<"\nPlease enter the constant: "; qt_degree=dd_degree-dr_degree; //The degree of the quotient. float qt[qt_degree+1]; //Array for the coefficients of the quotient. int wsh; //Work Space Horizontal. int wsv; //Work Space Vertical. //The vertical and horizontal size of the Work Space Array are equal to the divided's degree. float ws[wsh][wsv]; //This is the array for the Work Space coefficients. int wsh_counter, wsv_counter, wsh_count_to, wsv_count_to, wsh_count_from; //More counters and counter controllers, yay! wsh_counter=0; //This is to count the horizontal position. wsv_counter=0; //This is to count the vertical position. wsh_count_to=dr_degree; //This is the limit on the loop for the horizontal counter. wsv_count_to=dd_degree; //This is the limit on the loop for the vertical counter. {//This section is (supposed to) do the actual math. for (; wsh_counter<=wsh_count_to; wsh_counter++, dr_counter++){ if (dr[dr_counter]!=0){ else if (dr[dr_counter]==0){ if (wsh_counter==wsh_count_to&&wsh_count_to<=(dd_degree+1)){ }//This is the end of the section meant to do the math. //This section is to display the quotient. 
cout<<"\nThe quotient is: \n"; for (;counter_a<=qt_degree;counter_a++){ if (counter_a<qt_degree-1){ if (counter_a==qt_degree-1){ else { //This is the end of the section to display the quotient. //This section is to ask the user to either restart or exit the the program. cout<<"\n\nWould you like to restart the program?\n1. Yes\n2. No\n\nChoice: "; int rerun; if (rerun==1){ cout<<"\nRestarting... please press enter."; The problem, as mentioned, lies in the first code section; the problem being how would I write a system that replicates the human process of long division? Last edited by NESevolved; 09-02-2007 at 04:19 PM. Reason: Forgot Something First of all, without examining the code too closely (although I notice that you call main() recursively, which I believe is forbidden in standard C++), do you actually have to do this yourself? There are a number of symbolic algebra packages that will do this, and a lot more, for you. 09-02-2007 #2 Registered User Join Date Sep 2006
{"url":"http://cboard.cprogramming.com/cplusplus-programming/93187-polynomial-long-division.html","timestamp":"2014-04-21T13:25:18Z","content_type":null,"content_length":"47716","record_id":"<urn:uuid:b8f0c500-53a6-498b-84c1-cfebaa275353>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Fremont, CA Trigonometry Tutor Find a Fremont, CA Trigonometry Tutor I graduated from the University of California, San Diego in 2009 with a bachelor's degree in chemical engineering. I have a lot of experience with algebra, geometry, trigonometry, calculus, physics, and chemistry from high school to upper division college level. When I was in high school and colle... 12 Subjects: including trigonometry, chemistry, calculus, physics I have received my B.S. in Pharmaceutical Chemistry from UC Davis. I have excellent experience in tutoring students from high school to the college level. I am available to tutor algebra 1&2, biology, general chemistry, organic chemistry, and trigonometry. 10 Subjects: including trigonometry, chemistry, algebra 2, biology ...Especially in Math (which is considered the most difficult Math exam ever since its inception) , I scored 113 (out of 120), while the average score in City of Beijing is 17. Majored in Applied Math, I got B.S. and M.S. from Fudan University. I taught college math for three years in China, cover... 14 Subjects: including trigonometry, calculus, statistics, geometry ...I have tutored and helped college students since the 80s and substantially in the last 5 years, on all HS Math classes (many Honors). I have a Math PhD (and MS in EE, CS) from Purdue and taught various courses before and after graduation, and regularly at San Jose State U (Calculus III for Fall 2... 15 Subjects: including trigonometry, calculus, GRE, algebra 1 ...Depending on your learning skills I employ different tactics for you to get a clear understanding of the material. The next part of the tutor session is devoted to practicing and solving problems. We go step by step to solve each problem however, I do not give you the answer I guide you to the answer. 15 Subjects: including trigonometry, chemistry, calculus, geometry
{"url":"http://www.purplemath.com/fremont_ca_trigonometry_tutors.php","timestamp":"2014-04-17T00:49:24Z","content_type":null,"content_length":"24262","record_id":"<urn:uuid:709c5d6a-fc2b-438a-ada0-5c5c159a5dd9>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
Find variables common to multiple data sets February 6, 2013 By Rick Wicklin (This article was originally published at The DO Loop, and syndicated at StatsBlogs.) Last week the SAS Training Post blog posted a short article on an easy way to find variables in common to two data sets. The article used PROC CONTENTS (with the SHORT option) to print out the names of variables in SAS data sets so that you can visually determine whether the data sets have any variables in common. The article also mentioned using the COMPARE procedure or writing a PROC SQL query that interrogates DICTIONARY tables. But what if you want to find variable names that are common to many data sets? The PROC SQL approach is a programming solution, so it might be up to the challenge. A quick internet search reveals one way to use PROC SQL to find common variables in two data sets (see p. 4 of the linked paper). I am not a PROC SQL expert, but the approach in that paper seems difficult to generalize to the case of multiple data sets. Because I like the SAS/IML language, this article shows how to find all variables that are common to multiple data sets. The following statements define six SAS data sets: data D1 D2; A=1; b=2; C=3; D=4; E=5; F=6; g=7; h=8; I=9; J=10; data D3 D4; j=1; f=2; h=3; a=4; N=7; L=6; c=7; data D5; J=1; D=2; A=3; g=4; h=5; P=6; q=7; data D6; C=1; M=2; F=3; a=4; j=5; H=6; B=7; R=8; K=9; I would have a hard time visually determining which variables are common to all of the data sets, so I'm going to write a program. I will use two SAS/IML functions to help: • The CONTENTS function returns a sorted list of the variables in a SAS data set. Use the UPCASE function in Base SAS to get the names in uppercase format so that you can perform case-insensitive • The XSECT function returns the intersection between two or more arrays of values. With those two functions, you can obtain the variables names that are common to the data sets D1–D6, as follows: proc iml; DSNames = "D1":"D6"; InCommon = upcase(contents(DSNames[1])); /* get all vars in D1 */ do i = 2 to ncol(DSNames); /* loop over data sets */ varNames = upcase(contents(DSNames[i])); /* get variable names */ InCommon = xsect(InCommon, varNames); /* intersect with previous */ print InCommon; The variables that are common to all the SAS data sets are A, H, and J. If you want to generalize the problem even more, you can use the SAS/IML DATASETS function to get the names of all data sets in a library. For example, you could use DSNames = T(datasets("work")) instead of hard-coding the data set names in this example. I invite you to submit your own solution in the comments. Please comment on the article here: The DO Loop
{"url":"http://www.statsblogs.com/2013/02/06/find-variables-common-to-multiple-data-sets/","timestamp":"2014-04-19T22:05:55Z","content_type":null,"content_length":"42351","record_id":"<urn:uuid:80b87716-364b-4225-93fe-6f3c840acefd>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
Infinite Limits

Horizontal asymptotes are the limits as x approaches infinity. If the limit of f(x) as x approaches infinity is a constant c, then the horizontal asymptote is y = c. The same applies for negative infinity.

Your answers are correct. Whenever you have a polynomial division where a*x^n and b*x^n are the highest terms of the numerator and the denominator respectively, a horizontal asymptote exists at y = a/b. Note that both n's have to be the same. In other words, this does not apply to 5x^3 / 2x^2.

Edited to add: if you have a*x^n and b*x^m as the highest terms in the polynomial division, then:
if n > m, the function goes to infinity;
if n < m, the function has a horizontal asymptote at y = 0;
if n = m, the function has a horizontal asymptote at y = a/b.

Last edited by Ricky (2006-01-03 10:22:36)
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
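A quick worked check of the three cases (the specific rational functions below are illustrative examples added here, not part of the original thread):

$$\lim_{x\to\infty}\frac{5x^2+1}{2x^2-x}=\frac{5}{2}, \qquad n=m:\ \text{horizontal asymptote } y=\tfrac{5}{2};$$
$$\lim_{x\to\infty}\frac{5x^3}{2x^2}=\infty, \qquad n>m:\ \text{no horizontal asymptote};$$
$$\lim_{x\to\infty}\frac{2x}{x^2+1}=0, \qquad n<m:\ \text{horizontal asymptote } y=0.$$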
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=22306","timestamp":"2014-04-19T05:14:17Z","content_type":null,"content_length":"14031","record_id":"<urn:uuid:07bbf357-cf66-4ba4-8d30-5b3a78d167df>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
7th International Conference on Mathematical Methods in Physics

The 7th International Conference on Mathematical Methods in Physics took place at the Centro Brasileiro de Pesquisas Físicas (CBPF/MCT), Rio de Janeiro - RJ, Brazil, from 16 to 20 April 2012, and was jointly organized by the following institutions: Centro Brasileiro de Pesquisas Físicas (CBPF/MCT), The Abdus Salam International Centre for Theoretical Physics (ICTP, Italy), Instituto Nacional de Matemática Pura e Aplicada (IMPA, Brazil), The Academy of Sciences for the Developing World (TWAS, Italy) and The Scuola Internazionale di Studi Avanzati (SISSA, Italy).

The Organizing Committee was composed of: E. ABDALLA (USP, Brazil), L. BONORA (SISSA, Italy), H. BURSZTYN (IMPA, Brazil), A. A. BYTSENKO (UEL, Brazil), B. DUBROVIN (SISSA, Italy), M.E.X. GUIMARÃES (UFF, Brazil), J.A. HELAYËL-NETO (CBPF, Brazil).

Advisory Committee: A. V. ASHTEKAR (Penn State University, U.S.A.), V. M. BUCHSTABER (Steklov Mathematical Institute, Russia), L. D. FADDEEV (St. Petersburg Dept. of Steklov Mathematical Institute, Russia), I. M. KRICHEVER (Columbia Univ., U.S.A. / Landau Institute of Theoretical Physics, Russia), S. P. NOVIKOV (Univ. of Maryland, U.S.A. / Landau Institute of Theoretical Physics, Russia), J. PALIS (IMPA, Brazil), A. QADIR (National University of Sciences and Technology, Pakistan), F. QUEVEDO (ICTP, Italy), S. RANDJBAR-DAEMI (ICTP, Italy), G. THOMPSON (ICTP, Italy), C. VAFA (Harvard University, U.S.A.).

The Main Goal: The aim of the Conference was to present the latest advances in Mathematical Methods of Physics to researchers, young scientists and students of Latin America in general, and Brazil in particular, in the areas of High Energy Physics, Cosmology, Mathematical Physics and Applied Mathematics. The main goal was to promote an updating of knowledge and to facilitate interaction between mathematicians and theoretical physicists through plenary sessions and seminars. This Conference can be considered part of a network activity, a special effort to encourage the formation of regional and international scientific networks and professional societies. The ultimate ambition of the coordinated activity is to provide top-level postgraduate (PhD) training in an environment of high-standard research in various fields of Physics, on the model of the School for Advanced Studies (SISSA).

The Program: The Conference program was designed to provide ample time for debates and discussions among the participants and included plenary talks (1 hour each) and a number of short seminars (30 minutes each) given by Brazilian and foreign researchers. There were also discussion sessions involving the participants, encouraging them to open debates on the themes of the Conference, as well as a "Round Table" (on prospects for mathematics in South America) with discussions on possible future investigations in theoretical and mathematical physics.

Closing Sessions: Prof. R. C. SHELLARD (CBPF, Vice Director), Prof. G. MARTINELLI (SISSA, Italy), Prof. L. BONORA (SISSA, Italy), Prof. A. A. BYTSENKO (UEL, Local Chairman), R. COQUEREAUX (CPT, France).

The organizers would like to thank the participants, who made this Conference a rare opportunity for the younger and experienced researchers present at this event to acquire new and precious knowledge. Thanks are also due to the contributors, whose articles will surely make these Proceedings a living experience.
Sponsors: This event has been co-sponsored by ICTP, TWAS, SISSA, CBPF, IMPA, FAPERJ and FAPESP.
The Proceedings were edited by L. BONORA, A. A. BYTSENKO, M. E. X. GUIMARÃES and J. A. HELAYËL-NETO.
{"url":"http://pos.sissa.it/cgi-bin/reader/conf.cgi?confid=175","timestamp":"2014-04-20T01:42:19Z","content_type":null,"content_length":"14374","record_id":"<urn:uuid:8897ae2f-619e-4e32-a41e-0a3a119aa4a3>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
Fubini-Study Metric and Einstein constant

Hi all, it is well known that the complex projective space with the Fubini-Study metric is Einstein, but what is the explicit value, i.e. for which $\mu$ does $Ric=\mu g$ hold? Moreover, I would like to know how to calculate the sectional curvature explicitly, because I would like to calculate the number $\sqrt{\sum K_{ij}}$ explicitly for a given orthonormal basis. ($K_{ij}$ is the sectional curvature of the plane spanned by $e_i$ and $e_j$.)

Tags: dg.differential-geometry, riemannian-geometry

Isn't this available in many different places, including Griffiths-Harris and wikipedia? – Deane Yang Feb 15 '12 at 11:58
$$\mu=2\cdot n+3$$ ($\mathbb C\mathrm P^n$ is isometric to the factor $\mathbb S^{2n+1}/\mathbb S^1$. You can use O'Neill's formula to calculate sectional curvature; it is $=4$ in complex directions and $=1$ in real directions.) – Anton Petrunin Feb 15 '12 at 14:29

1 Answer (accepted):

As suggested by Anton, you can use the O'Neill formulas in the Riemannian submersion $\mathbb C^{n+1}\to \mathbb{C} P^n$ that defines the Fubini-Study metric on $\mathbb C P^n$. This gives the following: suppose $X,Y$ are orthonormal tangent vectors at some point in $\mathbb C P^n$, and denote by $\overline X,\overline Y$ their horizontal lifts to $\mathbb C^{n+1}$ (which are also orthonormal). Then $$sec(X,Y)=1+\tfrac34\|[\overline X,\overline Y]^v\|^2=1+3|\overline g(\overline Y,J\overline X)|^2,$$ where $\overline g$ is the canonical Euclidean metric on $\mathbb C^{n+1}$, $()^v$ denotes the vertical component with respect to the submersion, and $J$ is the complex structure, i.e., multiplication by $\sqrt{-1}$. Note that this immediately implies that $\mathbb CP^n$ is $\tfrac14$-pinched.

With the above formula, you can easily compute the Einstein constant of $\mathbb C P^n$ to be equal to $\mu=2n+2$; see e.g. Petersen's book "Riemannian Geometry", chapter 3.

Another possible way of doing it is using that this is a Kähler manifold. The Fubini-Study metric can be thought of as $\omega_{FS}=\sqrt{-1}\partial\overline\partial\log\|z\|^2$, where $\|z\|^2$ is the square norm of a local non-vanishing holomorphic section (it is independent of the choice of section by the $\partial\overline\partial$-lemma). You can then compute in local normal (holomorphic) coordinates the coefficients $g_{i\bar j}$ and use that the Ricci form is given by $Ric(\omega)=-\sqrt{-1}\partial\overline\partial\log\det(g_{i\bar{j}})$. This will obviously give you the same result, but in the form $Ric(\omega_{FS})=(n+1)\omega_{FS}$. As pointed out in the comments below, the reason for the missing factor $2$ in this computation is that we have to change from real orthonormal frames to complex unitary frames.

Your last sentence is not correct, the missing factor of $2$ comes up when changing from real orthonormal frames to complex unitary frames. – YangMills Feb 15 '12 at 15:30
@YangMills: thank you! I just corrected it! – Renato G Bettiol Feb 15 '12 at 17:04
typo: the metric should be $g_{i\bar{j}}$ – John B Feb 16 '12 at 17:43
@MG: Thanks! I guess this fixes it. In the code I actually had typed i\overline j, but I agree the result seemed more like \overline{ij}. Using \bar instead seems to give a better output... thanks! – Renato G Bettiol Feb 16 '12 at 19:28
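For completeness, here is a short sketch, added here and not part of the original thread, of how the Einstein constant $\mu=2n+2$ follows from the sectional-curvature formula quoted in the answer. It uses the standard identity $Ric(X,X)=\sum_i sec(X,e_i)$ for a unit vector $X$ and an orthonormal basis $\{e_i\}$ of its orthogonal complement; the particular choice of frame below is an assumption made for convenience.

Pick a unit vector $X$ and a real orthonormal basis $e_1=JX,\,e_2,\dots,e_{2n-1}$ of $X^{\perp}\subset T_p\mathbb{C}P^n$, with $e_2,\dots,e_{2n-1}$ also orthogonal to $JX$. The formula $sec(X,Y)=1+3|\overline g(\overline Y,J\overline X)|^2$ gives $sec(X,JX)=4$ and $sec(X,e_i)=1$ for $i\ge 2$, so
$$Ric(X,X)=\sum_{i=1}^{2n-1}sec(X,e_i)=4+(2n-2)\cdot 1=2n+2,$$
hence $Ric=(2n+2)\,g$, i.e. $\mu=2n+2$, consistent with the accepted answer.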
{"url":"http://mathoverflow.net/questions/88512/fubini-study-metric-and-einstein-constant/88525","timestamp":"2014-04-19T17:28:24Z","content_type":null,"content_length":"59011","record_id":"<urn:uuid:8af95b87-f1e6-4bb5-b245-ecf2279546fe>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
Principles of Hydrostatic Pressure

UNIT PRESSURE, p. The unit pressure, meaning the intensity of pressure, at any point in a fluid is the amount of pressure per unit area. If the unit pressure is the same at every point on any area A on which the total pressure is P,

p = P/A

If, however, the unit pressure is different at different points, the unit pressure at any point is equal to the total pressure on a small differential area surrounding the point divided by that differential area, or

p = dP/dA

Where there is no danger of ambiguity, the term pressure is often used as an abbreviated expression for unit pressure. The fundamental foot-pound-second unit of pressure is pounds per square foot, but pounds per square inch is often used.

Direction of Resultant Pressure. The resultant pressure on any plane in a fluid at rest is normal to that plane. Assume that the resultant pressure P on a plane AB makes an angle other than 90 degrees with the plane. Resolving P into rectangular components P1 and P2, respectively parallel with and perpendicular to AB, gives a component P1 which can be resisted only by a shearing stress. By definition, a fluid at rest cannot resist a shearing stress, and therefore the pressure must be normal to the plane. This means that there can be no static friction in hydraulics.
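A quick numerical illustration of the unit-pressure definition (the numbers are hypothetical and chosen only for this example; they are not from the original notes): if a total pressure of P = 720 lb acts uniformly on a plane area of A = 2 ft^2, then
$$p=\frac{P}{A}=\frac{720\ \mathrm{lb}}{2\ \mathrm{ft^2}}=360\ \mathrm{lb/ft^2}=\frac{360}{144}\ \mathrm{lb/in^2}=2.5\ \mathrm{psi},$$
where the factor 144 converts square feet to square inches (1 ft^2 = 144 in^2).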
{"url":"http://hydrostatics.wordpress.com/lecture-page/hydrostatic-pressure/principles-of-hydrostatic-pressure/","timestamp":"2014-04-20T18:54:07Z","content_type":null,"content_length":"30584","record_id":"<urn:uuid:d2f1f675-4ed3-4994-a601-645c1e74249f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00062-ip-10-147-4-33.ec2.internal.warc.gz"}