content
stringlengths
86
994k
meta
stringlengths
288
619
Greek numerals are a bit more complicated than Roman. There are two distinct groups of Greek numerals, and both are given below. First is the Acrophonic - this is type of Greek numerals that were used in ancient Greece up to Roman times (100 BC). It is called acrophonic because of the initial letter of the word designation - for example g (in Greek script as below) stood for five, and five was 'gente'; h stood for 100, and hundred was 'hekaton' and so on. This system is also sometimes called 'Attic' - it also stands for an offspring of Ionic dialect spoken in the city of Athens. This was later replaced by the Alphabetic numerals - where 27 letters and their combinations stood for all numbers. The table is given below too. This system is also sometimes called Ionic. It was not positional, meaning that the place of the letter did not denote its value like we have it in our number system. Acrophonic Greek Numerals Alphabetic Greek Numerals Can you do calculations easily? Can you spot some 'interesing' numbers/letters? Which ones are they? Try after you visited the famous numbers page.
{"url":"http://www.mathsisgoodforyou.com/numerals/greeknums.htm","timestamp":"2014-04-16T19:13:53Z","content_type":null,"content_length":"9226","record_id":"<urn:uuid:d65e4372-5874-4eb7-9a75-f83ff6af001a>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
hash and __eq__ Aahz aahz at pythoncraft.com Tue Jun 2 17:27:02 CEST 2009 In article <003e1491$0$9723$c3e8da3 at news.astraweb.com>, Steven D'Aprano <steve at REMOVE-THIS-cybersource.com.au> wrote: >On Sun, 31 May 2009 07:24:09 +0100, Arnaud Delobelle wrote: >> AFAIK, 'complexity' means 'worst case complexity' unless otherwise >> stated. >No, it means "average or typical case" unless otherwise specified. >Consult almost any comp sci text book and you'll see hash tables with >chaining (like Python dicts) described as O(1) rather than O(N), >Quicksort as O(N log N) instead of O(N**2), and similar. If the default >was "worst case unless otherwise specified", then Quicksort would be >called "Slower than Bubblesort Sort". >(Both are O(N**2), but Quicksort does more work.) >Here's a quote on-line: >"You should be clear about which cases big-oh notation describes. By >default it usually refers to the average case, using random data. >However, the characteristics for best, worst, and average cases can be >very different..." When talking about big-oh, I prefer to define "worst-case" as "real world likely worst-case" -- for example, feeding an already-sorted list to Quicksort. However, the kind of worst-case that causes a dict to have O(N) behavior I would call "pathological case". I generally try to define big-oh in terms of what I call worst-case, because I think it's important to keep track of where your algorithm is likely to run into problems, but I don't think it's worth the effort in most cases to worry about pathological cases. In the case of dicts, it's important to remember that pathological cases are far more likely to arise from poorly designed hashable user classes than from data problems per se; Python's hashing of built-in types has almost twenty years of tuning. Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ on-a-new-machine-ly y'rs - tim More information about the Python-list mailing list
{"url":"https://mail.python.org/pipermail/python-list/2009-June/539089.html","timestamp":"2014-04-19T21:23:32Z","content_type":null,"content_length":"5031","record_id":"<urn:uuid:3bf83b07-d77e-495f-ae18-b8afc0621921>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> LGM: Multiple indicator and regular models LGM: Multiple indicator and regular m... Tiina posted on Friday, January 06, 2006 - 3:30 am I'm using Mplus version 3, and wondering the following questions concerning multiple and regular growth models (with three time scores). The questions are timely also for several of my colleagues. 1) If there is a possibility to use multiple indicator GM, is this always a better option? I understand the difference in parametrization between the two types of LGMs, but does the multiple indicator model do better job in controlling for the measurement error in the examined variables, given that the time scores are already latent? I suspect that I'm seeing this happening in my data, because I'm getting a higher regression coefficient among a covariate and the slope factor with a multiple than regular LGM. Both models fit well to the data. Are there any conditions under which you would recommend to use the one over the other? 2) About model identification: If I'm letting the program to estimate one of the time score loadings on the slope factor, thus ending up with only two "known" indicators for the latent variables, do I need to fix one time score to zero for the model to be identified, or does the parametrization of the LGM make sure that the model is identified anyway? Are there any differences in this between multiple indicator and regular GM? I'm asking this because sometimes I'm able to run such a model, and sometimes not. Further, do these issues depend on whether I'm using the "BY" or the random slope (|) statement in the syntax? 3) When constructing a multiple growth process model including effects from 1) a covariate to the intercept and slope of a growth curve, 2) which (intercept) again has an effect on the intercept and slope of another growth curve, and 3) it is known that the predictor variable predicting the intercept and slope of the first growth curve has similar effects on the second growth curve when tested separately, is it possible to test mediational effects trough the first growth curve to the next when all of these components are included in the same model (i.e., one predictor variable measured at T1, and two growth processes including three time scores, and the level and linear slope factors)? This would make theoretical sense, and I quess that my interest would be to model the mediation trough the slope-component of the first growth curve. In other words, may the intercept and slope factors be treated as regular latent variables, allthough they are specified components of growth 4) When regressing an intercept and slope of a LGM on a covariate, does one need to specify the relationship among the intercept and the level as a regression path to say that "...varible x predicts increases in variable y while CONTROLLING FOR THE INITIAL LEVEL of...", or is this true also in case these components are simply allowed to correlate? If not, if X predicts the slope of an Y only when the intercept and the level are correlated but not when the path is defined as a regression, can one really say that the X predicts increases in Y in similar manner as in traditional, longitudinal SEM model (allthough in LGM, the latent factors represent different things than traditional latent variables). If not, is this really "longitudinal" information (which would be essential for my paper)? 5) I noticed a discussion about the difference in obtained covariate effects on growth components when letting the intercept and the slope to be correlated vs. including a regression among them, in a situation where these factors are heavily negatively correlated: Significant effects from the covariate to these factors were obtained only when including a regression among the level and the slope. I have a similar situation, and wanted to ask if this effect is specific/common for the situation when the intercept and slope are negatively correlated? What may account for this? Thank you Bengt and Linda for your time, and excellent website!! Linda K. Muthen posted on Friday, January 06, 2006 - 11:34 am 1. If the factor model fits well and measurement invariance across time has been established, the multiple indicator model would be best. In this model time-specific variance and measurement error can be separated unlike a regular growth model where the two are combined. 2. See Chapter 16, Growth Models, to see the difference between the BY and | specifications of growth models and to see how to specify a multiple indicator growth model. 3. Yes, growth factors can be treated as any latent variable. 4. All covariates are controlled for in the regression of a dependent variable on a set of independent variables. 5. The is nothing special about a negative covariance that would require special modeling techniques. Tiina posted on Monday, January 09, 2006 - 7:07 am Dear Linda, Thank you for your timely response. I still wanted to specify the question 4. I meant controlling for the initial level of the dependent variable/growth curve, rather than control for other independent variables. That is, if I'm obtaining a significant effect on the linear slope factor (and possibly also on the level factor), is the meaning of this effect similar in case when there is a correlation among the level and slope factor, and when the slope is regressed to the level? Or is it so that including the regression among the level and the slope is the only way to indicate that some covariate in fact predicts increases/decreases in some variable over time? Thank you very much! Linda K. Muthen posted on Monday, January 09, 2006 - 8:41 am I think you are asking about the following two models: Model 1 where s and i are correlated and s amd i are regressed on an observed covariate: s ON x; i ON x; i WITH s; Model 2 where s is regressed on i and an observed covariate and i is regressed on an observed covariate: s ON i x; i ON x; In Model 2, i and x are controlled for each other so the regression coefficent for x will not be the same as in Model 1. Back to top
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=14&page=987","timestamp":"2014-04-16T16:02:36Z","content_type":null,"content_length":"24885","record_id":"<urn:uuid:14dc78cf-5cc3-4ce0-a2b7-8c28cbc4951d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
MATH 3703 Schedule-Spring 2014 Lecture Notes │Chapter 11│Chapter 12│Chapter 13│Chapter 14│ │ 11.1 │ 12.1 │ 11.2 │ 13.1 │ │ 11.3 │ 12.2 │ 14.1 │ 13.2 │ │ 11.4 │ 12.3 │ 14.2 │ 13.3 │ │ 14.3 │ 12.4 │ 14.4 │ 11.3 │ │ │ │ 14.5 │ 13.4 │ Rubric for Geometry in _______________________ Review Sheets MATH 3703 Spring 2014 Course Name: Geometry for P-8 Teachers Office Number: 210 Boyd Bldg. Instructor: Dr. Sheila Rivera Office Hours: Given in class E-mail: srivera@westga.edu Telephone: O: 678-839-4141 H: 770-920-2177 Course Objectives: Students will demonstrate: 1. A better understanding of standard vocabulary and symbols of elementary mathematics; 2. An ability to reason logically and to provide justifications and coherent arguments for the plausibility of conjectures; 3. An ability to use geometry in real-world problem-solving; 4. Well-developed spatial sense including both two- and three-dimensional figures (tessellations, symmetry, congruence, similarity, polygons and other curves, polyhedra); 5. A better understanding of geometry and measurement from a historical perspective; 6. A better understanding of measurement including the metric system; 7. An ability to solve measurement problems involving perimeter, circumference, area, volume, temperature, and mass; 8. A better understanding of synthetic, coordinate, and transformational geometry with an emphasis on problem-solving; 9. A better understanding of the uses of a variety of manipulatives, technology, and other materials for the P-8 level; 10. A better understanding of the vision of mathematics education as put forth in NCTM’s Principles and Standards (2000); 11. A better understanding of the scope and sequence of elementary school mathematics programs; 12. A knowledge of current professional literature in the field of mathematics education. Text: A Problem Solving Approach to Mathematics for Elementary School Teachers, Addison-Wesley Publishing Co., Inc. Eleventh Ed., 2004. Authors: Billstein, Libeskind, Lott. Additional Supplies: You will need to have a ruler, a compass, a protractor, and a pair of scissors. Also, a packet of course handouts is available at the bookstore. Test 1 100 points Test 2 100 points Test 3 100 points Test 4 100 points Portfolio 180 points Final Exam 150 points Grading Policy: A (657-730 pts.), B (584-656 pts.), C (511-583 pts.), D (438-510 pts.), F (0-437 pts.) Late Assignments Policy: Hard copies of all assignments, journal entries, etc. are due at the beginning of class on the specified date (see course schedule). Late assignments will not be accepted unless prior approval has been given by the instructor. In the event that a late assignment is accepted, the grade on the assignment will be lowered (i.e. points will be deducted). Attendance Policy: Students are expected to attend all classes. This term a student may withdraw with a grade of W through March 4th, regardless of grades, absences, etc. This deadline has been established by the University. After this deadline, if a student has accumulated more than three absences throughout the semester, he/she will normally receive a grade of WF (which counts as an F). The three absences should be saved for sickness and other emergencies. Late arrivals and early exits count one-half of an absence. If a student is absent for a test and has an excuse from someone in authority, then the final exam grade will be used for the missed test in the calculation of the final course grade. No make-up exams will be given. This means no early tests/no late tests under any circumstance. Suggested Problems: For each section covered in class there will be a set of problems provided. These are not homework problems in the sense that they will be taken up and graded. Instead, these are problems that are recommended for you to work in order to be successful in the class. If you have questions concerning the suggested problems, you should address these questions to the instructor during office hours, before or after class, or during the review session prior to the test. Conferences: Conferences can be beneficial and are encouraged. All conferences should occur during the instructor’s office hours, whenever possible. If these hours conflict with a student’s schedule, then appointments should be made. The conference time is not to be used for duplication of lectures that were missed; it is the student’s responsibility to obtain and review lecture notes before consulting with the instructor. The instructor is very concerned about the student’s achievement and well-being and encourages anyone having difficulties with the course to come by the office for extra help. Grades will be based on coursework, not on Hope Grant needs, GPA, or any other factors outside the realm of coursework.
{"url":"http://www.westga.edu/~srivera/3703/syallabus-2.htm","timestamp":"2014-04-16T22:26:24Z","content_type":null,"content_length":"119974","record_id":"<urn:uuid:02cb4aeb-be76-4142-ba41-e657c8f17a5e>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculate your phone number - Aquila Online Calculate your phone number Here’s a quick party trick. [Via] 1. Grab a calculator. (unless you’re a MENSA member) 2. Key in the first three digits of your phone number. (NOT the area/operator code) 3. Multiply by 80 4. Add 1 5. Multiply by 250 6. Add the last 4 digits of your phone number 7. Add the last 4 digits of your phone number again 8. Subtract 250 9. Divide number by 2 7 thoughts on “Calculate your phone number” 1. Hm… freaky! 2. Very good, except we have 8 numbers not seven in the UK. 3. Hi liked your blog nice,but like the above we have 8 numbers,Wish you well 4. does it work if you add your last 5 digits instead of 4? 5. well, isn’t that a clever bar trick. then again i ain’t no math genius. 6. By me dosent work i ad 616 and 2. Key in the first three digits of your phone number. (NOT the area/operator code) 3. Multiply by 80 4. Add 1 5. Multiply by 250 6. Add the last 6166 digits of your phone number 7. Add the last 6166 digits of your phone number again 8. Subtract 250 9. Divide number by 2 and the number is wrong my phone number is +38640616688 why is this not working
{"url":"http://www.aquilaonline.co.za/2006/07/calculate-your-phone-number/","timestamp":"2014-04-21T14:59:29Z","content_type":null,"content_length":"41407","record_id":"<urn:uuid:c74f49b9-cf50-4517-b9c2-d6b3c54705d0>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
Is the universal enveloping algebra of a finite-dimensional Lie algebra (left) noetherian? up vote 1 down vote favorite The universal enveloping algebra of a Lie algebra $\mathfrak{g}$ is a flat deformation of $S(\mathfrak{g})$, so these algebras should be similar in many ways. Does at least this general similarity ra.rings-and-algebras rt.representation-theory 3 The universal enveloping algebra of a finite dimensional Lie algebra is a so-called G-alegbra, hence is left and right Noetherian (see e.g. singular.uni-kl.de/Manual/3-1-5/sing_510.htm). Note that this includes quantized enveloping algebra as well. – Adrien Jan 10 '13 at 10:47 Adrien, thank you very much for the reference. – Oleg Jan 10 '13 at 11:36 add comment 1 Answer active oldest votes Yes, if a filtered ring $R$ has the property that its associated graded ring is Noetherian, then $R$ is Noetherian. Universal enveloping algebras have a PBW filtration such that the up vote 5 associated graded algebra is $S(\mathfrak{g})$. This is proved in Noncommutative Noetherian Rings by McConnell, Robson, Small - see sections 1.6 and 1.7. down vote Thank you very much for the reference. – Oleg Jan 10 '13 at 11:27 You're welcome - do you have a reference for the flat deformation idea you mentioned in the question? It seems like an interesting point of view. – m_t Jan 10 '13 at 12:59 I don't have a reference, but I can just explain you how I see it. $U(g)$ is the quotient of the tensor algebra $T(g)$ modulo the relations $x\otimes y−y\otimes x−[x,y]$ for $x,y\in 1 g$. Let $k$ be the base field, $t$ be a variable and $B$ be the quotient of the algebra $T(g)\otimes_k k[t]$ modulo the relations $x\otimes y−y\otimes x−t[x,y]$. Then $B$ is (I hope --- I haven't checked) a free $k[t]$-module with the usual PBW basis and an algebra over $k[t]$, the fiber of $B$ over the point $t=0$ is $S(g)$, and the fiber over $t−a$ for $a\in k^ *$ is $U(g)$. – Oleg Jan 10 '13 at 13:34 I think I got the idea of the construction from Chapter 6 of Eisenbud's Commutative Algebra with a View toward Algebraic Geometry. – Oleg Jan 10 '13 at 13:41 And here is more to deformation: mathoverflow.net/questions/41142/… – Oleg Jan 10 '13 at 17:10 add comment Not the answer you're looking for? Browse other questions tagged ra.rings-and-algebras rt.representation-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/118511/is-the-universal-enveloping-algebra-of-a-finite-dimensional-lie-algebra-left-n/118521","timestamp":"2014-04-21T04:40:19Z","content_type":null,"content_length":"59025","record_id":"<urn:uuid:9770324e-073e-4d57-bf8d-5319ca80601a>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
When we take a stroll in the park and notice the pretty flowers lining our sidewalks, we hardly take time to look into the mathematic beauty of it all. What kind of math you might ask? Specifically, the Fibonacci numbers. First things first, if one were mildly observant, he or she would count the petals on a flower. There are the flowers with 1 petal: Flowers with 2 petals: Flowers with 3 petals: Flowers with 5 petals: Flowers with 8 petals: And…. The list goes on. There are Black-eyed Susans with 13 petals, daisies with 21, and 34 petals. Overall, we see a pattern of: 1, 2, 3, 5, 8, 13, 21, 34 and so on. This is a clear example of the Fibonacci sequence starting at 1. However, you can also find the Fibonacci sequence starting on 13 through just ordinary field daisies. There are daisies with 13, 21, 34, 55 and even 89 petals; these are all prime examples of the Fibonacci sequence. Although petal number is intriguing, if one looked even closer into the stems of a simple plant, you could also find the pronounced Fibonacci sequence. For example this is a diagram of a grown If you draw lines through the flower’s axils, you’ll see that the number of branches up each level represent the Fibonacci number sequence The number of leaves up each level, also represents the Fibonacci Sequence! The Fibonacci pattern also occurs in tree growth with the number of branches from bottom to top in trees.
{"url":"http://www.fabulousfibonacci.com/portal/index.php?option=com_content&view=article&id=10&Itemid=9","timestamp":"2014-04-19T22:06:06Z","content_type":null,"content_length":"21161","record_id":"<urn:uuid:8f092de2-9a13-40e7-b62e-faa47721cfad>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00325-ip-10-147-4-33.ec2.internal.warc.gz"}
Splits a convex polygon by a plane npm install split-polygon Want to see pretty graphs? Log in now! 1 downloads in the last day 10 downloads in the last week 15 downloads in the last month Splits a convex polygon by a plane into two parts (or optionally clips the polygon against a single plane) using the Sutherland-Hodgman algorithm. Works in arbitrary dimensions, both in the server and the browser npm install split-polygon var splitPolygon = require("split-polygon") var poly = [[1,2], [3,4], [0,0]] var parts = splitPolygon(poly, [0, 1, 3]) var splitPolygon = require("split-polygon") splitPolygon(poly, plane) Splits the convex polygon poly against plane into two parts, one above the plane and the other below it. The equation for the plane is determined by: function planeDistance(x) { return plane[0] * x[0] + plane[1] * x[1] + ... + plane[n-1] * x[n-1] + plane[n] Points above the plane are those where planeDistance(x) >= 0 and below are those with planeDistance(x) <= 0 • poly is a convex polygon • plane is the plane Returns An object with two properties: • positive is the portion of the polygon above the plane • negative is the portion of the polygon below the plane splitPolygon.positive(poly, plane) Same result as splitPolygon, except only returns the positive part. This saves a bit of memory if you only need one side. splitPolygon.negative(poly, plane) Ditto, except returns only the negative part. (c) 2013 Mikola Lysenko. MIT License
{"url":"https://www.npmjs.org/package/split-polygon","timestamp":"2014-04-21T10:32:15Z","content_type":null,"content_length":"9468","record_id":"<urn:uuid:a810ad57-fcd3-4ec8-beee-bfca87aee96b>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Baltimore, MD Algebra 1 Tutor Find a Baltimore, MD Algebra 1 Tutor ...Honors graduate in Mathematical Physics with a Postgraduate Diploma Electrical Engineering, specializing in Software. I have overs 20 years corporate industrial experience in the field of Software Engineering and Telecommunications (ICT). I have worked for the big blue chip companies in the Comm... 16 Subjects: including algebra 1, physics, calculus, geometry ...Simply put: I try to understand the student, then I present the information being taught in a way that the student can relate to. It works - every time. I have tutored over 190 students in my life time. 17 Subjects: including algebra 1, chemistry, reading, biology Hello! I am an experienced tutor eager to help you in almost any subject! I am proficient using the Calvert Hall Home School Curriculum in addition to other home school programs. 24 Subjects: including algebra 1, reading, chemistry, English ...I do not have any professional tutoring experience, but I have had good experiences tutoring my friends and family. I am an extremely patient person, and I am usually able to explain math problems in several different ways until they are understood. I also have scored very well on standardized ... 32 Subjects: including algebra 1, reading, algebra 2, calculus ...As an engineer, I have used Excel for parts lists, loads calculation, mass properties management (weight and CG, along with mass moment of inertia) for a nacelle, fastener CG calculation. I consider myself an advanced user, utilizing most aspects of Excel equation types - logical tests, math an... 10 Subjects: including algebra 1, physics, calculus, geometry Related Baltimore, MD Tutors Baltimore, MD Accounting Tutors Baltimore, MD ACT Tutors Baltimore, MD Algebra Tutors Baltimore, MD Algebra 2 Tutors Baltimore, MD Calculus Tutors Baltimore, MD Geometry Tutors Baltimore, MD Math Tutors Baltimore, MD Prealgebra Tutors Baltimore, MD Precalculus Tutors Baltimore, MD SAT Tutors Baltimore, MD SAT Math Tutors Baltimore, MD Science Tutors Baltimore, MD Statistics Tutors Baltimore, MD Trigonometry Tutors
{"url":"http://www.purplemath.com/baltimore_md_algebra_1_tutors.php","timestamp":"2014-04-16T04:51:29Z","content_type":null,"content_length":"23920","record_id":"<urn:uuid:e303d3ed-03ec-4751-ab77-558127f5438b>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00108-ip-10-147-4-33.ec2.internal.warc.gz"}
R: st: Zero-truncated Negative Binomial convergence [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] R: st: Zero-truncated Negative Binomial convergence From "Carlo Lazzaro" <carlo.lazzaro@tiscalinet.it> To <statalist@hsphsun2.harvard.edu> Subject R: st: Zero-truncated Negative Binomial convergence Date Tue, 17 Mar 2009 18:53:16 +0100 Dear Tony, may I ask you for the reference of the papers in Statistics in Medicine you mentioned in your reply to the thread Emily started? Thanks a lot and Kind Regards, -----Messaggio originale----- Da: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] Per conto di Lachenbruch, Inviato: martedì 17 marzo 2009 18.16 A: statalist@hsphsun2.harvard.edu Oggetto: RE: st: Zero-truncated Negative Binomial convergence If the zeros are identifiable from some other information - e.g., hospital costs will be 0 if the patient isn't hospitalized or in this case we would know the visits to a specialist is 0 if the patient is only seen for physical exams, etc. - then a two-part model might work. In this case one uses two models: one for the number of visits if visits are >0 and one (a logistic) for distinguishing 0 vs. non-zero. I have a few papers in Statistics in Medicine in 2001 that may be helpful. I would emphasize that in this case, some of the 0 visits to a specialist are in patients who should have seen a specialist but didn't. Peter A. Lachenbruch Department of Public Health Oregon State University Corvallis, OR 97330 Phone: 541-737-3832 FAX: 541-737-4001 -----Original Message----- From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Maarten buis Sent: Monday, March 16, 2009 2:08 PM To: stata list Subject: RE: st: Zero-truncated Negative Binomial convergence --- Emily Wilson wrote: > I am having trouble running a zero-truncated negative binomial > regression. The dependent variable is: # of visits to a > specialist physician in the past year, and the distribution is > something like: > 0 visits ~= 93,000 > 1 visit ~= 15,000 > 2 visits ~= 1,000 > 3 visits ~= 500 The zero truncated distribution assumes that there are no zeros. In your case you definately do have zeros. I would start with a regular -poisson-, and than I might worry about excess zeros, for which you can look at the zero inflated poisson (-zip-), the negative binomial (-nbreg-), and zero inflated negative binomial (-zinb-). There is a nice discussion of these models and how to choose between them in this book: Hope this helps, Maarten L. Buis Institut fuer Soziologie Universitaet Tuebingen Wilhelmstrasse 36 72074 Tuebingen * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2009-03/msg00916.html","timestamp":"2014-04-18T18:30:29Z","content_type":null,"content_length":"9294","record_id":"<urn:uuid:110b7535-00bc-439a-a299-d85ccbf56fe3>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Array of Callables Shane Holloway shane.holloway@ieee.... Wed Mar 21 11:01:03 CDT 2007 On Mar 21, 2007, at 6:58 AM, Anne Archibald wrote: > Vectorizing apply is what you're looking for, by the sound of it: > In [13]: a = array([lambda x: x**2, lambda x: x**3]) > In [14]: b = arange(5) > In [15]: va = vectorize(lambda f, x: f(x)) > In [16]: va(a[:,newaxis],b[newaxis,:]) > Out[16]: > array([[ 0, 1, 4, 9, 16], > [ 0, 1, 8, 27, 64]]) > Once in a while it would also be nice to vectorize methods, either > over self or not over self, but the same trick (vectorize an anonymous > function wrapper) should work fine. Varadic functions do give you > headaches; I don't think even frompyfunc will allow you to vectorize > only some of the arguments of a function and leave the others > unchanged. > Anne Thanks for the info. I tried this, and found that it benchmarks about half of just iterating over the methods in the list and calling them. I think the reason is that two python frames are actually being run -- one for the vectorized apply, and one for whatever I'm calling. So what I'm thinking is that I could accomplish this if I implemented the apply call as a C-level ufunc. I'll try hacking about with this idea. Thanks Anne! More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2007-March/026611.html","timestamp":"2014-04-17T19:07:17Z","content_type":null,"content_length":"3992","record_id":"<urn:uuid:a3ab4963-e3bc-419b-9dd3-a7e3ec975c0d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
Bridge circuits No text on electrical metering could be called complete without a section on bridge circuits. These ingenious circuits make use of a null-balance meter to compare two voltages, just like the laboratory balance scale compares two weights and indicates when they're equal. Unlike the "potentiometer" circuit used to simply measure an unknown voltage, bridge circuits can be used to measure all kinds of electrical values, not the least of which being resistance. The standard bridge circuit, often called a Wheatstone bridge, looks something like this: When the voltage between point 1 and the negative side of the battery is equal to the voltage between point 2 and the negative side of the battery, the null detector will indicate zero and the bridge is said to be "balanced." The bridge's state of balance is solely dependent on the ratios of R[a]/R[b] and R[1]/R[2], and is quite independent of the supply voltage (battery). To measure resistance with a Wheatstone bridge, an unknown resistance is connected in the place of R[a] or R[b], while the other three resistors are precision devices of known value. Either of the other three resistors can be replaced or adjusted until the bridge is balanced, and when balance has been reached the unknown resistor value can be determined from the ratios of the known resistances. A requirement for this to be a measurement system is to have a set of variable resistors available whose resistances are precisely known, to serve as reference standards. For example, if we connect a bridge circuit to measure an unknown resistance R[x], we will have to know the exact values of the other three resistors at balance to determine the value of R[x]: Each of the four resistances in a bridge circuit are referred to as arms. The resistor in series with the unknown resistance R[x] (this would be R[a] in the above schematic) is commonly called the rheostat of the bridge, while the other two resistors are called the ratio arms of the bridge. Accurate and stable resistance standards, thankfully, are not that difficult to construct. In fact, they were some of the first electrical "standard" devices made for scientific purposes. Here is a photograph of an antique resistance standard unit: This resistance standard shown here is variable in discrete steps: the amount of resistance between the connection terminals could be varied with the number and pattern of removable copper plugs inserted into sockets. Wheatstone bridges are considered a superior means of resistance measurement to the series battery-movement-resistor meter circuit discussed in the last section. Unlike that circuit, with all its nonlinearities (nonlinear scale) and associated inaccuracies, the bridge circuit is linear (the mathematics describing its operation are based on simple ratios and proportions) and quite accurate. Given standard resistances of sufficient precision and a null detector device of sufficient sensitivity, resistance measurement accuracies of at least +/- 0.05% are attainable with a Wheatstone bridge. It is the preferred method of resistance measurement in calibration laboratories due to its high accuracy. There are many variations of the basic Wheatstone bridge circuit. Most DC bridges are used to measure resistance, while bridges powered by alternating current (AC) may be used to measure different electrical quantities like inductance, capacitance, and frequency. An interesting variation of the Wheatstone bridge is the Kelvin Double bridge, used for measuring very low resistances (typically less than 1/10 of an ohm). Its schematic diagram is as such: The low-value resistors are represented by thick-line symbols, and the wires connecting them to the voltage source (carrying high current) are likewise drawn thickly in the schematic. This oddly-configured bridge is perhaps best understood by beginning with a standard Wheatstone bridge set up for measuring low resistance, and evolving it step-by-step into its final form in an effort to overcome certain problems encountered in the standard Wheatstone configuration. If we were to use a standard Wheatstone bridge to measure low resistance, it would look something like this: When the null detector indicates zero voltage, we know that the bridge is balanced and that the ratios R[a]/R[x] and R[M]/R[N] are mathematically equal to each other. Knowing the values of R[a], R [M], and R[N] therefore provides us with the necessary data to solve for R[x] . . . almost. We have a problem, in that the connections and connecting wires between R[a] and R[x] possess resistance as well, and this stray resistance may be substantial compared to the low resistances of R[a] and R[x]. These stray resistances will drop substantial voltage, given the high current through them, and thus will affect the null detector's indication and thus the balance of the bridge: Since we don't want to measure these stray wire and connection resistances, but only measure R[x], we must find some way to connect the null detector so that it won't be influenced by voltage dropped across them. If we connect the null detector and R[M]/R[N] ratio arms directly across the ends of R[a] and R[x], this gets us closer to a practical solution: Now the top two E[wire] voltage drops are of no effect to the null detector, and do not influence the accuracy of R[x]'s resistance measurement. However, the two remaining E[wire] voltage drops will cause problems, as the wire connecting the lower end of R[a] with the top end of R[x] is now shunting across those two voltage drops, and will conduct substantial current, introducing stray voltage drops along its own length as well. Knowing that the left side of the null detector must connect to the two near ends of R[a] and R[x] in order to avoid introducing those E[wire] voltage drops into the null detector's loop, and that any direct wire connecting those ends of R[a] and R[x] will itself carry substantial current and create more stray voltage drops, the only way out of this predicament is to make the connecting path between the lower end of R[a] and the upper end of R[x] substantially resistive: We can manage the stray voltage drops between R[a] and R[x] by sizing the two new resistors so that their ratio from upper to lower is the same ratio as the two ratio arms on the other side of the null detector. This is why these resistors were labeled R[m] and R[n] in the original Kelvin Double bridge schematic: to signify their proportionality with R[M] and R[N]: With ratio R[m]/R[n] set equal to ratio R[M]/R[N], rheostat arm resistor R[a] is adjusted until the null detector indicates balance, and then we can say that R[a]/R[x] is equal to R[M]/R[N], or simply find R[x] by the following equation: The actual balance equation of the Kelvin Double bridge is as follows (R[wire] is the resistance of the thick, connecting wire between the low-resistance standard R[a] and the test resistance R[x]): So long as the ratio between R[M] and R[N] is equal to the ratio between R[m] and R[n], the balance equation is no more complex than that of a regular Wheatstone bridge, with R[x]/R[a] equal to R[N]/ R[M], because the last term in the equation will be zero, canceling the effects of all resistances except R[x], R[a], R[M], and R[N]. In many Kelvin Double bridge circuits, R[M]=R[m] and R[N]=R[n]. However, the lower the resistances of R[m] and R[n], the more sensitive the null detector will be, because there is less resistance in series with it. Increased detector sensitivity is good, because it allows smaller imbalances to be detected, and thus a finer degree of bridge balance to be attained. Therefore, some high-precision Kelvin Double bridges use R[m] and R[n] values as low as 1/100 of their ratio arm counterparts (R[M] and R[N], respectively). Unfortunately, though, the lower the values of R[m] and R[n], the more current they will carry, which will increase the effect of any junction resistances present where R[m] and R[n] connect to the ends of R[a] and R[x]. As you can see, high instrument accuracy demands that all error-producing factors be taken into account, and often the best that can be achieved is a compromise minimizing two or more different kinds of errors. • REVIEW: • Bridge circuits rely on sensitive null-voltage meters to compare two voltages for equality. • A Wheatstone bridge can be used to measure resistance by comparing the unknown resistor against precision resistors of known value, much like a laboratory scale measures an unknown weight by comparing it against known standard weights. • A Kelvin Double bridge is a variant of the Wheatstone bridge used for measuring very low resistances. Its additional complexity over the basic Wheatstone design is necessary for avoiding errors otherwise incurred by stray resistances along the current path between the low-resistance standard and the resistance being measured. Related Links
{"url":"http://www.allaboutcircuits.com/vol_1/chpt_8/10.html","timestamp":"2014-04-19T17:01:41Z","content_type":null,"content_length":"21943","record_id":"<urn:uuid:0af082b1-cf4f-4649-aa5d-abc61d5e8db6>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
Simulating the World Cup Knockout Stage June 24, 2010 — Andrew Moylan, Technical Communication & Strategy The knockout stage of the 2010 FIFA World Cup is about to begin in South Africa. At the time of writing, every team has one group stage match remaining, and most teams still have a chance to finish in the top two places in their group and progress to the knockout stage (see the tournament schedule and group stage standings). There are different approaches to ranking world football teams. The most well known is FIFA’s official world rankings, which are derived from points gained and lost in each match according to a heuristic set of rules that generally reward winning against higher-ranked opponents in more-important tournaments. A simple alternative with a more statistical basis is an Elo rating system (described in more detail below). A handy property of Elo rating systems is that they directly provide an estimate of the probability that a given team will perform better than another. We can use Mathematica with that to set up simulations of the knockout stage of the World Cup. This lets us estimate things like the chance of each team winning the tournament. We’ll also generate some nice visualizations of the results, such as the following simulated knockout stage (based on the current top two teams in each Elo rating systems are used in many other sports and games, including international Chess and Go competitions. The World Football Elo Ratings website (www.eloratings.net)* maintains up-to-date Elo ratings for all national football teams. The following table compares the top 10 national football teams according to the official FIFA rankings and the alternative Elo ratings, showing some significant differences. An Elo rating system works by assuming that the performance of a team in a match is a random value drawn from a certain probability distribution. The mean of the distribution is called the Elo rating for that team. The particular shape of the distribution may be freely chosen, but a common choice is ExtremeValueDistribution[α,400/Log[10]]. This is the distribution assumed for the stats we will use from World Football Elo Ratings (see details of how those ratings are calculated). Here are the expected performance distributions for Slovakia and Brazil, whose latest Elo ratings (at the time of writing) are 1605 and 2100 respectively. The plot shows that Brazil is expected to usually, but not always, have a greater performance. The assumption that performance follows an extreme value distribution leads directly to the probability that one team outperforms another, as a function of their rating difference. We can verify this by computing the probability that p1 > p2, where p1 and p2 are the performances of two teams, and r1 and r2 are their Elo ratings. To simulate knockout stage matches, we will take this Elo probability of one team outperforming another to be the probability that they win the match—hence the function name WinExpectation. We can import the latest ratings directly into Mathematica: (Due to web traffic, www.eloratings.net may be down during the tournament. You can import ratings from Google’s cached version or from Wikipedia instead—see the downloadable notebook near the end of this post for an example.) We’ll also define a function TeamElo that looks up the rating of a given team: It works like this: Now we are in a position to simulate the result of a match between teams a and b. This SimulateMatch function returns a symbolic representation of a completed match, Match[{a,b},winner], indicating which teams played and who won. I defined a custom appearance for Match objects, which you can see at the bottom of this post. We can define properties of symbolic objects, such as this simple one: The winner is not always the same in our random simulated matches. Next we’ll simulate each round of the knockout stage. Some synonyms help standardize country names between the different data sources: For now we’ll assume that the two teams currently atop each group make it through to the knockout stage. We can import the group stage standings directly from the World Cup website: The teams coming first and second in each of the groups A through H slot into the knockout draw as follows (see the knockout draw on the official website): KnockoutDraw gives a nested list encoding the structure of the knockout stage. We can visualize the resulting expression using TreeForm, showing the expected binary tree: To simulate the first round we just need to evaluate SimulateMatch on each pair of team names sitting at the second lowest level of this tree: A simulated first round: For subsequent rounds we simulate new matches between the winners of each pair of matches in the previous round: (The Sow function doesn’t affect the result. It is used here to accumulate rules indicating how each match depends on preceding matches, which will be handy later.) The entire knockout stage has four rounds in total: Here is the final match in one random simulation of the whole knockout stage: Here are the two semi-finals in another simulation: Every simulation is different. Here is a custom tree plot of a whole knockout stage: (We used Reap to gather up the rules accumulated by Sow in the NextRound function.) Here is a simulation starting with a different set of teams, those currently in first and third place in each group: We certainly aren’t limited to doing one simulation at a time. Here are the winners of 1000 simulated knockout stages using the teams in first and second place in the current group stage standings: Here are the same results as a bar chart giving the estimated probability of winning the tournament: Using our simulation framework we can explore all sorts of things: What about a giant knockout tournament containing the top 128 national teams? Download this notebook to see the source code for these examples and try some of your own! You can use the same notebook to run simulations with the final set of teams participating in the knockout stage, once they are determined.** * Note: All Elo ratings in this post and notebook are from World Football Elo Ratings, whose primary source of international football data is Advanced Satellite Consulting. ** You can view the notebook in the free Mathematica Player. To run your own simulations you need Mathematica—you can request a free trial. Read on to see how to set up the visual representation for teams and matches. We’ll visually represent each team with a labeled national flag, and winning teams get a bit of extra formatting to help them stand out. Mathematica has country flags built in, as part of CountryData: (We explicitly defined the flag for England, which is normally considered part of the United Kingdom by CountryData.) Here’s how the team icons look: Using the team icons we can define a visual representation of Match objects: Using Format, we told Mathematica that all symbolic Match objects should be displayed in this way: 16 Comments Posted by alex June 25, 2010 at 1:33 am Posted by Praveen kumar kjvs June 25, 2010 at 4:02 am Posted by sad June 25, 2010 at 4:44 am Posted by Helly June 25, 2010 at 6:45 am Posted by Jones June 25, 2010 at 6:46 am Posted by Roger June 25, 2010 at 6:46 am Posted by Jeff Green June 25, 2010 at 8:44 am Posted by Andrew June 26, 2010 at 2:36 pm Posted by fd June 27, 2010 at 11:54 pm Posted by Carla June 28, 2010 at 8:15 am Posted by Ben Heideveld July 9, 2010 at 3:25 am Posted by Chris June 24, 2011 at 12:46 pm Posted by KevinTran July 19, 2013 at 10:31 pm
{"url":"http://blog.wolfram.com/2010/06/24/simulating-the-world-cup-knockout-stage/","timestamp":"2014-04-19T17:03:07Z","content_type":null,"content_length":"131880","record_id":"<urn:uuid:a2e80cf7-b6a9-4aca-8e6f-00f8b576dc4a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
The diamond lemma The diamond lemma November 20, 2009 Posted by David Speyer in Uncategorized. A few results 1 (Bjorner, Eidelman and Ziegler) Suppose we have a finite collection of great circles on a sphere, none of them through the north or nouth pole. Let $R$ be the set of regions in the complement of these circles, and suppose that every region is a triangle. Put a partial order on $R$ by $x \leq y$ if $x$ is south of every circle that $y$ is south of. Show that, for $x$ and $y \in R$, there is some $z \in R$ such that $w \leq z$ if and only if $w \leq x$ and $w \leq y$. 2 (Mozes, see also IMO 1986.3) Let $G$ be a finite graph, and let $r$ be a real valued function on the vertices of $G$. Consider the following (solitaire) game: find a vertex $i$ for which $r_i$ is negative. Replace $r_i$ by $- r_i$ and, for every vertex $j$ that neighbors $i$, decrease $r_j$ by $-r_i$. The game ends if all of the $r_i$ are nonnegative. You and I start playing with the same graph and the same $r$. Show that, if my game ends in $N$ moves at position $z$, then your game will end in the same position, in the same number of moves. 3 (Poincare, Birkhoff and Witt) Define $U$ to be the ring generated by $E$, $F$ and $G$, subject to the relations $FE=EF+G$, $GE=EG+F$ and $GF=FG+E$. Show that any element of $R$ can be expressed uniquely as a sum of elements of the form $E^i F^j G^k$. (Uniqueness is up to rearranging the sum and combining like terms.) 4 (Jordan and Holder) Let $G$ be a finite group. Let $G = G_0 \supsetneq G_1 \supsetneq G_2 \supsetneq \cdots \supsetneq G_r = \{ e \}$ $G = H_0 \supsetneq H_1 \supsetneq H_2 \supsetneq \cdots \supsetneq H_s = \{ e \}$ by two sequences of subgroups such that $G_{i+1}$ is normal in $G_i$, with $G_{i+1}/G_i$ simple, and the same is true for the $H$‘s. Then $r=s$ and the quotients $H_{i+1}/H_i$ are a permutation of the quotients $G_{i+1}/G_i$. What do all of these have in common? You can remember all of their solutions by drawing the same figure — the diamond! Solution 1 We say that $z$ is a meet of $x$ and $y$ if $z$ has the required properties. For any region $r$, let $\ell(r)$ be the number of circles below $r$. We will prove the following statement by induction on $\ell(r)$: Inductive Claim Suppose that $r \geq x$ and $r \geq y$. Then $x$ and $y$ have a meet. This establishes the result, as we can take $r$ to be the region containing the north pole and the hypotheses on $r$ become trivial. In the other hand, the base case is trivial because, when $\ell(r) =0$, then $r$ must be the region containing the north pole, we have $r=x=y$, and $r$ is a meet of $x$ and $y$. Now for the inductive part. Let $r \to \cdots \to x$ and $r \to \cdots \to y$ be southward traveling paths from $r$ to $x$ and $y$. Let the first steps on these paths be $r \to s$ and $r \to t$, crossing lines $i$ and $j$. If $s=t$, or equivalently $i=j$, then we can take $s$ as a new $r$ and we are done by induction. Otherwise, let $u$ be the region due south of the crossing of $i$ and $j$. It is easy to see that $u$ is a meet of $s$ and $t$. Applying the inductive hypothesis to $(s, x, u)$, let $z'$ be a meet of $x$ and $u$. Applying the inductive hypothesis to $(t, u, y)$, let $z''$ be a meet of $u$ and $y$. Applying the inductive hypothesis to $(u, z', z'')$, let $z$ be a meet of $z'$ and $z''$. We leave it to the reader to check that $z$ is a meet of $x$ and $y$. Solution 2 The proof is by induction on $N$. Say my game begins $r \to s \to \cdots \to z$ and your game begins $r \to t \to \cdots$. (We don’t know yet that your game ends.) Let my first move be at vertex $i$ and yours at vertex $j$. The case $i=j$ is an immediate induction, so let’s assume $i eq j$. We know that $r_i$ and $r_j <0$. My brother and your sister come to join us. Here is how they play. If $i$ and $j$ are not adjacent, my brother starts off playing at $i$, then at $j$ and your sister starts of at $j$ and then at $i$. So, after two moves, both he and she reach the same configuration $u$. If, on the other hand, $i$ is adjacent to $j$, then my brother starts with $iji$ and your sister starts $jij$. In three moves, they have again reached the same configuration. (Exercise! Remember to check that all vertices which are played are in fact negative at the time.) Call this configuration $u$. After this, my brother plays in any manner he wishes. By induction, after he moves to $s$, he will make $N-1$ more moves and end at $z$. Your sister also plays in any manner she wishes. By induction between her and my brother, after she gets to $u$, she will make either $N-2$ or $N-3$ more moves and end at $z$, having made $N$ moves in total. Finally, we apply the induction hypothesis to you and your sister. After you both move to $t$, you will make $N-1$ more moves, ending at $z$. So we all get to $z$ in $N$ moves. I hope it is clear that Solutions 1 and 2 have the same inductive structure, although the details differ. The general strategy goes as follows: suppose we have some process which goes from one state to another. We have two different paths, $r \to s \to \cdots \to x$ and $r \to t \to \cdots \to y$ that start at the same state, and we want to show that, in some sense, they come together again. In the figure above, we are presented with the black structure, and we need to show that the paths rejoin. We will do this by adding the blue structure. First, make a careful analysis of the case where the two paths have length $1$, and prove your result in this case. Now, using your analysis from the first step, build paths $r \to s \to \cdots \to u$ and $r \to t \to \cdots \to u$ which come together in the required way. Inductively, bring the paths $s \to x$ and $s \to u$ together at some $z'$; bring the paths $t \to u$ and $t \to y$ together at some $z''$; finally, bring $u \to z'$ and $u \to z''$ together at some $z$. Often, as in Solution 1, the starting point $r$ is trivial in the most important application. But bringing it into the problem allows us to apply this inductive structure. One of the main difficulties is figuring out what to induct on. Roughly, one wants to use the length of the paths, but this may not be precisely right. There are many good papers on the diamond lemma, but they tend to focus on applications to one particular field, and call it by different names. Here are some examples of the Diamond Lemma in Grobner bases, noncommutative rings, braid groups (chapter 14, see figure 12!), lattices (lemma 2.1), anti-matroids (lemma 2.6). Solution 3 (a sketch) Let’s talk about Puzzle 3. This is typical of applications to ring theory, and there are many subtleties which are particular to this context. I would like to refer to Bergmann’s superb paper for the details but, sadly, it is not publicly available online. For those without academic access, the best reference I can find online is Wenfeng Ge’s masters thesis. Let’s call $E^i F^j G^k$ a standard monomial, and a sum of standard monomials a standard polynomial. Let the states of our system be formal noncommutative polynomials in $E$, $F$ and $G$. Let our operations be finding a term of the form $FE$, $GE$ or $GF$, and replacing it by $EF+G$, $EG-F$ or $FG+E$ respectively. So the states where we can perform no operations are precisely the standard polynomials. We want to show that, starting with any polynomial $r$, this process will terminate and, if we perform the process in two different ways, $r \to \cdots \to x$ and $r \to \cdots \to y$, then $x=y$. I’ll ignore termination, which is the easier question, to focus on the latter issue. A better way to phrase our claim is that if we have any two paths $r \to \cdots \to x$ and $r \to \cdots \to y$, which may not end at standard polynomials, then we can join them together into two paths $r \to \cdots x \to \cdots \to z$ and $r \to \cdots y \to \cdots \to z$ which have a common endpoint. This trick is frequently useful: by phrasing our claim to apply to more paths, we make induction easier. We start with a careful analysis of the case of paths of length 1. Say we have $r \to s$ and $r \to t$. If the two changes happen in different monomials, or if they happen in nonoverlapping parts of the same monomial, then we can just make both changes in the two possible orders to get to the same destination in two steps. So the only interesting case is two overlapping changes. For example, $GFE \to (FG+E)E$ and $GFE \to G(EF+G)$. In this case, we make the following moves The reader familiar with Lie algebras might like to see how this computation works in a general universal enveloping algebra; it’s a bit more complicated because the quadratic terms may not be The structure of the proof is now the same. Of course, we have to figure out what to induct on, and that’s a little subtle. But the worse issue is the following: Suppose our starting state is $GFE + FGE$ and our first move was to $E^2$ on one path, and to $GEF + G^2 + FGE$ on the other. According to the rubric above, we should move $E^2 \to FEG - F^2 + E^2 - FGE$. But, as we’ve defined things, this isn’t a legal move, because there is no $GE$ to replace in $E^2$. This possibility of terms cancelling is a major nuisance; I leave it to Bergmann to explain how to fix it. Solution 4 You do it! The diamond lemma comes up under other names as well, e.g., “confluence” of normalization/reduction algorithms, the “Church-Rosser property”. Mac Lane’s proof of the coherence theorem for monoidal categories (as given in Categories for the Working Mathematician) is a classic illustration of the technique. There are many good papers on the diamond lemma, but they tend to focus on applications to one particular field, and call it by different names. There is at least one paper trying to unify the story (http://arxiv.org/abs/0712.1142); it is not written as thorough as it could be though. Also (in a pitiful attempt of self-advertising) let me mention this paper as an instance of a yet another Diamond Lemma – it is essentially a paper telling its reader about one good definition, and about what consequences a good definition can have :-) Is the Vect extension of the diamond lemma still the diamond lemma? I would hope so, and David endorses this usage, but I don’t know if it is entirely standard. I first encountered the diamond lemma in Milnor’s proof of unique factorization of 3-manifolds. I have used similar arguments in 3-manifold topology on a few occasions. Of course, the linear version of course also shows up a lot in skein theories for quantum topological invariants. The Jordan-Holder theorem looks a lot easier to me. You don’t need induction, but can write down the whole diamond at once, by intersecting the two filtrations, to get a diamond-shaped poset. @Ben Wieland: I think you are right, although even then I think that the diamond proof will be a little shorter than dealing with the whole poset at once. But I also don’t mind having some examples in an expository post which don’t need the full strength of the technique I’m explaining. @Greg Kuperberg: I’m not sure exactly what you mean by the Vect extension of the diamond lemma, but it seems to me Bergmann’s paper would be an example of it, and his title is “The Diamond Lemma for Ring Theory.” @Greg Kuperberg: I’m not sure exactly what you mean by the Vect extension of the diamond lemma, but it seems to me Bergmann’s paper would be an example of it, and his title is “The Diamond Lemma for Ring Theory.” Hi David. I should just have said, a linear extension of the diamond lemma. You’re right that George Bergman’s paper (one n, by the way) gives an example of such an extended diamond lemma. It is logically more general than the original diamond lemma, because each case can be completed to a linear combination of different diamonds. The PBW theorem is a classic example of this, of course. Maybe it would be interesting to formulate a category-theoretic diamond lemma that captures both the unique-completion and the linear-combination-of-completions cases. Now that I think of it, here is yet another interpretation that I noticed recently of the linearly extended diamond lemma, as it arises in the PBW theorem and other cases. The PBW theorem says that a certain filtered vector space, U(g), is the same size as its associated graded. The filtered vector space comes from another filtered vector space, T(g), quotiented by a filtered set of relations. The diamond lemma is equivalent to the statement that spectral sequence of the resolution converges at E_1. It is always equivalent to that statement. So it could be fair to say that the theory of spectral sequences co-opts the diamond lemma. Maybe that is a good way to say it in category theory terms. [...] unique? This essentially comes down to what the guys at the Secret Blogging Seminar call the diamond lemma: if we have two different ways of partitioning a root system, then there is some common way of [...] I’m bothered by the first example: it seems non-trivial to construct families of more than three great circles so that every complementary region is a triangle. […] of which has applications such as the Poincaré–Birkhoff–Witt theorem; see, for example, this blog post by Ben […] Sorry comments are closed for this entry Recent Comments Erka on Course on categorical act… Qiaochu Yuan on The many principles of conserv… David Roberts on Australian Research Council jo… David Roberts on Australian Research Council jo… Elsevier maths journ… on Mathematics Literature Project…
{"url":"http://sbseminar.wordpress.com/2009/11/20/the-diamond-lemma/","timestamp":"2014-04-18T18:10:43Z","content_type":null,"content_length":"106467","record_id":"<urn:uuid:7f4d1499-cc70-4383-b861-ce13ad004fa1>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
Generating Gaussian Random Numbers let X~N(mu,sigma) then z=((x-mu)/sigma)~N(0,1) this is basic normal dist stuff now lets do it backwards ie let z~N(0,1) then X=(sigma*z + mu)~N(mu,sigma) so there ya go. with a N(0,1) generator to make it N(mu, sigma) transform it by sigma*X + mu. you can prove this without too much trouble by writing E[X] as integral(x*p(x)) (for continuous, obviously sum for discrete) and showing E[aX+b] = aE[X] + b and then considering variance as E[X^2] - also your code is wrong, it works by writing the normal distribution integral in 2d space and picking a point(v1,v2) in the unit circle which is why you reject if v1^2 + v2^2 > 1. so v1,v2 should lie between -1 and 1 which is why the code calls UnitRV() which should return a value between 0 and 1 however your UnitRV returns a value between -1 and 1 putting v1 and v2 in the range -3 to 1 which shouldnt effect your numbers (i think) but will require more numbers to be generated per each (x1,x2) ive just looked at the maths for generating a N(mu,sigma) distribution closely instead of guessing(as i did originally), you have to do a translated elliptical substitution (x-a=q*cos t, y-b=r*sin t rather than the much simpler polar subsition x=r*cos t,y=r*sin t) into the double integral and its far too much work when you can just multiply by variance and add mean to the resulting N(0,1) **i deleted the paragraph that was here cos it was a load of nonsense my masters project is based on random number generation so i do know quite a bit about it and have probably assumed too much knowledge on your part, if theres anything you dont get then please ask. Last edited by kev82; 08-10-2004 at 06:17 PM.
{"url":"http://www.linuxquestions.org/questions/programming-9/generating-gaussian-random-numbers-215942/","timestamp":"2014-04-18T12:10:55Z","content_type":null,"content_length":"55946","record_id":"<urn:uuid:7b184b38-d3ee-4dda-a8e8-7984ccfb9168>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
tion Photoshop plugin Now you can easily draw the golden ratio/golden section, or other golden proportion, as an aid to composition. This plugin can draw the golden section, golden spiral and the golden triangles. In addition it also can draw the harmonious triangles and the rule of thirds. Ships with a free golden section calculator. Now you can easily draw the golden ratio/golden section, or other golden proportion, as an aid to composition. This plugin can draw the golden section, golden spiral and the golden triangles. In addition it also can draw the harmonious triangles and the rule of thirds. Ships with a free golden section calculator. Photoshop plugin to get the golden section into your composition Now you can easily draw the golden section, or other golden proportion, as an aid to composition. This plugin can draw the golden section, golden spiral and the golden triangles. In addition it also can draw the harmonious triangles and the rule of thirds - based on composition through identity. The best way to use the plugin is to output the drawing to a transparent top layer in Photoshop you can move around for cropping. Ships with a free golden section calculator. Windows version is for all versions of Photoshop, Elements, Fireworks, Paint Shop Pro, Corel Draw, Illustrator and other software that supports Photoshop plugins. See list. Mac version is for Photoshop CS3 and later and Elements and is universal only. Now you can easily draw the golden section/golden ratio, or other golden proportion, as an aid to composition. This plugin can draw the golden section, golden spiral and the golden triangles. In addition it also can draw the harmonious triangles and the rule of thirds. Ships with a free golden section calculator. Now you can easily draw the golden section/golden ratio, or other golden proportion, as an aid to composition. This plugin can draw the golden section, golden spiral and the golden triangles. In addition it also can draw the harmonious triangles and the rule of thirds. Ships with a free golden section calculator. Now you can easily draw the golden ratio or golden section, or other golden proportion, as an aid to composition. This plugin can draw the golden section, golden spiral and the golden triangles. In addition it also can draw the harmonious triangles and the rule of thirds. Ships with a free golden section calculator. Now you can easily draw the golden ratio/golden section, or other golden proportion, as an aid to composition. This plugin can draw the golden section, golden spiral and the golden triangles. In addition it also can draw the harmonious triangles and the rule of thirds. Ships with a free golden section calculator. Now you can easily draw the golden ratio or golden section, or other golden proportion, as an aid to composition. This plugin can draw the golden section, golden spiral and the golden triangles. In addition it also can draw the harmonious triangles and the rule of thirds. Ships with a free golden section calculator. Now you can easily draw the golden ratio/golden section, or other golden proportion, as an aid to composition. This plugin can draw the golden section, golden spiral and the golden triangles. In addition it also can draw the harmonious triangles and the rule of thirds. Ships with a free golden section calculator. Use of the golden section Photoshop plugin Transparent The best way to use the plugin is to create a transparent layer on the image you want to edit. In Photoshop you do this with Shift-Ctl-N. Its color should be None, which is layer transparent. To get a resizable transparent golden section in Photoshop... 1. First create a new empty layer in Photoshop. 2. In the plugin draw the sections or divisions you want onto this transparent layer filling the entire image. 3. apply. 4. In Photoshop, with the golden section layer still active, select 'Free Transform' from the menu 'Edit' to resize the golden section and move it around the image. What are the Divine Proportions? Golden The divine proportions and the golden sections are two expressions for the same thing. Basically it is the division of a line in two sections, where the ratio between the smallest sections section and the largest section is identical to the ratio between the largest section and the entire length of the line. In other words A/B = B/(A+B). The ratio is about 1/1.618. One interesting consequence of this ratio is that if you have a rectangle where the sides have the golden ratio, then you can divide the rectangle into a square and a rectangle, where the new rectangle also has the golden ratio between its sides. This can go on ad infinitum and is known as golden spiral sections. You can use this to construct an equiangular spiral, known as the golden spiral, where the size of the revolutions grow with the golden ratio. If you want a more dynamic composition than the simple golden sections, then you can construct golden triangles as shown in the illustration. Below you will see an example of the harmonious triangles. If your image has the divine proportion, then golden triangles and harmonious triangles will be identical. Harmonious Harmonious divisions rely on the principle of similarity. The most common is the rule of thirds where you simply divide a line into three equal part. This is often misnamed as the golden section. Another harmonious division is the division of a rectangle into equiangular harmonious triangles based on the diagonal. When the proportions of the rectangle are identical to the golden proportions, then the harmonious triangles will of course be identical to the golden triangles. Midpoints Diagonals offer a nice compositional grid. Now you can easily draw the golden section/golden ratio, or other golden proportion, as an aid to composition. This plugin can draw the golden section, golden spiral and the golden triangles. In addition it also can draw the harmonious triangles and the rule of thirds. Ships with a free golden section calculator. Now you can easily draw the golden ratio/golden section, or other golden proportion, as an aid to composition. This plugin can draw the golden section, golden spiral and the golden triangles. In addition it also can draw the harmonious triangles and the rule of thirds. Ships with a free golden section calculator. The inscribed A and V. Now you can easily draw the golden section / golden ratio, or other golden proportion, as an aid to composition. This plugin can draw the golden section, golden spiral and the golden triangles. In addition it also can draw the harmonious triangles and the rule of thirds. Ships with a free golden section calculator. Now you can easily draw the golden ratio / golden section, or other golden proportion, as an aid to composition. This plugin can draw the golden section, golden spiral and the golden triangles. In addition it also can draw the harmonious triangles and the rule of thirds. Ships with a free golden section calculator. The inscribed > and < . Now you can easily draw the golden section / golden ratio, or other golden proportion, as an aid to composition. This plugin can draw the golden section, golden spiral and the golden triangles. In addition it also can draw the harmonious triangles and the rule of thirds. Ships with a free golden section calculator. Now you can easily draw the golden section / golden ratio, or other golden proportion, as an aid to composition. This plugin can draw the golden section, golden spiral and the golden triangles. In addition it also can draw the harmonious triangles and the rule of thirds. Ships with a free golden section calculator. Fibonacci Leonard Fibonacci discovered that if you have a sequence of numbers beginning with 0,1, where the next number in line is the sum of the previous two, then the sequence will progress sequence towards a more and more exact representation of the golden ratio. The fibonacci sequence is 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, etc. Oddly enough nature tends to organize growth and limbs according to this sequence. F.ex. the ratio between the length of limbs of your fingers is the golden ratio. The ratio between the length of your nose and the distance from the bottom of the chin to the bottom of the nose is the golden ratio. The spiral growth of crustaceans follows the golden spiral. The divine proportions are an in-built (or in-grained) aesthetic parameter we judge beauty by. The golden section plugins four main control groups Flip and rotate here will also apply to the harmonious divisions. Harmonious Now you can easily draw the golden section / golden ratio, or other golden proportion, as an aid to composition. This Photoshop plugin can draw the golden section, golden spiral and divisions and the golden triangles. In addition it also can draw the harmonious triangles and the rule of thirds. Ships with a free golden section calculator. Now you can easily draw the golden midpoints ratio / golden section, or other golden proportion, as an aid to composition. This Photoshop plugin can draw the golden section, golden spiral and the golden triangles. In addition it also can draw the harmonious triangles and the rule of thirds. Ships with a free golden section calculator. Size Size enables you to define a rectangle within the limits of the image. This is useful if you need to crop the image to a specific size and want to have some help. The buttons let you set height from a specified width or width from a specified height so that the size is the golden ratio. In each case you have two buttons to create either a landscape or portrait format rectangle from a given side. If you left-click the preview, the upper left corner of the rectangle will be moved to where you click. To reset the position to the top left corner of the image, click Pos 0. Line Line lets you set line thickness and line darkness. If you have a large image and have to downscale it, then a a line that's only one pixel may become invisible. You can't use colored lines, but can change it from black to white through any gray. You will rarely want to thicken the frame, but the option is there. Classic examples - Vermeer Vermeer used the basic golden sections to arrange the He used the golden triangles to create a pyramid By contructing the focalpoint of a golden spiral, he located the masses. containing the two persons, thus bringing them into main dynamic focus of the composition: the girls hands and the relationship glass of wine. In fact Vermeer subdivided minutely and found the exact Vermeer also used the symmetrical focalpoint of the It exactly locates the placement of the open window with the faint space occupied by the glass and the edges of the hands rotated spiral. reflection of a woman outside. holding it. Vermeer also divided the picture into two halfs along the He did the same with the horisontal halfs. The Of course there are other compositional principles, that have been vertical centre. He then divided each half into the golden horisontal middle is the mans eye-hight. in use for centuries. Diagonals and diagonals to midpoints are sections. classics. Aids and inspiration for composing The golden triangles are very useful for creating dynamic images where the diagonal balances the two angles. Now you can easily draw the golden section / golden ratio, or other golden proportion, as an aid to composition. This plugin can draw the golden section, golden spiral and the golden triangles. In addition it also can draw the harmonious triangles and the rule of thirds. Ships with a free golden section calculator. Now you can easily draw the golden section / Golden golden ratio, or other golden proportion, as an aid to composition. This Photoshop plugin can draw the golden section, golden spiral and the golden triangles. In addition it also can triangle in draw the harmonious triangles and the rule of thirds. Ships with a free golden section calculator. close ups Rotated golden Here the photographer composed along the upward left diagonal. The two golden triangles create a dynamic motion in the opposite direction balancing this. One could also use the triangles following crop: Golden spiral Here the golden triangles balance the diagonal and the golden spiral enforces the motion upwards right by placing focus on the dramatic point: the index finger pushing up the chin for dynamic while at the same time taking the motion from the hand entering at the lower right corner.
{"url":"http://www.powerretouche.com/Divine_proportion_tutorial.htm","timestamp":"2014-04-21T09:37:14Z","content_type":null,"content_length":"39341","record_id":"<urn:uuid:e6a8f395-3156-4fe8-bc06-012a7f7412a5>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
Surely You Must Be Joking, Mr. Mahajan! The "Street-Fighting Mathematician" Answers Your Questions We recently solicited your questions for Sanjoy Mahajan, author of Street-Fighting Mathematics. As usual, you came up with some excellent questions, about everything from the methodology of educated guessing to Mahajan’s business model (the PDF of his book is available for free). And Mahajan’s answers are terrific — thorough, thoughtful, and even funny (you can tell he likes Richard Feynman even before he says so). By the end of this Q&A you’ll know a bit more about the Gulf Oil spill, the economics of publishing, and the relationship between stair-climbing and eating jelly doughnuts. Thanks to all of you for the questions and especially to Mahajan for the great answers. How does one improve his ability to make educated guesses? Is it just something one is born with? – Bobby In college I took a class where we figured out if we tend to overestimate or underestimate when we make an educated guess. Then from that we were able to find out the percent that we tended to over or underestimate and theoretically this would lead us to make more accurate educated guesses. Is this the same process you use? And is it the same one the book recommends to be a better educated guesser? – Brian What are the most important factors that improve estimating? For example, I would assume simple geometry is one (understanding area, volume, proportion), but what others are crucial? – Gary Talent, it now seems, is made rather than born. That is the conclusion of a large body of research on world-class experts, from grandmaster chess players and concert pianists to championship golfers. This research, the subject of a previous interview in this blog, “The Science of Genius: A Q&A With Author David Shenk,” represents one of the most exciting conclusions of psychology research—for it implies that we can dramatically increase our skill in many areas, including in educated guessing. This research has identified a particular type of practice — deliberate practice — as essential for improving expertise. In deliberate practice, one receives specific feedback on what to improve, whether through self-study or with a coach, and uses that feedback to make targeted improvements. Deliberate practice is contrasted with the usual habits of practice, such as how I tried to improve at chess: I simply played lots of chess games, with the result that at age 41 I am hardly better at chess than at age 11. For educated guessing, a powerful type of deliberate practice is first to make educated guesses and then to check the intermediate steps in the calculation to find where in particular the estimates were most inaccurate. Furthermore, in teaching educated guessing, I find that developing skill in it requires mastering a repertoire of reasoning tools, just as a carpenter masters a repertoire of physical tools. Because the reasoning tools seem to be so important, I organize my teaching around the reasoning tools. For applied mathematics, which is the focus of Street-Fighting Mathematics, the reasoning tools for getting quick answers are as follows: dimensional analysis, easy cases, lumping, pictorial proofs, successive approximation, and reasoning by analogy. Science and engineering estimations are the subject of my next book, Street-Fighting Tools for Science and Engineering; there I find nine tools to be particularly useful (several of which overlap with those for mathematics). Here is an example that illustrates the most important science-and-engineering estimation tool—divide-and-conquer reasoning—and shows how to build deliberate practice into one’s thinking. Suppose you want to estimate how much oil the U.S. consumes (in barrels per year). Without expertise in oil economics, a straight-off guess (before making a detailed estimate) is pretty much a wild guess. My wild guess is that anywhere from 1 million to 1 trillion barrels per year sounds reasonable. Now it’s time to refine that wide estimate by using divide-and-conquer reasoning. Here, I’ll divide the estimate into two parts: (1) the amount of oil consumed by passenger vehicles (cars, pickup trucks, SUVs, etc.); and (2) the ratio of all oil consumption to passenger-vehicle consumption. The final estimate is the product of these two estimates. However, the first part is still too hard for me to guess accurately. Thus, I refine it into simpler factors: (1) the number of passenger vehicles in the U.S; (2) the miles each vehicle is driven per year; (3) the typical gas mileage of a passenger vehicle (miles per gallon); and (4) the number of gallons in a barrel. Here are my estimates for these four subparts: The number of passenger vehicles in the United States: The U.S. population is 300 million, and maybe there is one vehicle per person, so there are probably 300 million (passenger) vehicles. The miles each vehicle is driven per year: from looking for a used car many years ago, I remember that cars driven 10,000 miles per year or less were considered “low mileage.” So I’ll estimate that a typical passenger vehicle is driven 15,000 miles per year. The typical gas mileage of a passenger vehicle (miles per gallon): small cars get maybe 40 miles per gallon on the highway; SUVs get maybe 15 miles per gallon. I’ll use 25 miles per gallon as a typical value. The number of gallons in a barrel: I’ll pretend (to give a spot for deliberate practice later) that I have no idea about this value, and I make a semi-wild guess of 500 gallons per barrel. Having divided, I now hope to conquer by combining all the subestimates: 300 million vehicles * 15,000 miles/vehicle/year * 1 gallon/25 miles * 1 barrel/500 gallons = 360 million barrels per year (for passenger vehicles). I need one more estimate: the ratio of all oil consumption to passenger-vehicle consumption. On the one hand, passenger vehicles are an important oil user, so the ratio should be near 1. On the other hand, I can think of many other important uses (planes, trains, heating, electricity generation, fertilizer, and plastics), so the ratio should be much higher. My compromise is that passenger vehicles use 50 percent of all oil consumed, so the ratio in question is 2. My final estimate for U.S. oil consumption is therefore 720 million barrels per year. Now it’s time to check the final estimate and, in order to apply deliberate practice, to check the intermediate steps. The actual oil consumption is 19.5 million barrels per day, which is 7.2 billion barrels per year (2008 estimate, from the CIA World Factbook). My estimate is very low—by a factor of 10. That error is too large for me, even with my low standards. It’s time to check the individual estimates in order to find out what went wrong. The number of passenger vehicles is 254 million (2007 data, U.S. Bureau of Transportation Statistics). My estimate of 300 million is too high but only by 20 percent. The number of miles driven per vehicle per year is roughly 11,500 (2001 estimate, U.S. Department of Transportation, National Household Travel Survey). My estimate of 15,000 is roughly 30 percent too The typical gas mileage is roughly 20 miles per gallon (2007 data, U.S. Bureau of Transportation Statistics). My estimate of 25 miles per gallon is high, but by only 20 percent. Furthermore, this error compensates for a portion of the preceding errors. That compensation is an additional benefit of divide-and-conquer reasoning: by splitting the problem into many parts, you give the various errors a chance to cancel each other. The number of gallons per barrel is 42. My estimate (really, guess) of 500 gallons per barrel is far too high, by roughly a factor of 10. Aha, that explains the discrepancy between my estimate and the true oil consumption. (My estimate of the ratio between all oil consumption and passenger-vehicle consumption turns out to be quite accurate.) That is the specific feedback on where the estimation can be improved. To use that feedback for deliberate practice, I ask myself, “How could I estimate the volume of a barrel?” Two methods come to First, I’ve seen barrels of tar at roadside construction sites. Perhaps they are oil barrels. Their volume can be divided into three factors (divide and conquer again!): depth, width, and height. These barrels are about 3 feet or 1 meter high. Their width and depth are about 1.5 feet or 0.5 meters. Thus, the volume is 1 meter * 0.5 meters * 0.5 meters (pretending that the barrels are square instead of circular in cross section, which is equivalent to pretending that pi equals 4). The product comes to 0.25 cubic meters or 250 liters or roughly 65 gallons. That estimate, although higher than the true value of 42 gallons, is a significant improvement over my guess of 500 gallons. Here is the second method, which MIT students taught me: Oil costs about about $80 per barrel, and gasoline costs about $2 per gallon, so a barrel contains about 40 gallons! How many barrels of crude has been leaking into the Gulf of Mexico every day? – jimi I regret that I have no special knowledge of the rate at which oil has been spilling into the gulf, beyond what I read in the newspapers. But the question brings up two issues applicable to estimation in general. The first issue is rounding and accuracy. In a natural-history museum, a guide was showing the visitors an ancient insect preserved in amber. “How old is that insect?” asked a visitor. “1,000,007 years,” said the guide. How can the age be known so precisely, the visitors wondered. “Because it was 1 million years old when I started here 7 years ago.” Several early news reports about the spill had a similar flaw. Some reported a flow estimate of 5,000 barrels per day. However, other contemporaneous reports quoted the flow as 210,000 gallons per day. Because a barrel (of oil) is 42 gallons, the two numbers are numerically equivalent. However, they are psychologically different. The gallons-per-day figure of 210,000 includes a second nonzero digit (the “1″ in the “21″), implying that it is based on quite accurate measurements. The number 210,000 suggests an accuracy of a few percent. In contrast, the 5,000 (for barrels per day) suggests merely that the “5″ is somewhat reliable but promises little more. The suggested accuracy is perhaps 20 percent—a much more plausible conclusion, especially as some current flow estimates range up to 50,000 barrels per day. The second issue is how to make a quantity meaningful. Quantities with dimensions (such as dimensions of barrels or gallons per day) are not meaningful in themselves. Is 5,000 barrels a day large or small? It’s hard to know. Even harder to decide: Is 210,000 gallons a day large or small? To make it almost impossible to decide, express the time in years: Is 80 million gallons a year a large or a small amount? Because such quantities have dimensions, they do not receive a meaning until we compare them with a related quantity that has the same dimensions. To that end, compare the 5,000 barrels a day (or roughly 200,000 gallons per day) with another famous oil spill, the Exxon Valdez in Prince William Sound, Alaska. The supertanker spilled (conservatively) 11 million gallons of oil. Call it 10 million gallons. That means every 50 days (or roughly 2 months), the Gulf receives another Exxon Valdez worth of oil. (That comparison is based on using 5,000 barrels a day instead of current and usually higher flow estimates.) What about the problem of bias in “street math”? It seems to me that rigor is the safeguard against bias hijacking estimates for manipulative ends — I remember during the presidential campaign that estimates of Palin rally attendance became controversial along these lines of street estimates — and also, much more troubling, is the ‘guesstimation’ that went on in the financial industry, where the banks colluded with ratings agency guesstimates on securities which were rigged to overinflate prices, thereby exploiting investor bias during a boom time (becoming blinded to the downside)? – Street-fighting reasoning can help develop financial judgment, a quality useful for spotting financial charlatans. The street-fighting technique of “easy cases” can, for example, help estimate loan payments. A key idea of easy-cases reasoning is to think about the extreme cases of a problem; these cases are usually easiest to understand, and their analysis helps build intuitions and ways of Among loans, one extreme is the short-term or zero-interest loan. Near this extreme is a 10-year loan at 6 percent annual interest. (All loans are compounded monthly and repaid in equal monthly payments.) For concreteness, imagine that the principal is $120,000, in order to make the arithmetic easier. The approximate, easy-cases reasoning is as follows: if the interest rate were zero (the fully extreme case), the 120 payments would each be 1/120th of the principal, or $1,000. Even when the interest rate is not zero, but still small, the principal-only payment of $1,000 per month will be the main portion of the payment. How can you tell whether the interest rate is small? Multiply the interest rate times the loan term and compare it to 100 percent. Here, 6 percent per year times 10 years gives 60 percent, which is somewhat smaller than 1, so the approximation that the payment is mostly principal is not too bad. To improve the preceding zeroth approximation, estimate the interest. In the zero-interest (or all-principal) approximation, the principal declines at a constant rate from $120,000 to $0; therefore, the average principal balance is $60,000. A 6 percent annual interest means a 0.5 percent monthly interest, and 0.5 percent of $60,000 is $300. Thus, the interest will be roughly $300 per month—making the total payment $1.300 per month. (The actual payment is very close: $1,332 per month.) The other extreme is the long-term loan (or, if the principal is negative, an annuity). For a loan near this extreme, consider the same principal of $120,000 but loaned at 12 percent annual interest for 30 years. Now the interest rate times the loan term is 360 percent, significantly larger than the 100 percent border between the extremes. In this extreme, the payments are mostly interest. The monthly interest is 1 percent, so the monthly payment is 1 percent of $120,000, or $1,200. This estimate is the zeroth approximation. It is already quite accurate—the true payment is $1,234 per For the next approximation, I’ll just state the procedure without giving a proof. Because the zeroth approximation accounted only for the interest, the correction should increase the payment. The increase is estimated as follows: Multiply the interest rate times the loan period. Here, the result is 360 percent. Convert it to a number by dividing it by 100 percent. Here, the result is 3.6. Raise e (the number 2.718…) to that number, and then take the reciprocal. Here, that means computing 1/e^3.6, which is roughly 0.027. Convert that number to a percentage by multiplying it by 100 percent. Here, the result is 2.7 percent. Increase the interest-only estimate by this amount. Here that gives $1,232 (instead of the true payment of $1,234). This corrected estimate is extremely accurate! As another example with a more realistic interest rate, imagine the same principal ($120,000) loaned at 6 percent annual interest for 30 years (a typical fixed-rate mortgage in the U.S.). The interest-only payment estimate is $600 per month. The interest rate times the loan period is 180 percent or 1.8. The reciprocal of e^1.8 is approximately 0.17 or 17 percent. Thus, we increase the $600 per month by 17 percent, giving $702. Not bad: The true payment is $720. Do authors pay you to advertise their books here? – Imad Qureshi Please ballpark the amount of money you will make from this book. Does the fact that it is available in several free, pre-publication forms online affect this amount, and if so by how much? Would making its final form available free electronically affect it further, and if so by how much? – Quin How much revenue did you lose by having this incredible visibility moment without having an available Kindle or iPad book or other e-copy? How much of that is recoverable? – Sandy As far as I know, authors do not pay to have books featured here. [Ed.: No, they don't; but thanks for the idea!] Those decisions rightly belong with the editors, based on what they find interesting and think will interest readers. The editors then ask a prospective author if he or she would answer questions about the book and related topics. I was glad to do so. I did not know whether it would increase sales, for the publisher and I have made the book’s PDF file freely available. You may wonder why we did such a crazy thing. I am fortunate to have a job where I get paid to develop and share knowledge. I find inspiration in these words of Thomas Jefferson: “He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.” (Letter to Isaac McPherson, 13 August 1813.) I therefore wanted Street-Fighting Mathematics to be freely available. That required an open-minded publisher, for this mode of publishing entails risk. Publishing includes fixed costs such as copy-editing, editorial work, typesetting, publicity, and setting up the printing press. These costs are incurred even if no books are sold. But what if everyone reads the book online without buying the printed version? The publisher loses its shirt! MIT Press gladly agreed to take the risk. They published the book, in print and online, under a Creative Commons NonCommercial ShareAlike license—the same license used by MIT’s OpenCourseWare. Roughly speaking, the license gives everyone permission to share verbatim or modified copies non-commercially. Meanwhile, MIT Press retains the commercial rights and sells a nicely printed and bound hard copy. Using the Creative Commons license and thereby making the PDF file freely available may have cost revenue. On the other hand, it may increase the book’s visibility and increase revenue. It also makes buyers of the print version more likely to be satisfied, because they can read it for free online (and even print it if they choose) and decide if it is what they are looking for. MIT Press’s risk was lessened by modern methods of typesetting. In the old days, an author handed the publisher a hand- or typewritten manuscript. The publisher turned the manuscript into page proofs. The author corrected the proofs, and with luck all the corrections were correctly incorporated into the metal blocks of type, resulting in a typeset book. The process was expensive, and mathematical text, called “penalty copy,” was especially expensive. That process has changed greatly. The entire book (except for the cover) was typeset by me, and I gave it to MIT Press as a PDF file. They copy-edited it, I entered the changes into my files, and produced the final PDF file. If I wanted page proofs, I just printed my PDF file. Most of the typesetting expense has disappeared, and the typesetting is now often done entirely by the authors. This development reduces the fixed costs of publishing, and makes it easier for a publisher to risk using a free license. In my case, I typeset the book using ConTeXt, which is based on the TeX typesetting system. TeX is one of the earliest pieces of free software. I find it fitting that free software helped enable, at least in this case, the creation and distribution of freely available content. I know this makes me a philistine, but of what value is pi over, say, establishing that pi is exactly 3.14? That is, what would be lost or gained if instead of an endless number, we established that pi is this much and no more or less? Would our circles be clearly distorted? Would some mathematical principles tumble? – Wondering The numerical error in declaring pi to be 3.14 is small: about 0.05 percent. To make that amount concrete, imagine a modified circle—one whose circumference is 3.14 (instead of pi) times its diameter. To make this new kind of circle, snip out a tiny piece of a true circle. If the modified circle’s diameter is 6 inches (15 centimeters), the snipped segment measures one-quarter of a millimeter—the thickness of two or three sheets of paper. Depending on the problem, this error may be small enough to ignore. However, the conceptual problems with declaring pi to be 3.14 are more significant. It would make mathematics inconsistent by using conflicting definitions of pi: (1) as the ratio of circumference to diameter; and (2) as 3.14 exactly. Every area of mathematics that uses pi (differential equations, number theory, and much else) would now be inconsistent and the formerly clean statements would become incorrect or ambiguous. That said, in the first stage of almost any analysis, one can profitably use pi equals 3.14 or even more extreme approximations, including pi equals 3. Imagine that you have a sheet of regular paper to turn into a cylindrical cover for a pencil holder, and the pencil holder has a diameter of about 5 inches. The approximation that pi equals 3 is enough to show that the 11-inch-long sheet of paper cannot wrap around the pencil holder, because 3*5 inches = 15 inches. Using more decimal places—for example, 3.1 or 3.14—wouldn’t change the circumference enough to make the project possible. Using approximations such as pi equals 3 or 3.14 (or sometimes pi equals 4 or even 1!) is very useful in the early stages of a project or analysis. And that’s the purpose of street-fighting reasoning methods—to help you start, for often that is the hardest part. The motto: don’t just stand there, estimate something. Make enough assumptions to get started. You cannot lose by trying! Enrico Fermi was famous for his ‘back-of-the-envelope’ calculations, including estimating the yield of nuclear bomb tests where he would drop torn paper and observe how far they were displaced. So, my question: aside from chaotic and supercritical systems, are there any other areas in which mathematical guesstimates are not useful? – Lystraeus These are Fermi questions—made famous by the physicist Enrico Fermi: one question he asked physics students was how many piano tuners are there in Chicago? He also estimated the blast from an atom bomb by how far some scraps of paper he threw up in the air were displaced from his observation point. These are common types of questions in physics PhD qualifying exams. Do you give Fermi any credit? – Dr J Fermi was a master of this kind of analysis. The Nobel Prize-winning physicist Richard Feynman, who was himself a master of it, said (in his book, Surely You’re Joking, Mr. Feynman!) that Fermi was even better at it than he was. How did Fermi become so skilled? Partly from the way he tried to understand physics. In the years just following World War II, physics underwent huge changes, many due to the development of quantum electrodynamics. Because Europe was physically devastated by the war, and because many of the leading physicists fled Europe for America, America became the scientific center of the world. Leading centers in America included Berkeley and Caltech on the West Coast and Princeton and Harvard on the East Coast. Conferences on both coasts meant lots of cross-country travel—which in those days meant taking the train or driving. On those trips, Fermi would sit in the back of the car and think about physics. He would pick one area of physics and review in his mind all that he understood about it, and try to figure out ways of thinking that made the results obvious. Before the distractions of email, the Internet, and cell phones were invented, that meant days of concentration and deliberate practice (for more on deliberate practice, see the above answer to the questions about improving one’s educated guessing). One area that is unsuitable for approximations is computing the small difference of large numbers. Then, small errors in the large numbers turn into big changes in the difference. For example, showing that energy is conserved—important in the development of physics—usually means finding the difference between the energy that goes in and the energy that goes out. A slight error in either energy significantly changes their difference, and can change one’s conclusions about energy conservation. Does the cost of a precise calculation add commensurate value? – wild guess It depends! If a rough calculation indicates that a project is likely to be feasible, then a more precise calculation is worth doing. If the rough calculation indicates that the project is almost certainly infeasible, then a more precise calculation is probably wasted. For example, if a first, rough estimate of the cost to build a new bridge across the Hudson River comes up with “roughly $10 billion, give or take maybe a factor of 2,” yet the available funds are a few hundred million dollars, then there’s little point in refining the estimates with a detailed business plan. I work on the 7th floor. How many additional calories will I expend if, for the next four years, I take the stairs instead of the elevator? – Jordan Time to estimate! The energy required to raise an object—pretend it is me—to a height is the object’s mass times the earth’s gravitational strength (“g”) times the height. My mass is 60 kilograms; the strength of gravity is 10 meters per second; and 7 floors is roughly 20 meters. The required energy is their product, which is 12,000 Joules. This energy is, however, just the mechanical energy. Because the human “engine” is only about 25% efficient (internal-combustion engines are also about 25% efficient), the total energy required is a factor of 4 greater: 48,000 Joules. Each Calorie (with a capital C) is about 4,000 Joules, so the energy required to walk up the stairs is 12 Calories. I could use this value to answer the question, but it would give me just a very large number of Calories, and I would not immediately know whether that number is large or small. To make this energy more meaningful, I compare it against another relevant energy. A useful estimation fact: a moderate-sized jelly doughnut provides 1 million Joules or about 250 Calories. That would be enough to climb the stairs 20 times. Thus, one jelly doughnut provides enough energy to climb the stairs (every weekday) for a month. Equivalently, climbing the stairs for a month will burn off the Calories from one jelly donut. The distinction between calories (1 calorie is roughly 4 Joules) and Calories (1 Calorie is roughly 4,000 Joules) is sometimes ignored. The results are never pretty. One Internet diet plan that I saw many years ago recommended eating all kinds of candy and sugar because walking just a short distance would burn away the calories. The plan “worked” only because the computations of the energy provided by candy bars used Calories, whereas the computations of the energy required for walking used calories—and the distinction was ignored. The bogus factor of 1,000 thereby gained made the diet look easy. Leave A Comment Comments are moderated and generally will be posted if they are on-topic and not abusive. COMMENTS: 17 View All Comments » 1. “the strength of gravity is 10 meters per second” i believe you mean m/s^2, i.e. 10 N/kg-otherwise the units don’t work out. 2. Pretty interesting reading through the estimation examples. Also thanks for using my question! I’m definitely satisfied with the answer. 3. Great article, thanks! 4. Errr, there were so many other fudge factors floating around that I may have missed it, but you APPEAR to have assumed that the petroleum fraction of crude oil makes up 100% by volume….. □ I wondered about that too, but maybe the fraction of crude oil that is petroleum is marked up by a relatively constant $ amount which results in a ratio of crude cost/gallon:petroleum cost/ gallon that approximately equals 1:1 5. Ian, He got around it by simply estimating how many gallons of gasoline are in a barrel of oil. If it’s 42 gallons per barrel, then we can take total gasoline gallon consumption and divide by 42 to see how many barrels of oil. It’s just a gasoline –> barrel conversion. 6. Whoever wrote this OP is to be totally distrusted, evidenced by the fact that in speaking of Fermi, he cites West Coast and East Coast universities as the “leading centers” of atomic physics. HaHa! Enrico Fermi was the proud physicist who set off the first sustained nuclear reaction at the Non-West Coast, Non-East Coast University of Chicago. Strange that a person who doesn’t know that can take pride in performing “street math.” 7. Thanks, Bobby, but I’m not sure he knew he was doing that! It certainly wasn’t clear from his running commentary. But I followed by initial reaction a bit further. He’s also – as far as I can see – assuming that either all private cars use petroleum or they all use diesel (since petroleum and diesel are refined as different fractions, you can’t simply add the demand for both together if you want to arrive at demand for crude), and under “other uses” he’s lumping together uses which do add simply (commercial uses of petroleum) with uses which don’t (kerosene, heating oil, lubricating oil, raw materials for plastics….) I would say that demand for crude is determined by consumption of the dominant fraction, whichever that is. Once you’ve bought enough crude to satisfy that, all the other fractions essentially become by-products (think free plastic shopping bags). So the way to proceed, I believe, is 1) identify the dominant fraction as far as demand is concerned (either petroleum or diesel sound like a good bet); 2) estimate the number of private and commercial vehicles which use the fuel of your choice; 3) from that calculate the demand; 4) stop, don’t consider “other uses”. Now, proceeding in this way may well produce an even worse guess than Mr Mahajan’s! But I think that illustrates the real danger with fudge factors, which is that they can mask serious problems with the underlying model. In this case I believe that the demand is calculated in a straightforward but non-linear way. Mr Mahajan’s model of demand is decidedly linear. Not much of a problem with guesses about oil consumption of course, but just suppose somebody at, say, a credit rating agency made the same mistake…. 8. One barrel of crude yields a little under 20 gallons of gas on average. Any given refinery varies its “crack” as a function of type of crude, refinery characteristics, seasonal and market demand variations. So double your crude oil barrel estimate to be closer. Having valued a refinery I know more about it than I care to.
{"url":"http://freakonomics.com/2010/07/16/surely-you-must-be-joking-mr-mahajan-the-street-fighting-mathematician-answers-your-questions/","timestamp":"2014-04-17T03:49:20Z","content_type":null,"content_length":"105507","record_id":"<urn:uuid:cc53f9e7-3c67-4c22-8278-15bd6dc7b35c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
R - avoiding loops with nested lists up vote 4 down vote favorite I know that for loops can be avoided almost all the time in R if you understand the language properly, but I'm struggling to find out the clever way of doing this for (i in 1:100){ AllData[[i]]$Div = NULL Where AllData is a list of 100 lists of various sizes. Can someone clue me in? r nesting add comment 1 Answer active oldest votes Like this: up vote 5 down AllData <- lapply(AllData, `[[<-`, "Div", NULL) vote accepted Isn't it possible without lapply? – N. McA. Sep 23 '12 at 10:20 What's wrong with it? It seems appropriate since you are working with a list. – flodel Sep 23 '12 at 10:23 There's nothing wrong, but it doesn't really further my understanding that much because it's so equivalent to my code. I might be looking for something that doesn't exist though, as I was expecting something like AllData[[1:100]] = NULL – N. McA. Sep 23 '12 at 10:26 I don't think you can, but let's wait and see. Also note that your AllData[[1:100]] = NULL does not mention Div. – flodel Sep 23 '12 at 10:37 There is no increase in readability with a for loop unless you cannot read R. A big purpose of apply is expressiveness and avoiding the typical tendency to write long for loops. 1 Just thinking of doing each step separately with a separate apply often provides performance boosts because somewhere in there you can get in a vectorized operation. – John Sep 23 '12 at 15:04 show 4 more comments Not the answer you're looking for? Browse other questions tagged r nesting or ask your own question.
{"url":"http://stackoverflow.com/questions/12551450/r-avoiding-loops-with-nested-lists","timestamp":"2014-04-23T18:05:58Z","content_type":null,"content_length":"67164","record_id":"<urn:uuid:8c7ec69e-e3f2-47dd-864f-b309b1ae9d8d>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
HowStuffWorks "How Corrective Lenses Work" • Compound lens: a lens having both a spherical and a cylindrical component • Cylindrical curve: a curve that radiates along a straight line, like a pipe cut lengthwise • Diopter (D): the refractive power of a lens; the higher the number, the stronger the lens • Refraction: the bending of light • Spherical curve: a curve that is the same in all directions, like a basketball cut in half Determining Lens Strength The strength of a lens is determined by the lens material and the angle of the curve that is ground into the lens. Lens strength is expressed as diopters (D), which indicates how much the light is bent. The higher the diopter, the stronger the lens. Also, a plus (+) or minus (-) sign before the diopter strength indicates the type of lens. Plus and minus lenses can be combined, with the total lens type being the algebraic sum of the two. For example, a +2.00D lens added to a -5.00D lens yields: Lens Shapes Two basic lens shapes are commonly used in optometry: spherical and cylindrical. • A spherical lens looks like a basketball cut in half. The curve is the same all over the surface of the lens. • A cylindrical lens looks like a pipe cut lengthwise. The direction of a cylinder curve's spine (axis) defines its orientation. It will only bend light along that axis. Cylinder curves are commonly used to correct astigmatism, as the axis can be made to match the axis of the aberration on the cornea.
{"url":"http://science.howstuffworks.com/innovation/everyday-innovations/lens4.htm","timestamp":"2014-04-16T04:15:58Z","content_type":null,"content_length":"121702","record_id":"<urn:uuid:c7ae71bb-50d6-4168-bccb-d6e94aef4d79>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
Douglasville Algebra 2 Tutor Find a Douglasville Algebra 2 Tutor ...My name is Scott, and I'm a full-time math instructor at Chattahoochee Technical College. I offer over 10 years of expert math teaching and tutoring experience. I'm highly skilled in all areas of math including algebra, geometry, trigonometry, precalculus, calculus, and statistics. 13 Subjects: including algebra 2, calculus, algebra 1, SAT math ...The focus of much of his teaching in recent years is to students who are missing key math fundamentals, students that need help understanding new math concepts and students that have diverse learning styles and special needs. Settings include: private one-on-one sessions; community group classes... 5 Subjects: including algebra 2, geometry, algebra 1, prealgebra ...I specialize in standardized test preparation including all sections of the PSAT, SAT, ACT, and ASVAB. By scoring above the 92nd percentile nationwide, I achieved the highest SAT score in my graduating class and was awarded an academic scholarship for my efforts. For military applicants, I scor... 26 Subjects: including algebra 2, reading, English, calculus I have a true passion for technologies, and I consider learning as a lifelong commitment. It is that same passion that one can expect me to provide at every tutoring session. My primary goal is to help you or your child gain the confidence and knowledge to succeed in school and life in general. 27 Subjects: including algebra 2, French, calculus, physics ...I ensure you that I will work hard to improve your understanding of these subjects, while adding a touch of fun along the way. If you are ready to work hard and learn more about science and math, contact me. I look forward to working with you in the future.I am a science teacher who absolutely loves science and math! 15 Subjects: including algebra 2, chemistry, geometry, biology
{"url":"http://www.purplemath.com/Douglasville_Algebra_2_tutors.php","timestamp":"2014-04-18T08:30:47Z","content_type":null,"content_length":"24162","record_id":"<urn:uuid:b5ad1360-a21c-403e-a2b5-82ba30654f63>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Ordinals describable by a finite string of symbols Replies: 3 Last Post: Jul 11, 2013 8:42 PM Messages: [ Previous | Next ] fom Re: Ordinals describable by a finite string of symbols Posted: Jul 11, 2013 8:42 PM Posts: 1,969 Registered: 12/4/12 On 7/10/2013 6:40 AM, Aatu Koskensilta wrote: > fom <fomJUNK@nyms.net> writes: >> What is expressed by both, however, is that the universe of discourse >> must be expressed by a set -- an object of the theory. > What is expressed by the axioms M and SM is that there exists a set > with certain properties. Neither says anything whatever about the > universe of discourse or how it must be expressed. In any case, for > (relative) consistency and independence results by forcing, the use of M > and SM is always eliminable, as Cohen himsels explains in _Set Theory > and the Continuum Hypothesis_. (G. H. Moore, in /The Origins of > Forcing/, reports Moschovakis in a letter urged Cohen to do away with > the "ridiculous assumption", that there exists a standard model of set > theory!) Thank you for a very correct statement. I looked up the remark in Cohen's book. The paragraph in question begins as "If one does not care about the construction of actual models, ..." I am less interested in relative consistency and independence results than I am in the model theory of set theory. That is probably clear from my other reply. But, I had been somewhat rushed. The "eliminability" of which you speak is precisely associated with relative consistency and independence results. Your remark is clear and exact. I just missed it yesterday. Date Subject Author 7/10/13 Re: Ordinals describable by a finite string of symbols Aatu Koskensilta 7/10/13 Re: Ordinals describable by a finite string of symbols fom 7/11/13 Re: Ordinals describable by a finite string of symbols fom
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2581314&messageID=9167547","timestamp":"2014-04-19T00:24:02Z","content_type":null,"content_length":"19792","record_id":"<urn:uuid:b8c030d3-7d74-45e2-bbbf-f73d65704849>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
An example of finite-time singularities in the 3d Euler equations The Library An example of finite-time singularities in the 3d Euler equations He, Xinyu. (2007) An example of finite-time singularities in the 3d Euler equations. Journal of Mathematical Fluid Mechanics, Vol.9 (No.3). pp. 398-410. ISSN 1422-6928 Full text not available from this repository. Let Omega = R-3\(B) over bar _1(0) be the exterior of the closed unit ball. Consider the self-similar Euler system alpha u+beta y.del u+u.del u+del p=0, div u=0 in Omega. Setting alpha=beta=1/2 gives the limiting case of Leray's self-similar Navier-Stokes equations. Assuming smoothness and smallness of the boundary data on partial derivative Omega, we prove that this system has a unique solution (u,p) epsilon C-1(Omega;R(3)xR), vanishing at infinity, precisely u(y)down arrow 0 as vertical bar y vertical bar up arrow infinity, with u = O(vertical bar y vertical bar(-1)), del u=O(vertical bar y vertical bar(-2)). The self-similarity transformation is v(x,t)=u(y)/(t*-t)(alpha), y=x/(t*-t)(beta), where v(x,t) is a solution to the Euler equations. The existence of smooth function u(y) implies that the solution v(x,t) blows up at (x*,t*), x*=0, t*<+infinity. This isolated singularity has bounded energy with unbounded L-2-norm of curl v. Data sourced from Thomson Reuters' Web of Knowledge Actions (login required)
{"url":"http://wrap.warwick.ac.uk/31610/","timestamp":"2014-04-17T06:57:40Z","content_type":null,"content_length":"35414","record_id":"<urn:uuid:9da8bf85-685d-49ea-b8eb-29b582bcca2b>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
Problems for Third Grade 3rd Grade Math Problems If you’re looking for ways to help your 3rd grader get better at math, you’ve come to the right place! Check out our collection of fun, free and printable math problems for 3rd graders and get the little ones to start practicing. Help your young learner get better at reading analog clocks with this simple time worksheet, “Clock Work”. Students find out more about others’ routines and answer a few questions about their own in this fun worksheet for kids. See more In this math worksheet, students practice adding and subtracting to balance equations as they put each of the beautiful butterflies in their right places. See more “Bag of Beans” is a fun math worksheet that will have your students competing to complete subtraction problems. See more When the king ordered for cut glass triangles, he didn’t mean for them to be broken! Unfortunately, now only a skilled multiplier can help put the broken pieces together again. Do you think you’re up to the task? See more Test your mathematical ability by solving math problems and comparing your answers with the calculator’s in ‘Calculator Match’. See more With values for each letter, students add the values of the letters in their names to find out whose name is the most valuable. See more A twist on the classic word search type of activities, here students must try and identify the multiplication facts hidden in the jumble of numbers. See more In this math worksheet, students must figure out how many glasses of orange juice, grapefruit juice and apple juice to fetch for the passengers on the ferry. See more 3rd Grade Math Problems If you’re looking for ways to help your 3rd grader get better at math, you’ve come to the right place! Check out our collection of fun, free and printable math problems for 3rd graders and get the little ones to start practicing. Math Problems for Third Graders 8 – 9-year-olds may vary in their abilities to grasp different math topics and concepts. But when in third grade, there are certain skills that kids are expected to learn. The 3rd grade math curriculum includes learning multiplication and division, place values, basic probability and statistics, interpreting bar graphs and line graphs, learning to calculate area and perimeter, and more. 3rd grade math problems cover all these topics, with problems of varying degrees of difficulty. Such problems are a valuable resource for homeschooling parents as well as teachers who want kids to practice math and get better at the subject. Fun 3rd Grade Math Problems Make math more interactive and fun for 3rd graders. Free and printable math worksheets with special themes like Christmas, St. Patrick’s Day, Easter, etc, will make the learning process a lot more enjoyable for the kids!
{"url":"http://www.mathblaster.com/teachers/math-problems/3rd-grade-math-problems","timestamp":"2014-04-19T19:33:26Z","content_type":null,"content_length":"88940","record_id":"<urn:uuid:5e2a7c09-88c1-4c07-9258-e1862f5e5a5b>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
The Language of Algebra - Definitions - Workout Are these statements true or false? True False Feedback "Pi" is a real number. 2.52 is a real, rational number. n + 5 is an expression for “the sum of a number and five.” 3 is an irrational number. “Twice a number divided by three” can be written as 2n - 3. “Five decreased by twice a number” can be written as 5 - 2x. “Ten less than a number” can be written as 10 n. An integer is not a rational number.
{"url":"http://math.com/school/subject2/practice/S2U1L1/S2U1L1Pract.html","timestamp":"2014-04-20T20:54:51Z","content_type":null,"content_length":"37756","record_id":"<urn:uuid:76e64f68-220c-4fa7-81e0-e28d9601ad42>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: can anyone send me best website link for math,physics n chemistry ?? • one year ago • one year ago Best Response You've already chosen the best response. other then OS . Best Response You've already chosen the best response. Best Response You've already chosen the best response. I also, like the opencoursework for MIT Best Response You've already chosen the best response. ok but can u tell me how can be best in maths ? Best Response You've already chosen the best response. need to do your homework and strive for doing it to understand it. Do not do the homework for the sake of completion but for understanding. Best Response You've already chosen the best response. ok which is the best method book /online videos? Best Response You've already chosen the best response. I like to use my math books but I also, like to watch online videos when I need clarification. I always refer to the books-authors have to go through a process for a book to be published. ie verification of math facts. Videos can have mistakes and even when people point them out, very few people correct the videos. Even Khan has a mistake on quotient rule in the calculus section. He named the video quotient rule but never used the actual rule in the video so I think he should have name it "an alternative to the quotient rule". Anyways videos are good way to hear and see the Best Response You've already chosen the best response. yes, but I would only use it as a doublecheck. Wolf does not always give the most efficient system to solve Best Response You've already chosen the best response. All technology has limitations Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50fd8131e4b0860af51ed732","timestamp":"2014-04-16T04:25:32Z","content_type":null,"content_length":"52294","record_id":"<urn:uuid:97e31c4f-6c66-4320-9a1d-9715fd75e792>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
Wallington Precalculus Tutor Find a Wallington Precalculus Tutor ...I have since been working professionally as a software developer. The first few years I worked in C and C++. The last few years have been primarily Java. I have a dual computer science and computer engineering degree from NYU and Stevens Institute of Technology. 9 Subjects: including precalculus, algebra 1, algebra 2, trigonometry ...I am confident that any student would make significant progress studying with me, and my psychology major also makes me sensitive to individual issues that might arise, such as test anxiety. I am gentle and kind, but have high expectations that my students achieve success in math. Above all, I ... 8 Subjects: including precalculus, calculus, geometry, algebra 1 ...I am also skilled in many other subjects, including my native language, Spanish. Over the past year, I have worked with students in various areas, ranging from algebra, calculus, chemistry, and physics to English and US History. I also specialize in standardized test prep (SAT/ACT/ISEE/GRE). ... 38 Subjects: including precalculus, Spanish, chemistry, GRE ...I also know how to construct equilateral polygons from triangles to decagons using my geometric skills. Lastly I have perfect knowledge of coordinate geometry. I can show students how coordinates can be used to find slope, and distance. 22 Subjects: including precalculus, chemistry, Spanish, calculus ...I not only teach these skills as part of my "subject" tutoring, but in my work with students preparing for exams, such as the SHSAT or ASVAB, I emphasize study skills even more, since usually the amount of study time before the exam is limited. For a year, I was a substitute para-professional wi... 29 Subjects: including precalculus, reading, biology, ASVAB
{"url":"http://www.purplemath.com/Wallington_precalculus_tutors.php","timestamp":"2014-04-16T22:23:26Z","content_type":null,"content_length":"24107","record_id":"<urn:uuid:d81b7c02-82ec-47cf-8039-c53dd1646066>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
Definition of Interpolate Date: 10/27/95 at 5:56:19 From: Anonymous Subject: Interpolate Could you please define interpolate and give me an example? I have looked in "The Book of Answers" and it defines interpolate: Determination of an intermediary value of a function by means of a sequence of known values of the function. Please help! Date: 11/7/95 at 13:0:8 From: Doctor Steve Subject: Re: Interpolate Hi Mark, In more informal language, interpolate means to guess at what happens between two values you already know. If I gave you a sequence of numbers and asked you to fill in the missing number you would be 0,1,2, ... ,6 But, since you don't know the rule I'm using, you're only making a guess. It often happens, for instance in economics, that you know the data but you don't know what the formula is that explains why something happened that way or what the next value will be. I might calculate the national debt every three months and that might enable me to make a good guess what it was the two months in between. Usually when you make an interpolation you are making an assumption that the behavior you're looking at is predictable or that the amount of error in your guess can't be enough to prevent you from using it. In the sequence I gave above you might interpolate a value of 4 and have some confidence that you wouldn't be more than one off. So interpolation relies on some assumptions about the behavior of whatever you're investigating that make it possible for you to guess values for which you only have information about what happened "before" and "after". - Doctor Steve, The Geometry Forum
{"url":"http://mathforum.org/library/drmath/view/58318.html","timestamp":"2014-04-19T11:58:49Z","content_type":null,"content_length":"6580","record_id":"<urn:uuid:d98b075f-af62-4022-945f-fdd778c1114c>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
show that p is not complete November 25th 2011, 05:45 PM #1 Senior Member Jan 2010 Prove that D is connected Let $D \subset R$, and let f: D-->R be continuous. Prove that D is connected if { $x,f(x): x \in D$}, the graph of f, is a connected subset of $R^2$ Last edited by wopashui; November 25th 2011 at 06:03 PM. Re: Prove that D is connected This is an if and only if. Let $\Gamma_f$ denote the graph, note then that $D=\pi_1(\Gamma_f)$ where $\pi_1:R^2\to R$ is the canonical projection onto the first coordinate--why does this tell us $D$'s connected. Conversely, if $D$ is connected then $\Gamma_f=h(D)$ where $h: D\to R^2: x \mapsto (x,f(x))$--why is $h$ continuous and why does this tell us that $\Gamma_f$ is connected? Re: Prove that D is connected This is an if and only if. Let $\Gamma_f$ denote the graph, note then that $D=\pi_1(\Gamma_f)$ where $\pi_1:R^2\to R$ is the canonical projection onto the first coordinate--why does this tell us $D$'s connected. Conversely, if $D$ is connected then $\Gamma_f=h(D)$ where $h: D\to R^2: x \mapsto (x,f(x))$--why is $h$ continuous and why does this tell us that $\Gamma_f$ is connected? sorry, what is canonical projection ? November 25th 2011, 07:57 PM #2 November 26th 2011, 08:35 AM #3 Senior Member Jan 2010
{"url":"http://mathhelpforum.com/differential-geometry/192687-show-p-not-complete.html","timestamp":"2014-04-16T19:04:40Z","content_type":null,"content_length":"40340","record_id":"<urn:uuid:9da11223-4879-48b0-af31-b2dd1d119860>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
Williams College Catalog Electricity and Magnetism for Mathematicians (Q) Last offered Fall 2013 Maxwell's equations are four simple formulas, linking electricity and magnetism, that are among the most profound equations ever discovered. These equations led to the prediction of radio waves, to the realization that a description of light is also contained in these equations and to the discovery of the special theory of relativity. In fact, almost all current descriptions of the fundamental laws of the universe are deep generalizations of Maxwell's equations. Perhaps even more surprising is that these equations and their generalizations have led to some of the most important mathematical discoveries (where there is no obvious physics) of the last 25 years. For example, much of the math world was shocked at how these physics generalizations became one of the main tools in geometry from the 1980s until today. It seems that the mathematics behind Maxwell is endless. This will be an introduction to Maxwell's equations, from the perspective of a mathematician. Class Format: lecture Requirements/Evaluation: evaluation will be based primarily on performance on homework and exams Additional Info: Additional Info2: Prerequisites: MATH 350 or MATH 351, and MATH 355, or permission of instructor; not open to students who have taken MATH 337; no physics background required Enrollment Preference: Department Notes: Material and Lab Fees: Distribution Notes: Divisional Attributes: Division III,Quantitative and Formal Reasoning Other Attributes: Enrollment Limit: none Expected Enrollment: 15 Class Number: 1762
{"url":"http://catalog.williams.edu/catalog.php?strm=1141&subj=MATH&cn=437","timestamp":"2014-04-21T07:46:13Z","content_type":null,"content_length":"28223","record_id":"<urn:uuid:b1f9b68c-ad89-4a7d-9f6b-6ff0eea26701>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
Speed of Sound Speed of Sound The speed of sound is different depending on the type of fluid that is being observed. It is based off of how fast a disturbance can travel through a fluid due to the relationship between pressure and density. Refer to equation 1. c = Speed of Sound dP = Pressure Differential dρ = Density Differential Since there is a relationship between pressure and density a pressure wave builds up has an object approaches the speed of sound. Because of this pressure wave, it is impossible for an object to travel exactly at the speed of sound. The reason why is because as the object flies at the speed of sound the pressure wave will continue to build until it destroys the object. This is why jets have after burners. The after burners are used to get the jet past this building pressure wave and essentially break the sound barrier causing the plane to out run its pressure wave which is referred to as the sonic boom. To relate an objects speed to the speed of sound, the Mach number would be used, which is a unit less number. Refer to equation 2. v = Velocity Finally, the ideal gas law can also be used to calculate the speed of sound. It will be assumed that an isentropic process will occur. Refer to equation 3. k = Specific Heat Ratio R = Ideal Gas Constant T = Temperature If you found this information helpful please donate to show your support. Feedback and Recommendations
{"url":"http://engineering-references.sbainvent.com/fluid-mechanics/speed-of-sound.php","timestamp":"2014-04-20T03:11:44Z","content_type":null,"content_length":"15953","record_id":"<urn:uuid:b14e5e59-a236-44b8-ae52-9d91d62a2a52>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
This is the question : I don't exactly know what my teacher means by hold ; The response property is the most commonly used in formal software verification. Response(p; q) is true if every time p holds, it is followed by q. Formally, Response(p; q) is true iff ∀ moments of time (states) i, if p(i) (meaning p holds at moment i) then ∃ j ≥ i and q(j). Write an ACL2 function (response qp) that takes a list of characters and returns T if the above relation is satisfied, and returns nil if p holds at some moment and is not followed by q at a later moment. In your modeling of the list, use the notation (a b) to indicate that a and b hold at the same moment. For exam- ple, in the list ’(a b (c d) e), a holds at moment 1, b at moment 2, c and d at moment 3, and e at moment 4. Possible cases to test your function against are: • (response qp (a b c d)) returns T. • (response qp (a b q d)) returns T. • (response qp (a b p q)) returns T. • (response qp (a b (p q) d)) returns T. • (response qp (a b q q p)) returns Nil. Here is what I have Code: Select all (DEFUN RESPONSE_qp (p q) (COND ((consp p)(cdr q)) ((listp p) (response_qp(car q) p)) (t (response_qp (p) (response_qp( q)))))) Re: Need Help shoulder wrote:This is the question : I don't exactly know what my teacher means by hold ; There are those things called "dictionaries": hold, in this case the meaning 8b is applicable. In this particular case for this particular model it merely means that if there is p in the list, it must be followed by q. The function you gave isn't even syntactically correct. I would suggest approaching this problem by first constructing a function to determine a position of a symbol in a list even if it is in a sublist. Using that checking the response condition is
{"url":"http://www.lispforum.com/viewtopic.php?f=20&t=513&p=3425","timestamp":"2014-04-18T20:46:20Z","content_type":null,"content_length":"16153","record_id":"<urn:uuid:4fff8fb2-391e-499a-80fb-bdb5f4af6f01>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Recursion Problem Replies: 11 Last Post: Nov 22, 2011 9:44 PM Messages: [ Previous | Next ] Re: Recursion Problem Posted: Nov 17, 2011 11:27 AM On Nov 17, 4:13 am, William Elliot <ma...@rdrop.com> wrote: > On Wed, 16 Nov 2011, junoexpress wrote: > > The question is this: > > "If I know the influxes for each year and start out with no deficit > > what is the maximum fixed amount of M&Ms (i.e. the maximum value for > > B) I can take out so at the end of n years, I will have no deficit?" > min{ a1, a2,.. a_n } > > Not an easy problem, and I don't think it probably has a "nice" > > solution, but just thought I would see if anyone had a better thought. > It's made messy by the results depending not only > on the values of the a's but also on their order. Your mention of the minimum of the inputs got to me thinking about what the solution must be like. One complicating aspect of this problem is that significant portions of the data may be unimportant (as in the toy problem). Some key facts, I believe however are that: 1) B is a solution when it causes us to break even the last year 2) The window of data points relevant to our analysis must go from the minimum of the {A_i} to the last data point.
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2316593&messageID=7610659","timestamp":"2014-04-19T00:20:18Z","content_type":null,"content_length":"30100","record_id":"<urn:uuid:866dc8cc-6e38-453b-bd48-9af67f1601d3>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
Answer - Engineers Answer : This puzzle was artfully devised by the yellow man. It is not a matter for wonder that the representatives of the five countries interested were bewildered. It would have puzzled the engineers a good deal to construct those circuitous routes so that the various trains might run with safety. Diagram 1 shows directions for the five systems of lines, so that no line shall ever cross another, and this appears to be the method that would require the shortest possible mileage. The reader may wish to know how many different solutions there are to the puzzle. To this I should answer that the number is indeterminate, and I will explain why. If we simply consider the case of line A alone, then one route would be Diagram 2, another 3, another 4, and another 5. If 3 is different from 2, as it undoubtedly is, then we must regard 5 as different from 4. But a glance at the four diagrams, 2, 3, 4, 5, in succession will show that we may continue this "winding up" process for ever; and as there will always be an unobstructed way (however long and circuitous) from stations B and E to their respective main lines, it is evident that the number of routes for line A alone is infinite. Therefore the number of complete solutions must also be infinite, if railway lines, like other lines, have no breadth; and indeterminate, unless we are told the greatest number of parallel lines that it is possible to construct in certain places. If some clear condition, restricting these "windings up," were given, there would be no great difficulty in giving the number of solutions. With any reasonable limitation of the kind, the number would, I calculate, be little short of two thousand, surprising though it may appear.
{"url":"http://www.pedagonet.com/puzzles/chinese1.htm","timestamp":"2014-04-20T19:03:09Z","content_type":null,"content_length":"12174","record_id":"<urn:uuid:4942d8af-2204-4504-804f-92bf4b27178a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
The Arcsine Law: Why Traders Get Stuck Either in the Red or in the Green This result explains why 50% of the people consistently lose money, while 50% consistently win. Let's compare stock trading to coin flipping (tails = loss, heads = gain). Then • The probability that the number of heads exceeds the number of tails in a sequence of coin-flips by some amount can be estimated with the Central Limit Theorem and the probability gets close to 1 as the number of tosses grows large. • The law of long leads, more properly known as the arcsine law, says that in a coin-tossing games, a surprisingly large fraction of sample paths leave one player in the lead almost all the time, and in very few cases will the lead change sides and fluctuate in the manner that is naively expected of a well-behaved coin. • Interpreted geometrically in terms of random walks, the path crosses the x-axis rarely, and with increasing duration of the walk, the frequency of crossings decreases, and the lengths of the “waves” on one side of the axis increase in length.
{"url":"http://www.analyticbridge.com/group/computationalfinance/forum/topics/the-arcsine-law-why-traders","timestamp":"2014-04-19T04:57:11Z","content_type":null,"content_length":"55908","record_id":"<urn:uuid:b8486790-c297-4a7e-8b80-9597dbe502c9>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
Celia Hoyles Profile Page Name: Professor Celia Hoyles Dame Professor Celia Hoyles Qualifications and position: O.B.E PhD, BSc. (Hons), M.Ed, FIMA, AcSS Professor of Mathematics Education Faculty of Children and Learning Culture, Communication and Media London Knowledge Lab Interests: policy: research: students' conceptions of proof, mathematical skills in modern workplaces, computational environments for learning and sharing mathematics and systemic change in teaching mathematics 2004-2007 Government Chief Adviser for Mathematics. 2007- 13 Director of the National Centre for Excellence in the Teaching of Mathematics 2004 International Commission of Mathematics Instruction (ICMI) Hans Freudenthal medal 2004 Made Officer of the British Empire 2011 Royal Society Kavli Education Medal About: Hon Doctorates awarded by the Open University 2006, Loughborough University 2008, and Sheffield Hallam University, 2011 2014- 2015 President of the Institute of the Mathematics and its Applications 2014 Made Dame Commander of the British Empire Masters and PhD teaching in Mathematics Education Postgraduate Research: Research Theses supervised (some jointly) by Prof Celia Hoyles Sutherland, Rosamund. 1988 Robins, Keith. 1989 Bamfield, Moira Catherine. 1989 Evans, Richard. 1991 Garnier, Rowan.1992 Dowrick, Nicholas. 1992 Ursini-Legovich, Sonia. 1994 Magina, Sandra. 1994 Morgan, Candia Ruth. 1995 Correia Dias, Angela Alvares. 1996 Gomes Ferreira, Veronica Gitirana. 1997 Anakwue, Festus Onyeama. 1997 Morais de Lira, Ana Karina. 2000 Healy, Siobhan (Lulu). 2002 Chua, Boon Liang (MME), 2014 Languages Spoken: French (basic) Contact Email: This e-mail address is being protected from spam bots, you need JavaScript enabled to view it Department Profile www.ioe.ac.uk/staff/LKLB/LKLB_21.html Selected publications Kilpatrick, J., Hoyles, C., Skovsmose, O. (eds.)† in collaboration with Valero, P. (2005), Meaning in Mathematics Education, Springer USA Hoyles. C and Lagrange J-B (eds)† (2009) Mathematics Education and Technology-† Rethinking the terrain† Springer† Hoyles, C. Noss, R., Kent,P. and Bakker, A. (2010)† Improving Mathematics at Work: The need for techno-mathematical literacies Routledge Selected Research Publications: refereed journals & book chapters Hoyles, C. (1982) ëThe pupil's view of mathematics learning'. Educational Studies in Mathematics, 13, October, 349-372. Hoyles, C., Healy, L. and Pozzi, S. (1992) ëInterdependence and Autonomy: Aspects of Groupwork with Computers'.† In Mandel, H., De Corte, E., Bennett S.N. and Friedrich H.F (eds), Learning and Instruction, European Research in International Context. Vol. 2, 239-257. Publications: Hoyles, C., Noss, R. and Pozzi, S. (2001), ëProportional Reasoning in Nursing Practice'. Journal for Research in Mathematics Education, 32, 1, 4-27. Hoyles, C., K¸chemann, D., Healy, L. & Yang, M. (2005), ëStudents' Developing Knowledge in a Subject Discipline: Insights from combining Quantitative and Qualitative Methods'. International Journal of Social Research Methodology, Vol. 8, 3 pp225-238 K¸chemann, D. & Hoyles, C.† (2006)† Influences on students' mathematical reasoning and patterns in its development: insights from a longitudinal study with particular reference to geometry International Journal of science and maths education 4, (4), 581- 608 K¸chemann, D. & Hoyles, C.† (2009) From empirical to structural reasoning in mathematics: tracking changes over time† In Stylianou, D.A,† Blanton, M. L. &† Knuth, E.J. (Eds) Teaching and Learning Proof Across the Grades K-16 Lawrence Erlbaum Associates, 171- 191 Hoyles C, & Noss, R (2009) The Technological Mediation of Mathematics and Its Learning †In Nunes,T (ed) Special Issue, ëGiving Meaning to Mathematical Signs: Psychological, Pedagogical and Cultural Processes'† Human Development, Vol 52, No 2, April, pp. 129-147 Times Educational Supplement (TES) Maths Podcast (12/11/13) Cornerstone Maths research project Research Theses supervised (some jointly) by Prof Celia Hoyles Sutherland, Rosamund. 1988 A longitudinal study of the development of pupils' algebraic thinking in a logo environment Robins, Keith. 1989 Mathematical language Bamfield, Moira Catherine. 1989† Parental attitudes and involvement in learning mathematics: a study of infant schools Evans, Richard.† 1991 A turning point in the education of women in Western Europe (1400-1600): with special reference to mathematics Garnier, Rowan. 
1992
 Understanding logical connectives: a comparative study of language influence 
 Dowrick, Nicholas. 1992 Six and seven year olds working in pairs at an arithmetic task Students: Ursini-Legovich, Sonia. 1994 Pupil's approaches to different characterizations of variable in logo Magina, Sandra. 1994 Investigating the factors which influence the child's conception of angle Morgan, Candia Ruth. 1995 An analysis of the discourse of written reports of investigative work in GCSE mathematics Correia Dias, Angela Alvares. 1996 Ways of seeing geometrical meaning in different situations Gomes Ferreira, Veronica Gitirana.† 1997 Exploring mathematical functions through dynamic microworlds Anakwue, Festus Onyeama. 1997† A study of training programmes for school mathematics teachers in Nigeria Morais de Lira, Ana Karina. 2000† Separating variables in the context of data handling Healy, Siobhan (Lulu). 2002 Iterative design and comparison of learning systems for reflection in two dimensions LKL News Celia Hoyles is featured in the following news items: Date Title Tuesday, 07 January 2014 Professor Celia Hoyles earns Damehood Tuesday, 01 October 2013 Celia Hoyles elected IMA President Monday, 10 December 2012 Cornerstone Project Thursday, 24 November 2011 Professor Celia Hoyles awarded Honorary doctorate Monday, 11 April 2011 Medals for Hoyles and Noss Monday, 10 January 2011 The first Royal Society Medal has been awarded to Professor Celia Hoyles Thursday, 23 September 2010 New book on Mathematics Education and Technology Thursday, 06 May 2010 Improving Mathematics at Work Thursday, 04 March 2010 LKL forges Australian links Tuesday, 13 March 2007 TLRP Research Award Thursday, 13 July 2006 WebReports code released Monday, 10 October 2005 new centre for maths education Full LKL news listing
{"url":"http://www.lkl.ac.uk/cms/index.php?option=com_comprofiler&task=userProfile&user=91","timestamp":"2014-04-20T13:18:29Z","content_type":null,"content_length":"27573","record_id":"<urn:uuid:19577619-4b28-464d-97b9-cd8cc754314e>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
Discrete Mathematics & Theoretical Computer Science Volume 3 n° 3 (1999), pp. 73-94 author: Peter Bürgisser title: On the Structure of Valiant's Complexity Classes keywords: Structural complexity, Algebraic theories of NP-completeness diagonalization, Poset of degrees. abstract: In Valiant developed an algebraic analogue of the theory of NP-completeness for computations of polynomials over a field. We further develop this theory in the spirit of structural complexity and obtain analogues of well-known results by Baker, Gill, and Solovay, Ladner, and Schöning. We show that if Valiant's hypothesis is true, then there is a p-definable family, which is neither p-computable nor VNP-complete. More generally, we define the posets of p-degrees and c-degrees of p-definable families and prove that any countable poset can be embedded in either of them, provided Valiant's hypothesis is true. Moreover, we establish the existence of minimal pairs for VP in VNP. Over finite fields, we give a specific example of a family of polynomials which is neither VNP-complete nor p-computable, provided the polynomial hierarchy does not collapse. We define relativized complexity classes VP^h andVNP^h and construct complete families in these classes. Moreover, we prove that there is a p-family h satisfying VP^h = VNP^h. reference: Peter Bürgisser (1999), On the Structure of Valiant's Complexity Classes, Discrete Mathematics and Theoretical Computer Science 3, pp. 73-94 ps.gz-source: dm030301.ps.gz (66 K) ps-source: dm030301.ps (194 K) pdf-source: dm030301.pdf (154 K) The first source gives you the `gzipped' PostScript, the second the plain PostScript and the third the format for the Adobe accrobat reader. Depending on the installation of your web browser, at least one of these should (after some amount of time) pop up a window for you that shows the full article. If this is not the case, you should contact your system administrator to install your browser correctly. Automatically produced on Tue Jun 29 14:25:43 MEST 1999 by gustedt
{"url":"http://www.dmtcs.org/dmtcs-ojs/index.php/dmtcs/article/viewArticle/99/226","timestamp":"2014-04-19T02:31:09Z","content_type":null,"content_length":"15512","record_id":"<urn:uuid:161ea7f3-f4e0-41c0-8ec0-735d2541a919>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum Computation Roadmap The overall purpose of this roadmap is to help facilitate the progress of quantum computation research towards the quantum computer science era. It is a living document that will be updated at least Please e-mail comments on the quantum computation roadmap to Richard Hughes with a copy to Malcolm Boshier. Quantum Computing Roadmap Overview Section 6.1: Nuclear Magnetic Resonance Approaches to Quantum Information Processing and Quantum Computing Section 6.2: Ion Trap Approaches to Quantum Information Processing and Quantum Computing Section 6.3: Neutral Atom Approaches to Quantum Information Processing and Quantum Computing Section 6:4 Cavity QED Approaches to Quantum Information Processing and Quantum Computing Section 6.5: Optical Approaches to Quantum Information Processing and Quantum Computing Section 6.6: Solid State Approaches to Quantum Information Processing and Quantum Computing Section 6.7: Superconducting Approaches to Quantum Information Processing and Quantum Computing Section 6.8: "Unique" Qubit Approaches to Quantum Information Processing and Quantum Computing Section 6.9: The Theory Component of the Quantum Information Processing and Quantum Computing Roadmap Whole Roadmap (6 MB)
{"url":"http://qist.lanl.gov/qcomp_map.shtml","timestamp":"2014-04-21T09:35:53Z","content_type":null,"content_length":"7942","record_id":"<urn:uuid:bec45a24-daaa-42d2-b88c-478654fc8b5e>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
The Mathematics of a Pyramid Scheme - The Scam Explained There are many "get rich quick" pyramid schemes on the internet. Some are illegal "pyramid schemes" (scams), and some are barely legal "business propositions" that bear a striking resemblance to an illegal pyramid scheme. The Simplest Pyramid Scheme - A Chain Letter Scam An example of the simplest form of a pyramid scheme would involve Person A sending a letter to 2 other people (Level B) requesting them to send $1 back, and to then send letters out to 2 other people (Level C) requesting a dollar. So Person A and all people at Level B would theoretically make $10. This type of letter is known as a chain letter and is also often accompanied by superstitious claims that "breaking the chain" will bring bad luck. This is a pyramid scheme since the number of participants increases geometrically as shown below. A chain letter of this type is clearly illegal in the A Modified Pyramid Scheme - The 8-Ball Model In The 8-Ball Model, the person recruiting does not get paid at all until they have recruited 3 levels worth of new members. Thus Person A at level 1 recruits 2 people at level 2, these 2 recruit 4 at level 3, and these 4 recruit 8 at level 4. When the 8 are recruited, Person A receives the "participation fee" for all 8 people of level 4. If the fee was $1000, then Person A would receive $8000. If 16 people are then recruited at level 5, then the 2 people at level 2 would each receive $8000, as shown below. The appeal of this system over the simple chain letter is that fewer people need to be recruited, payout is higher, and there is incentive to help those in levels below you succeed. This type of scheme, if it does not deliver a product worth the participation fee, is illegal in the U.S. and is similar to that used by illegal scams called gifting clubs. What Makes Many Pyramid Schemes Illegal? An illegal pyramid scheme, by definition, is one that involves a geometric series of new members where "membership fees" do not pay for any product of value equal to the membership fee. So if one paid $1000 to join such a scheme yet received $1000 worth of goods or services, then the scheme would not be illegal The problem with providing goods or services actually worth $1000 is that the full $1000 can not be given back to the top level "sponsors", but rather only a small percentage equal to the profit margin. To both satisfy legality requirements and keep the full margin, pyramid schemes will often deliver a product with inflated value that involves virtually no cost. An example of such a product would be ebooks. A scheme could "sell" the new member 60 ebooks, each "valued" at $40 and thus provide a total value of $2400 for a reduced price of $1000. Why Are Pyramid Schemes Such a Bad Thing? In short, pyramid schemes that do not provide new members with a product that is fully worth the "membership fee" will result in a majority of all members losing their money. If we look at the 8-Ball Scheme, there are twice as many members in each new level. The bottom 3 levels always lose their money, and as shown below, this will be at least 7/8 ≈ 88% of the total of all participants. Also, see this explanation of pyramid schemes. A New and Popular Scheme - The 2-Up System In the 2 up System, the "sales income" from the first 2 people you recruit goes to the person that recruited you. The sales income from the 3rd and each subsequent recruit you obtain goes to you, along with the first two "sales" of each of your recruits, as shown below for a system priced at $1000 per sale. This system is very popular because the income levels for the person at the top can grow exponentially. Also, there is tremendous incentive for each of your recruits to pursue that 3rd sale, since this it the one that has potential to create a lot of income. The Problems With the 2 Up System The 2-Up system, like the 8-Ball Scheme, involves a geometric series where each new level is three times as large in number the previous level. If the "product" is not something worth the cost of joining and/or it a product that offers no prospects for repeated sales to the same person, we run out of people willing to buy into this “system” very quickly. So the recruits at the bottom level lose all their money. Mathematically, that will be more than 2/3 of all the people joining as shown below! Note also that the people on the second from the last level only break even, and may end up losing amounts equal to advertising costs associated with bringing in 3 recruits. The formula used above is the geometric series formula commonly used in calculus.
{"url":"http://www.mathmotivation.com/money/pyramid-scheme.html","timestamp":"2014-04-20T05:43:31Z","content_type":null,"content_length":"11396","record_id":"<urn:uuid:12cab008-2119-4436-aacc-4ff9f25609f8>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
Data-partition and migration for efficient communication in distributed memory architectures are critical for performance of data parallel algorithms. This research presents a formal methodology for the process of data-distribution and redistribution using tensor products and stride permutations as mathematical tools. The algebraic expressions representing data-partition and migration directly operate on a data vector, and hence can be conveniently embedded into an algorithm. It is also shown that these expressions are useful for a clear understanding and to efficiently interleave problems that involve different data-distributions at different phases. This compatibility made us successfully utilize these expressions in developing and demonstrating matrix transpose and fast Fourier transform algorithms. Usage of these expressions for data interface generated efficient parallel implementation to solve Euler partial differential equation. An endeavor to minimize communication cost using expressions for data-distribution disclosed a routing scheme for Fourier transform evaluation. Results promised that for large parallel marchines, this scheme is a solution to today's problems which feature enormous data. Finally, a unique data-distribution technique that effectively uses transpose algorithms for multiplication of two rectangular matrices is derived. Performance of these algorithms are evaluated by carrying out implementations on Intel's i860 based iPSC/860, Touchstone Delta, and Paragon supercomputers. This is the abstract of the Doctorate Degree's Dissertation published in September 1994 by Nagesh V. Anupindi in Department of Electrical and Computer Engineering at University of Rhode Island, Kingston, RI, USA.
{"url":"http://www.nagesh.com/publications/academic.html","timestamp":"2014-04-19T09:29:47Z","content_type":null,"content_length":"25348","record_id":"<urn:uuid:a17f8834-683f-4ed1-b392-6f892444d039>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
<language> (Named after the "lollipop" operator "-o") An interpreter for logic programming based on linear logic, written by Josh Hodas <hodas@saul.cis.upenn.edu>. Lolli can be viewed as a refinement of the Hereditary Harrop formulas of Lambda-Prolog. All the operators (though not the higher order unification) of Lambda-Prolog are supported, but with the addition of linear variations. Thus a Lolli program distinguishes between clauses which can be used as many, or as few, times as desired, and those that must be used exactly once. Lolli is implemented in SML/NJ. [Josh Hodas et al, "Logic Programming in a Fragment of Intuitionistic Linear Logic", Information and Computation, to appear]. Last updated: 1992-11-18 Try this search on Wikipedia, OneLook, Google Nearby terms: Lojban « LOL « LOLITA « Lolli » LOM » longitudinal parity » Longitudinal Redundancy Check Copyright Denis Howe 1985
{"url":"http://foldoc.org/Lolli","timestamp":"2014-04-20T10:53:58Z","content_type":null,"content_length":"5577","record_id":"<urn:uuid:c3fb0872-e760-46dd-96aa-28a2057de337>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
Adding Algebraic Fractions, Different Denominators Date: 8/24/96 at 17:41:49 From: Anonymous Subject: Adding Fractions with Different Denominators My problem is: 7 2-x 3+5x - + --- + ---- 2x 3x^2 6x^3 How on earth do I solve this? Help! Date: 8/26/96 at 12:30:58 From: Doctor Mike Subject: Re: Adding Fractions with Different Denominators Hello Louie, It isn't really that hard. It's just that the original problem has you adding apples and oranges and mangoes, so to speak. I am going to show you how to convert this to a problem where you are adding all mangoes, and then the result will be mangoes, what else! The secret is to get a Common Denominator. If you have fractions to add and all the denominators are EXACTLY the same, then you just use that denominator which is common to all the fractions as the denominator for your result, and add the numerators to get the numerator of the result. In symbols, a/x + b/x + c/x = (a+b+c)/x . The common denominator in your problem is 6x^3. The way you get the other 2 fractions to have the same denominator is to multiply each of them by one - not just any old one, but a clever version of one: 7 3x^2 2-x 2x 3+5x 21x^2 4x-2x^2 3+5x --*------ + ----*---- + ---- = ------ + ------- + ---- 2x 3x^2 3x^2 2x 6x^3 6x^3 6x^3 6x^3 Now you can add and simplify the numerators to get the answer. See? I'm sure you can finish it off. I hope this helps. -Doctor Mike, The Math Forum Check out our web site! http://mathforum.org/dr.math/
{"url":"http://mathforum.org/library/drmath/view/57374.html","timestamp":"2014-04-20T09:30:55Z","content_type":null,"content_length":"6489","record_id":"<urn:uuid:4e1b5bf4-c80b-448f-9e09-74479d5d1e6a>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
Ultrasound contrast microbubbles in imaging and therapy: physical principles and engineering Microbubble contrast agents and the associated imaging systems have developed over the past twenty-five years, originating with manually-agitated fluids introduced for intra-coronary injection. Over this period, stabilizing shells and low diffusivity gas materials have been incorporated in microbubbles, extending stability in vitro and in vivo. Simultaneously, the interaction of these small gas bubbles with ultrasonic waves has been extensively studied, resulting in models for oscillation and increasingly sophisticated imaging strategies. Early studies recognized that echoes from microbubbles contained frequencies that are multiples of the microbubble resonance frequency. Although individual microbubble contrast agents cannot be resolved—given that their diameter is on the order of microns—nonlinear echoes from these agents are used to map regions of perfused tissue and to estimate the local microvascular flow rate. Such strategies overcome a fundamental limitation of previous ultrasound blood flow strategies; the previous Doppler-based strategies are insensitive to capillary flow. Further, the insonation of resonant bubbles results in interesting physical phenomena that have been widely studied for use in drug and gene delivery. Ultrasound pressure can enhance gas diffusion, rapidly fragment the agent into a set of smaller bubbles or displace the microbubble to a blood vessel wall. Insonation of a microbubble can also produce liquid jets and local shear stress that alter biological membranes and facilitate transport. In this review, we focus on the physical aspects of these agents, exploring microbubble imaging modes, models for microbubble oscillation and the interaction of the microbubble with the endothelium.
{"url":"http://pubmedcentralcanada.ca/pmcc/articles/PMC2818980/?lang=en-ca","timestamp":"2014-04-17T13:45:27Z","content_type":null,"content_length":"300477","record_id":"<urn:uuid:d618381a-3400-4007-9af8-f7e09257be90>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
Algorithms/Distance approximations Calculating distances is common in spatial and other search algorithms, as well as in computer game physics engines. However, the common Euclidean distance requires calculating square roots, which is often a relatively heavy operation on a CPU. You don't need a square root to compare distancesEdit Given (x1, y1) and (x2, y2), which is closer to the origin by Euclidean distance? You might be tempted to calculate the two Euclidean distances, and compare them: d1 = sqrt(x1^2 + y1^2) d2 = sqrt(x2^2 + y2^2) return d1 > d2 But those square roots are often heavy to compute, and what's more, you don't need to compute them at all. Do this instead: dd1 = x1^2 + y1^2 dd2 = x2^2 + y2^2 return dd1 > dd2 The result is exactly the same (because the positive square root is a strictly monotonic function). This only works for comparing distances though, not for calculating individual values, which is sometimes what you need. So we look at approximations. Approximations of Euclidean distanceEdit The w:taxicab distance is one of the simplest to compute, so use it when you're very tight on resources: Given two points (x1, y1) and (x2, y2), $dx = | x1 - x2 |$ (w:absolute value) $dy = | y1 - y2 |$ $d = dx + dy$ (taxicab distance) Note that you can also use it as a "first pass" since it's never lower than the Euclidean distance. You could check if data points are within a particular bounding box, as a first pass for checking if they are within the bounding sphere that you're really interested in. In fact, if you take this idea further, you end up with an efficient spatial data structure such as a w:Kd-tree. However, be warned that taxicab distance is not w:isotropic - if you're in a Euclidean space, taxicab distances change a lot depending on which way your "grid" is aligned. This can lead to big discrepancies if you use it as a drop-in replacement for Euclidean distance. Octagonal distance approximations help to knock some of the problematic corners off, giving better isotropy: A fast approximation of 2D distance based on an octagonal boundary can be computed as follows. Given two points $(p_x, p_y)$ and $(q_x, q_y)$, Let $dx = | p_x - q_x |$ (w:absolute value) and $dy = | p_y - q_y|$. If $dy > dx$, approximated distance is $0.41 dx + 0.941246 dy$. Some years ago I developed a similar distance approximation algorithm using three terms, instead of just 2, which is much more accurate, and because it uses power of 2 denominators for the coefficients can be implemented without using division hardware. The formula is: 1007/1024 max(|x|,|y|) + 441/1024 min(|x|,|y|) - if ( max(|x|.|y|)<16min(|x|,|y|), 40/1024 max(|x|,|y|), 0 ). Also it is possible to implement a distance approximation without using either multiplication or division when you have very limited hardware: ((( max << 8 ) + ( max << 3 ) - ( max << 4 ) - ( max << 1 ) + ( min << 7 ) - ( min << 5 ) + ( min << 3 ) - ( min << 1 )) >> 8 ). This is just like the 2 coefficient min max algorithm presented earlier, but with the coefficients 123/128 and 51/128. I have an article about it at http://web.oroboro.com:90/rafael/docserv.php/index/programming/article/distance --Rafael (Apparently that article has moved to http://www.flipcode.com/archives/Fast_Approximate_Distance_Functions.shtml ?) Top, Chapters: 1, 2, 3, 4, 5, 6, 7, 8, 9, A Last modified on 22 May 2013, at 13:23
{"url":"http://en.m.wikibooks.org/wiki/Algorithms/Distance_approximations","timestamp":"2014-04-17T03:57:39Z","content_type":null,"content_length":"20435","record_id":"<urn:uuid:93389c22-33a2-427a-b177-bf5e62d12d7f>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the square root of pi divided by 2? Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. It is useful in many branches of mathematics, including algebraic geometry, number theory, applied mathematics; as well as in physics, including hydrodynamics, thermodynamics, mechanical engineering and electrical Murray R. Spiegel described complex analysis as "one of the most beautiful as well as useful branches of Mathematics". A mathematical constant is a special number, usually a real number, that is "significantly interesting in some way". Constants arise in many different areas of mathematics, with constants such as e and π occurring in such diverse contexts as geometry, number theory and calculus. What it means for a constant to arise "naturally", and what makes a constant "interesting", is ultimately a matter of taste, and some mathematical constants are notable more for historical reasons than for their intrinsic mathematical interest. The more popular constants have been studied throughout the ages and computed to many decimal places. The imaginary unit or unit imaginary number, denoted as i, is a mathematical concept which extends the real number system ℝ to the complex number system ℂ, which in turn provides at least one root for every polynomial P(x) (see algebraic closure and fundamental theorem of algebra). The imaginary unit's core property is that i2 = −1. The term "imaginary" is used because there is no real number having a negative square. There are in fact two complex square roots of −1, namely i and −i, just as there are two complex square roots of every other real number, except zero, which has one double square root. Hospitality Recreation Mathematical analysis is a branch of mathematics that includes the theories of differentiation, integration, measure, limits, infinite series, and analytic functions. These theories are usually studied in the context of real and complex numbers and functions. Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis. Analysis may be distinguished from geometry. However, it can be applied to any space of mathematical objects that has a definition of nearness (a topological space) or specific distances between objects (a metric space). Early results in analysis were implicitly present in the early days of ancient Greek mathematics. For instance, an infinite geometric sum is implicit in Zeno's paradox of the dichotomy. Later, Greek mathematicians such as Eudoxus and Archimedes made more explicit, but informal, use of the concepts of limits and convergence when they used the method of exhaustion to compute the area and volume of regions and solids. In India, the 12th century mathematician Bhāskara II gave examples of the derivative and used what is now known as Rolle's theorem. In mathematics, an algebraic number is a number that is a root of a non-zero polynomial in one variable with rational coefficients (or equivalently—by clearing denominators—with integer coefficients). Numbers such as π that are not algebraic are said to be transcendental; almost all real and complex numbers are transcendental. (Here "almost all" has the sense "all but a countable set"; see Properties below.) The sum, difference, product and quotient of two algebraic numbers is again algebraic (this fact can be demonstrated using the resultant), and the algebraic numbers therefore form a field, sometimes denoted by A (which may also denote the adele ring) or Q. Every root of a polynomial equation whose coefficients are algebraic numbers is again algebraic. This can be rephrased by saying that the field of algebraic numbers is algebraically closed. In fact, it is the smallest algebraically closed field containing the rationals, and is therefore called the algebraic closure of the rationals. Hospitality Recreation Related Websites:
{"url":"http://answerparty.com/question/answer/what-is-the-square-root-of-pi-divided-by-2","timestamp":"2014-04-17T06:41:05Z","content_type":null,"content_length":"29789","record_id":"<urn:uuid:d49e9191-b719-4a73-bde0-e7224b6ea6b9>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
Errors of Two Kinds I have had my results for a long time: but I do not know yet how I am to arrive at them. ~ Carl Friedrich Gauss (found here) Psychologists are chronically – or rather intermittently – worried about how scientific their science is. The hallmark of a respectable science is that it yields reproducible results. When this reproducibility is threatened, the science itself is threatened. Psychologists have shown much ingenuity in finding clever, even elegant, ways to harness phenomena. They have also been at the forefront of developing sophisticated methods of analyzing numerical data. Yet, the old headache, the ancient fear of being shown up as an impostor, just won’t go away.
{"url":"http://www.psychologytoday.com/blog/one-among-many/201211/errors-two-kinds","timestamp":"2014-04-20T13:55:59Z","content_type":null,"content_length":"66047","record_id":"<urn:uuid:e89216e0-52c1-48a7-8f33-33048e1619a1>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
The Category Theory of Appendages There has been much discussion recently on haskell-cafe about Monoids, much (too much!) focusing on the possibility of renaming of the existing monoid typeclass to "Appendable"... which is a daft idea, not least because then we'd need to rename Monad to be "Appendage". Anyway, I wanted to focus a bit on the link between the two concepts (monoid and monad), and how that's treated from the Category Theoretic perspective. (I'm not aiming at any kind of rigour ... simply trying to gain some basic intuitions). In terms of assumptions, I assume familiarity with the definitions of Category, Functor and ideally categorical products / co-products, initiality and This article aims to cover similar ground to one of sigfpe's , but at a slightly higher (hopefully simpler) level. Traditional Monoids The basic concept of Monoid is treated from the standard mathematical (abstract algebra) perspective , and from the practical (Haskell) perspective sigfpe has a nice article So, from a traditional point of view, a monoid is: * A set, along with... * an associative binary function over that set, accompanied by ... * an identity element The basic (non-category theoretic) view of monoids in Haskell is pretty much the standard one, viewing Haskell types as sets. So in Haskell common examples of monoids are: (String, (++), []) (Int, (+), 0) (Int, (*), 1) (Bool, (&&), True) (Bool, (||), False) In the Haskell setting this means that the function has both arguments and result with the same type - ie the function's type is "a -> a -> a", and the identity element is a distinguished value of type "a". And, of course, it is exactly this which is embodied in the Category takes this standard view of monoids and generalises it somewhat. This generalisation has two parts - not only does it generalise so that we can talk about monoids in different categories, but it also generalises so that we can have multiple ways of identifying monoids a single category. Categories which can contain Monoids From the category theory perspective, a category can only "play host" to monoids under the following conditions: * It must be equipped with a over the category which maps any pair of objects to another object, and any pair of arrows to another arrow. (This type of functor is similar to a normal functor apart from operating on pairs - and is known as a Bifunctor). Let us call this functor " * It must have a distinguished (which we'll call " ") which acts as an identity for this functor. This then gives us what is known as a Monoidal Category . (Note that by making different choices for the (bi)functor and distinguished object we may have several different ways to view an underlying category as monoidal). The simplest example is on Set - we take the standard product as the (bi)functor (which maps any pair of sets to the set which is their product), and a single-element set (ie terminal object) as the distinguished object. In the context of Haskell, we take as the (bi)functor, the product (bi)functor (which maps any pair of types, say 'a' and 'b', to their product type "(a,b)", and any pair of functions, say 'a->c' and 'b->d' to the function '(a,b)->(c,d)'). As the distinguished object we take "()" - the unit type whose sole value "()" is written syntactically identically to its type. Category Theoretical Monoids Once we have those two components, we have a category in which it is possible to identify categorical monoids, but we don't actually have the definition of a categorical monoid itself. A monoid within a monoidal category is defined to be: * An object (which we'll call "c") * An arrow from (c,c) to c. * An arrow from to c. ...such that some basic diagrams commute . Now, recall that " " is the (bi)functor which we selected when creating our monoidal category, so " (c,c)" is just another object. So, to make this concrete - if we consider Haskell as a monoidal category as above, then we can take "c" to be "String", our first arrow to be "(++) :: (String,String) -> String", and our second arrow to be the function ":: () -> String" which takes "()" as its argument and returns "[]". If we compare this with our first, non-categorical, definition of (String, (++), []) as a monoid above, the parallels are quite clear. The differences are firstly that we now need to pretend that (++) has type "(String,String) -> String" rather than "String -> String -> String", which we can do simply by viewing it as an uncurried function, and secondly that we're representing "[]" by a function rather than a value. This extra baggage only really becomes useful when we look at other categorical monoids (which aren't plain, normal monoids). So far, when discussing Haskell, we've implicitly had in mind the category "Hask" which has types as its objects, and Haskell functions between those types as its arrows. What we can now do, is to consider another category which is closely related to "Hask" - namely the category formed by taking as objects all Haskell functors ("Maybe", "[]", etc...) and as arrows all functions from one functor type to another ("listToMaybe :: [a] -> Maybe [a]", "maybeToList :: Maybe a -> [a]", etc...). This is the "endofunctor" category over Hask, and these arrows are natural transformations. Next we're going to go looking for monoids in this "endofunctor" category using our above definitions. We need to keep a slightly clear head at this point, because we need to remember that the objects of this category are functors over another category - hence when we just say "functor" we need to be clear whether we're talking about one of these objects (ie a functor in the underlying category ), or about a functor over this category First we need to choose a (bi)functor over this category - we'll choose composition of functors (so this takes a pair of objects - say "Maybe" and "[]" and maps them to their composition "[Maybe]"). Secondly we need to choose a distinguished object - we'll choose the identity functor "Id". Finally, using our second definition, we can see that a monoid in this category must be an object along with two suitable arrows (ie this will be a functor and two natural transformations in Hask). We can take 'Maybe' as the object, 'join :: Maybe (Maybe a) -> Maybe a' as the first arrow, and 'Just :: a -> Maybe a' as our second arrow. Thus equipped, "Maybe" can be seen as a monoid in the endofunctor category over Hask. And that, of course, is what a monad is - a monad over a category 'C' is a (categorical) monoid in the endofunctor category over 'C'. Back to Set (...and Hask?) We can now go back and take another look at the Set category. This time we can look for some monoids which are monoids from the category theoretic perspective, but from the traditional perspective. We can do this by viewing Set as a monoidal category using as the (bi)functor (ie disjoint union rather than product) and taking the initial object (the empty set) as the distinguished object. Under this definition, "monoids" would be objects (ie Sets - as before), equipped with (a) a function to the set from the disjoint union of the set with itself, and (b) a function to the set from the empty set - ie the empty function. Now, I have to confess I don't remotely understand the implications of this. I haven't ever seen any reference to what such objects would be called in "Set". In Hask, the coproduct of 'a' and 'b' is 'Either a b', and on functions takes 'a->c' and 'b->d' to 'Either a b -> Either c d'. Also, I'm not sure how much sense it makes to talk about an empty type in Hask, or an empty function in Hask. It seems like there ought to be something useful (back in Haskell land) to drop out of this... I'd be very interested in anyone has any ideas / pointers... 48 comments: > Under this definition, "monoids" would be objects (ie Sets - as before), equipped with (a) a function to the set from the disjoint union of the set with itself, and (b) a function to the set from the empty set - ie the empty function. Let’s see what we can make out of this. I’m going to write the elements of S={x,y,..}, and the elements of the disjoint union of S with itself as {Lx,Rx,Ly,Ry,...}, and the function that you describe in (a) as f. Looking at the diagrams on http://en.wikipedia.org/wiki/Monoid_object, I think associativity requires f(f(LRx))=f(f(RLx)) and the other diagram implies that f(Lx) = x and f(Rx) = x (given that α, λ and ρ are chosen in the expected way). It seems to me that this shows that each set is a categorial monoid in exactly one way. Basically, it’s forgetting which copy in a disjoint union an element lies. In Haskell, this would be the function > forgetEither :: Either a a -> a > forgetEither (Left x) = x > forgetEither (Right x) = x and the empty function could be this one: I think the coproduct monoid would correspond to a Haskell class with methods "copappend :: Either a a -> a" and "copempty :: Void -> a". The latter function is completely useless, since there are no values of type Void. While I’m at it, I find it strange that this library defines these instances: > HasIdentity Hask (,) Void > Monoidal Hask (,) Void I’d expect > HasIdentity Hask Either Void > Monoidal Hask Either Void > HasIdentity Hask (,) () > Monoidal Hask (,) () Also I’d expect the methods "idl" and "idr", which correspond λ and ρ, as methods of HasIdentiy and I’m missing an method of Monoidal representing α (using the notation of http://en.wikipedia.org /wiki/Monoidal_category). Maybe I should submit a patch. I wrote a patch and submitted it, you can have a look at it here: Hi there. Actually the choice of Void over () is deliberate and somewhat subtle. () has two distinguishable members. _|_ and (), so there is extra 'information' available in a pair of () and another value, while Void is always _|_, so there is no information lost, hence its status as an identity. Technically since (,) introduces a bottom, it is already a bit of an approximation to say that Hask is Monoidal. If you go back to ~1.4 with strict pairs its true though. You can't have any extra information imparted because of the requirement that idr/coidr reflects a natural isomorphism. coidr should be called opidr or something, but see below. > Why do you define inl and indr as methods of Monoidal? To me it seems as if they'd belong to HasIdentity, as they are defined for this property (based on http://en.wikipedia.org/wiki/Monoidal_category) In Hask^op, (,) is the Sum, which has an identity, but coidr and coidl are defined, not idl and idr, so integrating them into HasIdentity is flawed because of lax scenarios where one way exists but not the other. * Are you sure > HasIdentity Hask (,) Void > Monoidal Hask (,) Void is correct? I'd expect > HasIdentity Hask Either Void > Monoidal Hask Either Void > HasIdentity Hask (,) () > Monoidal Hask (,) () The use of the terms Monoidal and Comonoidal in that file are non-standard and apparently I dropped my documentation to that effect. ;) Mainly because idl and idr only witness the 'forward half' of the natural isomorphism between I * A and A for an identity I for * and so should probably be called something like LaxMonoidal, while coidl/coidr witness the 'backwards half' of the natural transformation, which is required for a lax comonoidal category. Since you need a full natural isomorphism to be truly monoidal, the irony is you need both definitions for idr/idl and coidr/coidl to be truly monoidal, and similarly you need the same definitions to be truly comonoidal. However, this then necessitates 4 functions with the exact same signature and no ambiguity, I should go through and clarify that in the commentary, and probably rename Monoidal/Comonoidal. Arguably they should be something like idr/idl/opidr/opidl and coidr/coidl/coopidr/coopidl, but the signatures are the same, co and op 'cancel' and it gets really confusing fast, i.e. which comes first in the name, co or op? With that in mind, in a perfect world, a language's category could be a monoidal category over (,) and a comonoidal category over Either, which means in the terminology above, that Either would have a Monoidal Void-like Identity, unfortunately, for Either to have a true identity the type has to be truly uninhabited, which isn't possible in Haskell because _|_ is a member of every type. An Monoidal Identity for Either wouldn't even have _|_ as a member, so calling that identity 'Absurd' for the nonce, it would mean that if you have Either a Absurd, that you cannot have used the Right constructor at all, because you couldn't have applied it to a value. This means that it'd be safe to idr to always reduce Either a Absurd -> a without creating a _|_. Unfortunately the decision to allow _|_ in the Right slot is a decision of how Either and laziness interact and has nothing to do with the definition of Absurd. This is clearly required by the requirement that idr /opidr (er.. coidr currently) reflect a natural isomorphism. Either a Absurd can't have any more members than 'a', but Right _|_ is an extra member in Haskell. Contradiction. An arguably more correct choice of terminology would be to move idl and idr into something like HasLaxIdentity and coidl and coidr into something like HasColaxIdentity, then make Monoidal and Comonoidal both require HasLaxIdentity and HasColaxIdentity, then correctness would be restored. In the coproduct monoidal category, shouldn't the coidentity be 'a' (the type of 'undefined')? dysfunctor, unfortunately _|_ inhabits 'a' as well. this seems like it works until you realize that there can't be a natural isomorphism between Either a Whatever and a because the extra member Right _|_ has no place it can be mapped to uniquely in a. Happy New Year~~!!!............................................................ pleasure to find such a good artical! please keep update!! ........................................ nice to know you, and glad to find such a good artical!......................... may the blessing be always with you!! ........................................ hello~welcome my world~<. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 很喜歡你的blog哦...加油唷 ........................................ No pains, no gains....................................................... thank you for you to make me learn more,thank you∩0∩ ........................................ Never put off till tomorrow what may be done today................................................................ 85cc卡通成人片 av免費視訊尋夢網聊天室免費成人高雄交友網女優寫真網站性感影片歐美自拍 ut聊天室女女愛愛情趣桃園成人網站少女自拍 av影城台灣a片免費看情色漫畫色情"" 一夜情 383 18禁玩美女人電眼美女辣妹 性感人妻相簿, 激情熟女影音視訊聊天室視訊聊天 a圖片免費看寫真免費影片 85cc成人圖片杜雷斯成人微風免費視訊視訊聊天室自拍性愛貼圖援交妹拍影片情人趣味免費線上影片 av免費影片觀賞洪爺影城如何下載大奶色 情影片 520sex 成人片下載新任巨乳女教師奈奈下性愛光碟後宮電影院+入口限制級 0204 性愛免費看歐美正妹走光照 you always know the right thing to say!............................................................ your english is incredible............................................................ 若無一番寒徹骨,焉得梅花撲鼻香。 ............................................................ Homes For Sale, Foreclosures listings from homesbylender.com register free. If you are selling your home by owner visit homes by lender and list it here for free. Homes For Sale 寂寞又無聊看到你的BLOG 加油喔!!.................................................................. IS VERY GOOD.............................. A stitch in time saves nine.............................................................
{"url":"http://nattermorphisms.blogspot.com/2009/01/category-theory-of-appendages.html","timestamp":"2014-04-19T01:49:14Z","content_type":null,"content_length":"226574","record_id":"<urn:uuid:73f006fa-1c69-4163-a2be-4a65dc508f46>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
A Guide to the R. G. Lubben Papers, A Guide to the R. G. Lubben Papers, 1918-1974 Creator: Lubben, R. G. (Renke Gustav), 1898-1980 Title: R. G. Lubben Papers Dates: 1918-1974 Abstract: R. G. Lubben (1898-1980) was R. L. Moore's second doctoral student at The University of Texas at Austin. After graduating, he remained at UT for his entire career. Records of Lubben's teaching form the largest part of his papers, including files of course outlines, teaching notes, and examinations for several courses. Accession 86-9; 2006-202; 2012-184 Extent: 6 ft., plus books and journals Language: Material is written in English. Repository: Dolph Briscoe Center for American History, The University of Texas at Austin Renke Gustav Lubben was born on March 7, 1898, at Utica, Nebraska. His family moved to Jackson County, Texas, and he entered the University of Texas in 1916. He there received his B.A. in 1921 and his Ph.D. in 1925; he was R.L. Moore's second Texas doctoral student. Lubben remained at the University of Texas at Austin for his whole career. He taught courses in advanced geometry, advanced algebra, analytic geometry, and trigonometry. His research was in point-set topology and foundations of geometry. He died in 1980. Records of Lubben's teaching form the largest part of his papers, including files of course outlines, teaching notes, and examinations for several courses. There are also many photographs of R. L. Moore and his students, chiefly from the early 1930s. The collection also includes approximately 225 mathematical books and dissertations from Lubben’s personal library, as well as issues of mathematical journals. Materials were later added to the collection by Ross Pounders, Lubben’s nephew (accession 2012-184). Forms part of the Archives of American Mathematics. Organized into six series: General Correspondence Other Professional Activities Books, Dissertations, and Journals Access Restrictions Unrestricted access. Use Restrictions The majority of these papers are stored remotely at CDL and LSF. Advance notice required for retrieval. Contact repository for retrieval. Photographs are stored on-site. Lubben, R. G., 1898-1980 -- Archives University of Texas at Austin. Dept. of Mathematics Geometry, Projective Mathematics -- Study and Teaching -- United States R. G. Lubben Papers, 1918-1974, Archives of American Mathematics, Dolph Briscoe Center for American History, University of Texas at Austin. Ross Pounders accession processed by Elliot Williams, October 2012. A Detailed Description of the Papers 86-9/1 Incoming, 86-9/1 Advanced algebra Analytic geometry; Topological groups Non-Euclidean geometry Analytic I Analytic II Axioms, Trigonometry, 1934;1966-68 86-9/2 Common; Hyperbolic Old, including notes from Dr. Moore's class; Problems and exams Projective geometry Lattices; Old notes, problems, exams; Projective geometry, 1941 86-9/3 Polarities in the plane, Cross ratios, Quadrangular sets Projectivities, Algebra of points, Collineations and polarities, Correlations Projectivities, Conic sections, Regulus, Desargues' theorem Topics in modern algebra Beginning, Galois theory, 1960-68;1952-68 86-9/4 Groups Log of course, Problems and exams, Class notes Matrices, Vector spaces, Linear transformations, Determinants, Linear equations and affine spaces Normal matrices, Linear transformations, Affine transformations Problems, quizzes 86-9/5 Restricted: Gradebooks [Note: student grade information is restricted for 80 years from the date of creation. See archivist for more Other professional activities 86-9/4 UT Mathematics Department, 1969-73, and undated Math Club, 1935-47 Sigma Xi, 1935-44 AAM-OS/2 National Research Council Research Fellowship Board certificate for Lubben's appointment as 1926 National Research Fellow in Mathematics AAM-MNR/4 National Research Council Research Fellowship correspondence [donated by Ross Pounders], 1926-1927 86-9/4 Mathematicians - Biographical and memorial items, newspaper clippings; Postcards of Gottingen and Gottingen mathematicians Notes on mathematical books; Book purchases; Abstract, 1941 86-9/7 R.H. Bing H.J. Ettlinger F. B. Jones R. G. Lubben 86-9/8 R. L. Moore R. L. Wilder By others Bound volumes 86-9/9 Bound volumes By Lubben, 1928-1940 86-9/6 Family and other letters (outgoing) Notes and examinations relating to Lubben's education, 1922-1923 and undated Notes, Book purchases Notes on European languages and geography 86-9/7 Other notes 3W111 Honorary Golden Anniversary Diploma, conferred by the University of Texas Ex-Students' Association, Mar. 27, AAM-MNR/4 U.S. Army discharge papers [donated by Ross Pounders], 1918 Family letters (outgoing, from Europe) [donated by Ross Pounders], 1926-1927, 1931-1932 [Note: writing in black ink on the backs of the photographs was done by R. E. Greenwood, 1975] 3T401e Panoramic print: group photograph, Semi-centennial celebration of the American Mathematical Society, New York City, September 6-9, 4RM51 R. G. Lubben, ca. 1927-1935 and undated Karl Menger, [alone and with Porter, Roberts, and Lubben], ca. 1931 R. L. Moore, ca. 1927-1935 UT Mathematics Department faculty and students, ca. 1927-1935 and undated Outdoor [camping?] photos, includes Lubben, Vickery, Klipple, Biesele, F. B. Jones, adn others, ca. 1930s Unidentified group portraits [Lubben's math students?], 1927-1929 and undated Unidentified [family?], 1932 Landscapes and buildings, undated 3So14a Group photograph, Zurich International Mathematical Congress, 1932 3S105b Negatives [2.5" X 3.5"] of RLM portraits and group photo of students at Moore's house, spring 1935 3S199e [Note: these photographs were transferred from the Harry Ransom Humanities Research Center]: Christine Lubben, portrait on convex-oval prints, ca. 1900 Dickie and Elsie Lubben, portrait on convex-oval prints, ca. 1910 ca. 1910 3Y111d Unidentified portraits, undated Outdoors and camping, undated University of Texas campus, undated 3W111 UT mathematics department, 1966-1967 Group photographs: University of Texas Ex-Students' Association [class of 1921], Mar. 21, 1971 Books, Dissertations, and Journals 86-9/10 (LSF) Books and Dissertations Cook, David Edwin, Concerning Contact Point Sets with Noncompact Clusters [typed dissertation signed by Lubben and R.L. Moore], Coxeter, H.S.M., Non-Euclidean Geometry, 1947 Wolfe, Harold Eichholtz, Introduction to Non-Euclidean Geometry, 1947 CDL The remainder of Lubben’s library is shelved at CDL (3rd floor). Contact archivist for a complete list of the books included. [A complete list of all available issues for the following journals is available; consult archivist for more information.] American Mathematical Monthly, 1972 [incomplete] Bulletin of the American Mathematical Society, 1923-1938 [incomplete] 1944-1951 [incomplete] 1952-1972 [complete] 1973 [incomplete] Combined Membership List: AMS, MAA, SIAM, 1952, 1966-1967, 1968-1973 Duke Mathematical Journal, 1935 [incomplete], 1939-1946 [complete] Notices of the American Mathematical Society, 1972-1974 [incomplete] Proceedings of the American Mathematical Society, 1950-1965 [complete] Rendiconti del Seminario Matematico (Conferenze di Fisica e di Matematica, Universitá e Politecnico di Torino), vol. 28 1968-1969 University of Texas at Austin, Publications of the Faculty and Staff, 1969-1970
{"url":"http://www.lib.utexas.edu/taro/utcah/00225/00225-P.html","timestamp":"2014-04-21T06:11:14Z","content_type":null,"content_length":"26637","record_id":"<urn:uuid:2a23e7cc-eed9-4b66-94cb-47411bf9845a>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
Pooya Hatami E-mail: pooya at cs . uchicago . edu I am currently a PhD student in Department of Computer Science at University of Chicago under the supervision of Alexander Razborov and co-advised by Madhur Tulsiani. Research Interests: Combinatorics, Additive Combinatorics, and Property Testing. Theory Group at our department. My Curriculum Vitae (might not be up-to-date) Research Papers P. Hatami, Lower Bounds on Testing Functions of Low Fourier Degree, Masters Paper, University of Chicago, 2013 I've recently started taking photos with a Fujifilm X20, here's my Photohshoot blog.
{"url":"http://people.cs.uchicago.edu/~pooya/","timestamp":"2014-04-17T09:34:08Z","content_type":null,"content_length":"8566","record_id":"<urn:uuid:e3e80276-d054-46be-8646-74d6268e8505>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof that 1 + 1 = 1? Date: 09/04/97 at 20:43:37 From: Ben Mayer Subject: 1 + 1 = 1? My programming teacher said that there was a proof that proved that 1 + 1 = 1. He also said in this proof that one of the steps was a little bit "in the gray area." I was wondering if you could shed any light on this subject. Thank you, Benjamin W. Mayer Date: 09/08/97 at 11:36:34 From: Doctor Guy Subject: Re: 1 + 1 = 1? Sure, I can do that. Here goes: Let a = 1 and b = 1. Therefore a = b, by substitution. If two numbers are equal, then their squares are equal, too: a^2 = b^2. Now subtract b^2 from both sides (if an equation is true, then if you subtract the same thing from both sides, the result is also a true equation) so a^2 - b^2 = 0. Now the lefthand side of the equation is a form known as "the difference of two squares" and can be factored into (a-b)*(a+b). If you don't believe me, then try multiplying it out carefully, and you will see that it's correct. So: (a-b)*(a+b) = 0. Now if you have an equation, you can divide both sides by the same thing, right? Let's divide by (a-b), so we get: (a-b)*(a+b) / (a-b) = 0/(a-b). On the lefthand side, the (a-b)/(a-b) simplifies to 1, right? and the righthand side simplifies to 0, right? So we get: 1*(a+b) = 0, and since 1* anything = that same anything, then we have: (a+b) = 0. But a = 1 and b = 1, so: 1 + 1 = 0, or 2 = 0. Now let's divide both sides by 2, and we get: 1 = 0. Then we add 1 to both sides, and we get what your programming teacher said, namely: 1 + 1 = 1. In fact, you can prove that 47 = -3 or anything else you want. But of course you know that is wrong. Do you know what I did that was not correct? Shall I tell you? If you want to work it out for yourself before viewing my answer, I will space down a few lines so you can hide my response and work it out for yourself. not yet... Okay, here's the bad thing I did. You can divide both sides of an equation by the same thing ONLY AS LONG AS YOU ARE NOT DIVIDING BY ZERO. In fact, you cannot ever divide by zero. When I divided by (a-b), that means a somewhat disguised form of 0, since a = b = 1. That's where I went wrong. Did you figure that out by yourself, or did you need the hint? -Doctor Guy, The Math Forum Check out our web site! http://mathforum.org/dr.math/
{"url":"http://mathforum.org/library/drmath/view/57110.html","timestamp":"2014-04-16T04:31:03Z","content_type":null,"content_length":"7514","record_id":"<urn:uuid:d1115a2b-0d4e-4e4d-a9d7-6747c07c1990>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
Multiplying and Dividing Integers PowerShow.com is a leading presentation/slideshow sharing website. Whether your application is business, how-to, education, medicine, school, church, sales, marketing, online training or just for fun, PowerShow.com is a great resource. And, best of all, most of its cool features are free and easy to use. You can use PowerShow.com to find and download example online PowerPoint ppt presentations on just about any topic you can imagine so you can learn how to improve your own slides and presentations for free. Or use it to find and download high-quality how-to PowerPoint ppt presentations with illustrated or animated slides that will teach you how to do something new, also for free. Or use it to upload your own PowerPoint slides so you can share them with your teachers, class, students, bosses, employees, customers, potential investors or the world. Or use it to create really cool photo slideshows - with 2D and 3D transitions, animation, and your choice of music - that you can share with your Facebook friends or Google+ circles. That's all free as well! For a small fee you can get the industry's best online privacy or publicly promote your presentations and slide shows with top rankings. But aside from that it's free. We'll even convert your presentations and slide shows into the universal Flash format with all their original multimedia glory, including animation, 2D and 3D transition effects, embedded music or other audio, or even video embedded in slides. All for free. Most of the presentations and slideshows on PowerShow.com are free to view, many are even free to download. (You can choose whether to allow people to download your original PowerPoint presentations and photo slideshows for a fee or free or not at all.) Check out PowerShow.com today - for FREE. There is truly something for everyone! presentations for free. Or use it to find and download high-quality how-to PowerPoint ppt presentations with illustrated or animated slides that will teach you how to do something new, also for free. Or use it to upload your own PowerPoint slides so you can share them with your teachers, class, students, bosses, employees, customers, potential investors or the world. Or use it to create really cool photo slideshows - with 2D and 3D transitions, animation, and your choice of music - that you can share with your Facebook friends or Google+ circles. That's all free as well! For a small fee you can get the industry's best online privacy or publicly promote your presentations and slide shows with top rankings. But aside from that it's free. We'll even convert your presentations and slide shows into the universal Flash format with all their original multimedia glory, including animation, 2D and 3D transition effects, embedded music or other audio, or even video embedded in slides. All for free. Most of the presentations and slideshows on PowerShow.com are free to view, many are even free to download. (You can choose whether to allow people to download your original PowerPoint presentations and photo slideshows for a fee or free or not at all.) Check out PowerShow.com today - for FREE. There is truly something for everyone!
{"url":"http://www.powershow.com/view/23e2f2-NjAwN/Multiplying_and_Dividing_Integers_powerpoint_ppt_presentation","timestamp":"2014-04-18T05:39:55Z","content_type":null,"content_length":"93514","record_id":"<urn:uuid:8eada623-8844-4a96-96d5-6d9f3472729e>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
This Article Bibliographic References Add to: Efficient Simplification of Large Vector Maps Rendered onto 3D Landscapes March/April 2011 (vol. 31 no. 2) pp. 14-23 ASCII Text x Ling Yang, Liqiang Zhang, Jingtao Ma, Zhizhong Kang, Lixin Zhang, Jonathan Li, "Efficient Simplification of Large Vector Maps Rendered onto 3D Landscapes," IEEE Computer Graphics and Applications, vol. 31, no. 2, pp. 14-23, March/April, 2011. BibTex x @article{ 10.1109/MCG.2010.63, author = {Ling Yang and Liqiang Zhang and Jingtao Ma and Zhizhong Kang and Lixin Zhang and Jonathan Li}, title = {Efficient Simplification of Large Vector Maps Rendered onto 3D Landscapes}, journal ={IEEE Computer Graphics and Applications}, volume = {31}, number = {2}, issn = {0272-1716}, year = {2011}, pages = {14-23}, doi = {http://doi.ieeecomputersociety.org/10.1109/MCG.2010.63}, publisher = {IEEE Computer Society}, address = {Los Alamitos, CA, USA}, RefWorks Procite/RefMan/Endnote x TY - MGZN JO - IEEE Computer Graphics and Applications TI - Efficient Simplification of Large Vector Maps Rendered onto 3D Landscapes IS - 2 SN - 0272-1716 EPD - 14-23 A1 - Ling Yang, A1 - Liqiang Zhang, A1 - Jingtao Ma, A1 - Zhizhong Kang, A1 - Lixin Zhang, A1 - Jonathan Li, PY - 2011 KW - vector map KW - simplification KW - level of detail KW - digital elevation models KW - overlay KW - computer graphics KW - graphics and multimedia VL - 31 JA - IEEE Computer Graphics and Applications ER - Real-time rendering of large-scale vector maps over terrain surfaces requires displaying substantial numbers of polylines and polygons. Because a long latency in display and manipulation is fatal to maintaining presence in a virtual environment, a new method efficiently simplifies and renders such maps. This method consists of three steps. First, it simplifies the vector map while maintaining the map's topological consistency and preventing local conflicts such as intersections or self-intersections. Second, it generates view-dependent level-of-detail (LOD) models. Finally, it overlays the maps onto multiresolution terrain models through the stencil shadow volume algorithm and other techniques. Experiments demonstrated the method's efficiencies in real-time rendering of large-scale vector maps over various LOD terrain surfaces. 1. V. Chandru, V.T. Rajan, and R. Swaminathan, "Monotone Pieces of Chains," ORSA J. Computing, vol. 4, no. 4, 1992, pp. 439–446. 2. N. Mustafa et al., "Dynamic Simplification and Visualization of Large Maps," Int'l J. Geographical Information Science, vol. 20, no. 3, 2006, pp. 273–320. 3. D.H. Douglas and T.K. Peucker, "Algorithms for the Reduction of the Number of Points Required to Represent a Digitized Line or Its Caricature," Canadian Cartographer, vol. 10, no. 2, 1973, pp. 4. M. Schneider and R. Klein, "Efficient and Accurate Rendering of Vector Data on Virtual Landscapes," J. WSCG, vol. 15, nos. 1–3, 2007, pp. 59–65. 5. Z. Zhao and A. Saalfeld, "Linear-Time Sleeve-Fitting Polyline Simplification Algorithms," Proc. Auto-Carto XIII, pp. 214–223. 6. B.S. Yang, R. Purves, and R. Weibel, "Efficient Transmission of Vector Data over the Internet," Int'l J. Geographical Information Science, vol. 21, no. 2, 2007, pp. 215–237. 7. S. Atlan and M. Garland, "Interactive Multiresolution Editing and Display of Large Terrains," Computer Graphics Forum, vol. 25, no. 2, 2006, pp. 211–224. 8. S. Röttger et al., "Real-Time Generation of Continuous Levels of Detail for Height Fields," Proc. 6th Int'l Conf. in Central Europe on Computer Graphics and Visualization (WSCG '98), IEEE Computer Society Press, 1998, pp. 315–322. 1. D.H. Douglas and T.K. Peucker, "Algorithms for the Reduction of the Number of Points Required to Represent a Digitized Line or Its Caricature," Canadian Cartographer, vol. 10, no. 2, 1973, pp. 2. M. Visvalingam and J.D. Whyatt, "Line Generalisation by Repeated Elimination of Points," Cartographic J., vol. 30, no. 1, 1993, pp. 46–51. 3. Z. Zhao and A. Saalfeld, "Linear-Time Sleeve-Fitting Polyline Simplification Algorithms," Proc. Auto-Carto XIII, pp. 214–223. 4. W.Z. Shi and C.K. Cheung, "Performance Evaluation of Line Simplification Algorithms for Vector Generalization," Cartographic J., vol. 43, no. 1, 2006, pp. 27–44. 5. R. Estkowski and J.S.B. Mitchell, "Simplifying a Polygonal Subdivision While Keeping It Simple," Proc. 17th ACM Symp. Computational Geometry, ACM Press, 2001, pp. 40–49. 6. L.J. Guibas et al., "Approximating Polygons and Subdivisions with Minimum Link Paths," Int'l J. Computational Geometry and Applications, vol. 3, no. 4, 1993, pp. 383–415. 7. A. Mantler and J. Snoeyink, "Safe Sets for Line Simplification," Proc. 10th Ann. Workshop Computational Geometry, Stony Brook Univ., 2000. 8. N. Mustafa et al., "Dynamic Simplification and Visualization of Large Maps," Int'l J. Geographical Information Science, vol. 20, no. 3, 2006, pp. 273–320. 1. Z. Wartell et al., "Rendering Vector Data over Global, Multiresolution 3D Terrain," Proc. 2003 Symp. Data Visualization, ACM Press, 2003, pp. 213–222. 2. O. Kersting and J. Döllner, "Interactive Visualization of Vector Data in GIS," Proc. 10th ACM Int'l Symp. Advances in GIS, ACM Press, 2002, pp. 107–112. 3. M. Schneider and R. Klein, "Efficient and Accurate Rendering of Vector Data on Virtual Landscapes," J. WSCG, vol. 15, nos. 1–3, 2007, pp. 59–65. Index Terms: vector map, simplification, level of detail, digital elevation models, overlay, computer graphics, graphics and multimedia Ling Yang, Liqiang Zhang, Jingtao Ma, Zhizhong Kang, Lixin Zhang, Jonathan Li, "Efficient Simplification of Large Vector Maps Rendered onto 3D Landscapes," IEEE Computer Graphics and Applications, vol. 31, no. 2, pp. 14-23, March-April 2011, doi:10.1109/MCG.2010.63 Usage of this product signifies your acceptance of the Terms of Use
{"url":"http://www.computer.org/csdl/mags/cg/2011/02/mcg2011020014-abs.html","timestamp":"2014-04-18T08:07:54Z","content_type":null,"content_length":"55406","record_id":"<urn:uuid:07f35eaa-a075-4d8b-aff4-968b542a00be>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by black mamba on Wednesday, March 23, 2011 at 11:47am. what is the area of circle with the diameter of 15 feet, rounded to the nearest whole number? • math - Writeacher, Wednesday, March 23, 2011 at 11:59am http://www.aaamath.com/ >>> http://www.aaastudy.com/geo.htm >>> Surface area • math - jai, Wednesday, March 23, 2011 at 1:16pm recall that the area of a circle is given by: Area = pi*r^2 since in the problem, diameter is given, we first get radius: diameter = 2*radius d/2 = r 15/2 = 7.5 ft = r A = pi*r^2 A = 3.14*7.5^2 A = ? now solve this using calculator,, to round off, first you need to know the place of the digit you want to round off,, in the problem it's 'to the nearest whole number (or the ones digit)'. thus we look at the number to the right of ones digit if the number to the right of ones digit is: *greater than or equal to 5, you add 1 to ones digit *less than 5, the ones digit does not change for example,, 342.67 --->rounding off to nearest whole number gives 343 now apply this to the answer you've got. :) hope this helps~ :) • math - Anonymous, Wednesday, March 23, 2011 at 4:56pm A=176.7144375 ft^2 A=177 ft^2 rounded to the nearest whole number Related Questions geometry - A CIRCLE HAS AN AREA OF 50 SGUARE FEET.WHAT IS THE APPROXIMATE ... 10th grade - If a circle has a diameter of 7inches, what is the circumference ... math - I'm asked to find the circumference of a circle to the nearest tenth and ... Circumference and Area of Circles - can someone help me with these:Find the ... Math - A circular swimming pool is 4 feet deep and has a diameter of 15 feet. If... math - i am a decimal with my last digit in the hundredths. rounded to the ... math - A storage area measures 15.6 feet by 10.2 feet by 8.5 feet. If it is 1/3 ... Math - The diameter of a circle is 7 inches. What is the circumference of the ... math - 1.) Calculate the area of a circle with radius 8.5 inches. . 2.) ... Algebra - Which must be a whole number? A. The time it takes to drive 300 miles...
{"url":"http://www.jiskha.com/display.cgi?id=1300895275","timestamp":"2014-04-18T11:11:54Z","content_type":null,"content_length":"9694","record_id":"<urn:uuid:d70d018d-b738-4604-b3dd-d7e18983b3e4>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
scipy.stats.describe(a, axis=0)[source]¶ Computes several descriptive statistics of the passed array. a : array_like Parameters : axis : int or None axis along which statistics are calculated. If axis is None, then data array is raveled. The default axis is zero. size of the data : int length of data along axis (min, max): tuple of ndarrays or floats minimum and maximum value of data array arithmetic mean : ndarray or float mean of data along axis Returns : unbiased variance : ndarray or float variance of the data along axis, denominator is number of observations minus one. biased skewness : ndarray or float skewness, based on moment calculations with denominator equal to the number of observations, i.e. no degrees of freedom correction biased kurtosis : ndarray or float kurtosis (Fisher), the kurtosis is normalized so that it is zero for the normal distribution. No degrees of freedom or bias correction is used.
{"url":"http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.describe.html","timestamp":"2014-04-18T03:01:00Z","content_type":null,"content_length":"8385","record_id":"<urn:uuid:15410cb5-7694-433f-ae91-993d1bddfaca>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
Baghdad Bazaar: A Statistics Primer Magic is a game of chance. Well, more specifically, Magic is a game of strict probabilities, enriched by randomization and tempered by judicious playskill. And although you'll never know for certain what the top card of your library is when you shuffle, you can certainly limit the number of possibilities and how you play based on that knowledge. But such strategy has been synonymous with the game of Magic since the beginning. Players constantly throw around vague approximations and hastily-derived figures to seem intelligent on a daily basis... but how much does the average Magic player truly understand of how probability works? Sadly, very little. Let's take a look at how all the numbers really break down, how to use them to build your own deck at home, and perhaps a trick or two to help you in the middle of a gripping I. General Rules STATISTICS, BOY!!! : the relative possibility that an event will occur, as expressed by the ratio of the number of actual occurrences to the total number of possible occurrences. (quoted from www.reference.com) For example, on a regular six-sided die, the probability of any particular number coming up on any one roll is 1/6, or roughly 16.67%. Sounds easy enough, right? A. Single-Draw Problems Let's start with something simple. Question 1: What are the chances of drawing any particular card in a 60-card deck? One out of sixty, or 1/60, or roughly 1.67% Trying to draw one particular card out of sixty means you'll be successful one out of every sixty times you draw, or 1/60. As a percentage, that's 1.67%, rounded a bit. Unlike a regular deck of cards, for example, which can only have one King of Spades, our Magic decks are a little different. Let's rephrase our initial question. Question 2: What are the chances of drawing any particular card on a single draw if you have four of them in your 60-card deck? 4/60, or 6.67% Now that you have a whole playset to play with, you are four times as likely to draw one of the needed cards on any single draw. Question 3: Your friend has just cast his Oversoul of Dusk , which your deck has almost no answer for. All you have to deal with it are three Oblivion Ring s and two Neck Snap s. Your deck has 45 cards left in it. What are the chances you'll topdeck one on your next draw? 5/45, or or 11.11% Of course, that throws a small, furry wrench in our equations. This is just like the 4 card/60 deck problem, except now there are five cards you can use and only 45 cards to draw them out of. Unfortunately, that still only translates to a 11.11% chance of drawing something to save your butt. Ugh. Better cross your fingers and pray. Hey, so far, this probability stuff is pretty intuitive! B. Multiple-Draw Problems Question 4: You have four Rampant Growth s in your deck to accelerate into four lands on turn 3. What are the chances you'll draw one in your opening hand? Calculate for being both on the play and on the draw. Should be easy, right? On the play: If you're "on the play," you don't get to draw a card your first turn, meaning you have only 7 cards to play with. This means your chances of drawing a Rampant Growth must be seven times better than drawing 4/60, or (4/60)*7, right? (4/60) * 7 = 46.67% Real answer = Here's where things get interesting! Each time you draw a card, the deck gets smaller. You'd be much closer with the numbers (4/60)+(4/59)+(4/58)+(4/57)+(4/56)+(4/55)+(4/54)=49.19%, although even this won't turn out right. Let's see The first set of numbers (4/60) assumes we have four copies of our card in the deck and 60 cards left to choose from. Fair enough so far. The second set, however (4/59), assumes we still have those same four copies left in the deck and only 59 cards to choose from now. But what if there had been a Rampant Growth on the very top of the library? What if it was now in our hand? Wouldn't that make the next numbers (3/59)+(3/58) instead, and so on? What if we draw two in the first two draws? What if we draw none? Calculating "the number of successes in a sequence of n draws from a finite population without replacement" is something known affectionately by statisticians as Hypergeometric Distribution. (Readers who feel their eyes already glazing over already can just skip down to where you see a simplified equation in bold.) Hypergeos, which sound like they should be awesome Null Profusion-like combo enablers, can be described by the following problem and solution. (The following is drawn from the entry in the Wikipedia, which is always an excellent place to begin when researching a topic for the first time.) Quote from The classical application of the hypergeometric distribution is sampling without replacement. Think of an urn with two types of marbles, black ones and white ones. Define drawing a white marble as a success and drawing a black marble as a failure (analogous to the binomial distribution). If the variable N describes the number of all marbles in the urn (see contingency table above) and m describes the number of white marbles (called defective in the example above), then N − m corresponds to the number of black marbles. Now, assume that there are 5 white and 45 black marbles in the urn. Standing next to the urn, you close your eyes and draw 10 marbles without replacement. What is the probability that exactly 4 of the 10 are white? Note that although we are looking at success/failure, the data cannot be modeled under the binomial distribution, because the probability of success on each trial is not the same, as the size of the remaining population changes as we remove each marble. This problem is summarized by the following contingency table The probability of drawing exactly k white marbles can be calculated by the formula: Hence, in this example k = 4, calculate: I seriously have no idea what they just said. And I doubt the average person reading this article would either. Not only that, but these equations only calculate what the probability is of your drawing "exactly" k number of marbles/cards in any given hand, not "at least" as many. The calculations for that get a lot more complicated. So what is a poor, liberal-arts-educated writer with no math experience in the last decade to do? These are Yuan Ti. Tee hee hee! Enter our moderator friend "YuanTi", who I'll simply refer to from now on as "that friendly, helpful, campy D&D humanoid snake creature." Because it's shorter. Or something. Anyway, having dealt with these sorts of problems on a daily basis for some time now, the snakeman hissed his advice: in math, there can be any number of ways to calculate the answer to a problem. (Case in point: you can measure the height of a building by a) measuring the length of its shadow and using the tangent of the angle to the apex, or b) drop something from the top and use the coefficient of gravity and the object's terminal velocity to punch out some numbers.) And it's also true for this problem. Specifically, it's a lot easier to figure how likely it is we won't draw a Rampant Growth in the first seven cards. Think of it this way, if it helps: your deck is an evil, conniving little devil who wants to cheat you out of good cards every chance it gets. (Which seems true often enough anyway.) On any given draw, we have a 1/60 chance of getting a particular card. Which means our deck has a 59/60 chance of pushing it down to the bottom of our deck... or at least to the point where we're already dead. And what of the next draw? Our vicious deck's chances of screwing us completely get just a tad a little smaller...58/59 smaller, actually. Which means that over the course of the first two draws, our deck has a 59 out of 60 chance (first draw) of getting a later 58 out of 59 chance (second draw) of making us curse our shuffling skills. In head-numbing pseudo-formula form (I'm no math major), that's: Probability of drawing at least 1 of a given card = 1 - chance of never drawing it = 1 - (56/60)(55/59)(54/58)...and so on, one for each draw. And that's a lot easier way of getting your head around the problem than the hypergeowhatever route. The easiest way to crunch the numbers, if we're using nothing fancier than a pocket calculator, is to multiply all the numbers on the tops of the equation (56*55*54*etc.) and then divide the final number by all the numbers on the bottom (/60, /59, /58, etc.). (And yes, Mom, I do remember the terms "numerator" and "denominator." Gosh. I haven't forgotten everything.) So let's try our question again. What's the likelihood that we'll draw at least one Rampant Growth in our first seven cards? On the play: 1 - (56/60)(55/59)(54/58)(53/57)(52/56)(51/55)(50/54) = 39.95% Giving chance the finger. If we're on the play, our deck gets seven chances to try and completely screw us. That means it gets a probability of (56/60)(55/59)(54/58)(53/57)(52/56)(51/55)(50/54) that we'll get absolutely none, blame the shuffler (if we're playing online), and mulligan. Calculating this out, this means there is a 60.05% chance we won't draw one. On the flip side, that means there is also a 39.95% chance that we will draw at least one (or two or three) of our beloved mana-accelerators in our opening hand. (And margins of error be damned! You math majors know what I'm talking about.) On the draw: 1 - (56/60)(55/59)(54/58)(53/57)(52/56)(51/55)(50/54)(49/53) = 44.48% Our evil deck has one more draw now from us to contend with, meaning its 60.05% chance just got 49/53 smaller. This translates to a 55.52% chance of not drawing one, or a 44.48% chance of drawing one if we're not going first. Question 5: Technically, you have until your turn 2 draw to find a Rampant Growth and still cast it on time. What are the chances you'll draw one by or on turn 2? On the play: By turn 2 on the play, we'll have drawn eight cards off our deck. (56/60)(55/59)(54/58)(53/57)(52/56)(51/55)(50/54)(49/53) = 55.52 chance of getting screwed = 44.48% of drawing one. On the draw: We'll have seen nine cards so far: (56/60)(55/59)(54/58)(53/57)(52/56)(51/55)(50/54)(49/53)(48/52) = 51.25 chance of getting screwed = 48.75% of drawing one. Okay, now that we have that down, let's try something a little more complicated. Question 6: Your newest casual deck's "God Hand" involves having both a Gemstone Caverns and a Chrome Mox in your opening hand for three mana on turn 1. Wisdom of this deck design aside, what's the likelihood of your having at least one of each in your opening hand, assuming you have three Caverns and four Moxes? On the draw: Actually, a proper rundown of the mathematics for this problem is beyond the scope of this article. Sorry! We can't simply find the chances of drawing one of seven total cards, since it matters that we have at least one of two completely different types. Nor can we simply multiply the chances of drawing at least one Caverns by the chances of drawing at least one Mox. The draw events are not "independent," to use a statistics term. This means that if we draw a Caverns on our very first card, it's going to change the likelihood of whether we'll draw a Mox on our next card or even a second Caverns. To ultimately solve this problem, we'll actually have to resort to those migraine-causing hypergeos. But fear not! Nathan knows of two methods you can use that don't involve pulling out pen, paper, a TI-83, and aspirin: Nathan's Super-Useful Tip #1: If you have access to Microsoft Excel, there's a "HYPERGEODMIST" function you can use to have it crunch the numbers for you. This article explains how to make the spreadsheet work for you, and this forum response explains a little of how you'd apply it to multiple-draw-type problems. Still too scary? Try the next one! Nathan's Super-Useful Tip #2: Often, if I want to calculate some quick probabilities without writing out all the maths (or, in my case, without wondering if I've missed something important), I open up MTGO and crunch the numbers there. You can do it too! Try this at home: Step 1: Open up MTGO and go to the "Deck Editor" tab. Step 2: Throw together a quick deck of 60 basic lands or so. Let's say Forests. Step 3: Replace as many basic lands with cards you'll care about drawing. For example, if you want to check your answers for Question 4 (four-of for an opening hand), trade out four Forests for four Llanowar Elves. Step 4: Click "Stats" near the top right of your window; then click "Probabilities" on the pop-up window. Step 5: Click the "Any Creature" button at the top of the pop-up window (it's that cute little claw symbol you'll remember from Future Sight.) Step 6: Voila! (Notes: MTGO doesn't care about decimal places. MTGO also assumes you're always on the play; so if you want to calculate for being on the draw, just look under the numbers for "Turn 2.") So how would you check your numbers for Question 6? The MTGO calculator needs to know we're looking at two completely different types of cards, not just seven of the same. So we'll add three creatures (Llanowar Elves again) for our Caverns and perhaps four enchantments (why not Fertile Ground?) for our Moxes. Then add "Any Creature" and "Any Enchantment" to your calculations. MTGO should tell you that you have a 12% chance (see above pic) of drawing at least one of both in your opening hand. (The actual answer is around 11.40%. MTGO is close but not perfect.) Obviously, these numbers are true regardless of what your individual cards are (whether they are creatures + enchantments or Caverns + Moxes), which is why MTGO is such a great tool for speedy deck calculations. II. Some Finer Points of Deck-Building Question 7: What are the chances you'll draw any particular one-of card in a 60 card deck, compared to from a 61-card deck? 60-card deck: 1/60, or 1.67% 61-card deck: 1/61, or 1.64% Difference: .03%... and this is rounded up. Since we're not calculating for multiple draws or different card types here, the numbers are thankfully easy. As we can see, the difference between a 60- and 61-card deck is negligible - not even half of a tenth of a percent for any given draw. Still, it can be argued that it's a non-zero difference, and a non-zero difference is bound to make an impact in some game along the way if you play long enough. Considering that you draw seven or eight cards in just one opening hand, plus perhaps at least another half dozen in a single game, even something as small as .03% might be noticeable in one day of PTQ event. Let's look at another question before we continue the debate. Question 8: What are the chances you'll draw a one of a 3-of in a 60 card deck? 4-of in a 61? 3-of in a 60: 3/60, or 5.00% 4-of in a 61: 4/61, or 6.56% Difference: 1.56% On the bright side, there's a 75% chance you won't die instantly. These numbers bring up an interesting point that espousers of 61-card decks use. Keeping your deck as small as possible is certainly laudable, but when does the need for an extra full playset outweigh the need for overall consistency? When does the need for four Cryptic Commands in a deck that is already 57 cards strong outweigh the need to trim a playset somewhere? Why not just up your deck count to 61 and include four? Let's look at it this way: say that you're prepping a run-of-the-mill Kithkin White Weenie deck to take to your local FNM. Your local metagame has been seeing a good share of Monored-style decks lately, and Burrenton Forge-Tenders turn out to be key in that matchup. Yet, to survive the other matchups, you need your deck chock full of Wizened Cenns, Knight of Meadowgrains, Knight of the White Orchards, and other such nonsense; and you only have three slots left for the Tender. Question 9: What are the chances you'll draw at least one of three Burrenton Forge-Tender s in a 60 card deck in your opening hand? At least one of four Forge-Tenders in a 61? On the play: 3-of in a 60: 1 - (57/60)(56/59)(55/58)(54/57)(53/56)(52/55)(51/54) = 31.54% 4-of in a 61: 1 - (57/61)(56/60)(55/59)(54/58)(53/57)(52/56)(51/55) = 39.40% Difference: 7.86% On the draw: 3-of in a 60: 1 - (57/60)(56/59)(55/58)(54/57)(53/56)(52/55)(51/54)(50/53) = 35.42% 4-of in a 61: 1 - (57/61)(56/60)(55/59)(54/58)(53/57)(52/56)(51/55)(50/54) = 43.89% Difference: 8.47% As you can see, these numbers are a lot more significant than the .03% difference from the single extra card in the entire deck. They become even more significant when the card you added is key to an entire matchup you're planning for. Sure, you could argue to keep one copy sideboarded and bring it in for game 2, but I think just a strong case can be made for not losing game 1 to begin with. Assuming each of your playsets is something you want to go four-of, any single one of the cards you've chosen could be instrumental in winning the game for you. And in some cases (as with White Weenie vs. Monored), that extra playset is something you might want to plunk down on the table on turn 1. (What are your two cents? Do you think there's always room to cut or that some decks need as many of their playsets pre-board as possible? Do you think these line of reasoning supports 62/63/ whatever-sized decks; and if so, where does this madness end??? Let me know on the forums!) III. During the Game Probability is all well and good when discussed in a relaxed, classroom environment. But what do you do when you're staring down an opponent across from you, trying to judge just how likely it is he topdecked his win condition after you emptied his hand? Are there any good rules of thumb or quick "guesstimations" you can do to judge the likelihood? Question 10: Your opponent is playing Elfball-style combo deck. He's played a land every turn and has dropped two Bramblewood Paragon s and a Rhys, the Redeemed . He has four cards in hand. It's about to be his fourth turn; what are the chances he'll draw the Heritage Druid he needs to start his combo? About 8% What's there not to love about adorable little me? He has four cards in hand and six permanents on the board. This means he has 50 cards left in his deck. To make a quick guesstimation of 4 Druids/50 cards, multiply both the top and bottom by 2 to get 8/100. In other words, he has an 8% chance of getting the combo enabler he needs. Multiplying numbers is always easier to do in one's head than dividing. To get a quick estimate of a single-draw solution, figure out how much you'd have to multiply the number of cards left in his library to get to 100, and then multiply that number by however many cards there are left that he's trying to get. Let's try again for practice: Question 11: What if you've been milling your opponent, and he only has 38 cards left for his Druids to hide in? About 10% There's no easy number to multiply 38 by to get 100. Multiplying it by 2 and then half again will get you close enough, though (to 95, specifically). Multiplying the number of Heritage Druids left in his library (4) by that same number (2.5) will give you 10, meaning he has about a 10% chance to get that card. Question 12: What if you've been milling your opponent, and he only has 38 cards left for his Druids to hide in, but on his fourth turn he drops a land and casts About 40% He's drawn four cards so far this turn. We know from our last problem that he has a roughly 10% chance of pulling the card he needs on his first draw. To guesstimate, multiply that number by the number of cards he's drawn this turn: 4 * 10% = 40% Obviously, we know from the earlier multiple-draw problems that this number is not exactly accurate. Still, actual calculations show: 1 - (34/38)(33/37)(32/36)(31/35) = 1 - 62.83% = 37.17% Meaning that our quick, two-second guesstimations are only 3% off of the real numbers. Close enough for government work, in other words. And, perhaps, close enough to help you plan your next move. (Of course, it should be noted that if he had drawn ten cards somehow, you'd have a false 100% certainty he'd have a Heritage Druid in hand. Still, with real numbers weighing in at a whopping 72.26%, that may not be far off from the truth!) IV. Final Exam! Let's see how well you've been paying attention, shall we? Question 13: You're playing Extended at a local get-together, and you're pretty sure your opponent is netdecking the Faeries deck that Denis Sinner used to take 7th place at Pro Tour: Berlin. Fortunately, you're a little familiar with the deck, and you know that it only runs three Cryptic Command It's late in the first game. Your Elves have been lucky, and you've managed to survive one Command already and still get him within alpha-strike distance. He has seven lands on the table; eight cards in the graveyard; a with two Faerie tokens; and only one card in hand (which you're pretty sure is a land he's bluffing with). However, to finish him on your next turn, you'll need to both play and activate the Mirror Entity sitting in your hand. On his (hopefully) last turn alive, your opponent finally resolves an Ancestral Vision . He gives you a cryptic stare (ha! I made a funny!), and passes the turn. What are the chances he's just ripped one of his two remaining Cryptic Command s and is waiting to counter your spell/tap down all your creatures? You're doomed. Real Answer: If you read the question right (and I hope you did), you should know that your opponent has only 42 cards left in his library before the crucial draws (7 lands + 8 cards in graveyard + 1 enchantment + 1 in hand + 1 suspended sorcery = 18). The Ancestral Visions + regular draw = 4 cards drawn this turn. Therefore: Guesstimations = (2/42) * 4 = (5/about 100) * 4 = about 20%. Actual chance you're toast = 1 - (40/42)(39/41)(38/40)(37/39) = 1 - .8165 = 18.35%. According to the math, you actually have a 4 out of 5 chance of taking him out this turn, assuming the Command was the only thing that could stop you. In real life, however, your opponent always draws the card he needs, Faeries > anything else, no one loves you, and you'll never move out of your parent's basement. Just accept the facts; it'll be easier that way. And that's all for today! Hopefully, these tips can come in handy the next time you put a deck together, sit down at your next gaming session, or have to explain the maths behind your favorite game. • To post a comment, please login or register a new account.
{"url":"http://www.mtgsalvation.com/articles/15551-baghdad-bazaar-a-statistics-primer?cookieTest=1","timestamp":"2014-04-25T08:36:46Z","content_type":null,"content_length":"75364","record_id":"<urn:uuid:f73fe489-cf7e-4477-873e-785c8929f287>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
[Kinematics] Don't know how to find value May 24th 2010, 04:38 AM #1 [Kinematics] Don't know how to find value Crate is being lowered by a crane at a constant speed of 2 m/s when a ball falls out of the hole in the crate, it hits the floor 10s before the crate. Find the height of the crate when the ball fell out of the crate. How do I find the height? All formulas gave me 20 as the height.. which is wrong. Crate is being lowered by a crane at a constant speed of 2 m/s when a ball falls out of the hole in the crate, it hits the floor 10s before the crate. Find the height of the crate when the ball fell out of the crate. How do I find the height? All formulas gave me 20 as the height.. which is wrong. let h = height of the crate when the ball drops out ball's initial speed = 2 m/s let t = time for the ball to hit the ground t+10 = time for the crate to hit the ground for the crate ... $<br /> h = 2(t+10)<br />$ for the ball (using 10 for g) ... $<br /> h = 2t + 5t^2<br />$ $<br /> 2t + 5t^2 = 2(t+10)<br />$ solve for t, then determine h Help please Thanks skeeter. That helped, I got the right answer. I got more questions I do not understand. Edit: I forgot to specify what I'm actaully stuck on, the 5th question, where the skater is 2 seconds earlier than the cyclist. I'm just wondering how to put it on the graph. So, I assume SKATER: $160=12*22$, is that right? A cyclist, passes the starting line $t=0$ The cyclist reaches the finish at $(t=22)$ What I have so far: 1- the distance of the starting from the finish line 2- the distance travelled after the finish line 3- the acceleration of the cyclist 4- the average speed of the cyclist from the starting to finish 5- A skater passed the starting line 2 seconds before cyclist. Skater is travelling at a constant speed of $11ms^-1$. Who wins the race and how much? Last edited by Cthul; May 28th 2010 at 04:27 AM. Thanks skeeter. That helped, I got the right answer. I got more questions I do not understand. Edit: I forgot to specify what I'm actaully stuck on, the 5th question, where the skater is 2 seconds earlier than the cyclist. I'm just wondering how to put it on the graph. So, I assume SKATER: $160=12*22$, is that right? A cyclist, passes the starting line $t=0$ The cyclist reaches the finish at $(t=22)$ What I have so far: 1- the distance of the starting from the finish line 2- the distance travelled after the finish line 3- the acceleration of the cyclist 4- the average speed of the cyclist from the starting to finish $10ms^-1$|v{avg}| = (260 m)/(22 s) 5- A skater passed the starting line 2 seconds before cyclist. Skater is travelling at a constant speed of $11ms^-1$. Who wins the race and how much? t = 260/11 = 23.6 s ... but he got a 2 second headstart, so the skater crosses the finish line at t = 21.6 s relative to when the cyclist started. next time, start a new problem with a new thread. next time, start a new problem with a new thread. Understandable. Thanks for the help. (Solved) May 24th 2010, 05:13 AM #2 May 27th 2010, 11:19 PM #3 May 28th 2010, 07:19 AM #4 May 28th 2010, 04:08 PM #5
{"url":"http://mathhelpforum.com/math-topics/146206-kinematics-don-t-know-how-find-value.html","timestamp":"2014-04-18T06:46:04Z","content_type":null,"content_length":"49282","record_id":"<urn:uuid:4fe1c360-d337-457d-9ae1-b24ee4aebbcc>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: use -nlsur- to deal with nonlinear IV estimation? [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: use -nlsur- to deal with nonlinear IV estimation? From "Austin Nichols" <austinnichols@gmail.com> To statalist@hsphsun2.harvard.edu Subject Re: st: use -nlsur- to deal with nonlinear IV estimation? Date Fri, 11 Jul 2008 12:01:11 -0400 Hau Chyi <hauchyi@gmail.com>: I assume this is related to "The effects of single mothers' welfare participation and work decisions on children's attainments" at http://www.stata.com/meeting/snasug08/abstracts.html which cites existing results (e.g. "children's number of years of schooling is relatively unresponsive to mothers' work and welfare participation choices" --is this a statement about statistical significance or substantive size of effect?). What do those preliminary results look like in an -esttab- table? What are the instruments Z? In general, it sounds like you have a regression of Y on X,W,E where the two variables W and E might be endogenous, so you are using IV. If you then interact W and E with some X, you have four endogenous variables W and E and XW and XE, and you can still run IV. But the issues involved in running IV are likely much more important to your interpretation of results than some weak endogeneity and some modest nonlinearity or interactions; see Stata Journal 7(4) pp. 465-541. On Fri, Jul 11, 2008 at 10:58 AM, Hau Chyi <hauchyi@gmail.com> wrote: > Dear Stata users, > I sent this email to the list but didn't see it showing up (I think > the problem is beause gmail default setting is formatted). Allow me to > try again. > Bear me if this question is somewhat long. > I have a model of a child's outcome (Y) which is determined by > time-invariant exogenous charactersitics (X) and years of childhood > welfare (W) and mother's work experiences (E): > Y = f(X) + gamma_1*W + gamma_2*E + gamma_3*f(X)*W + > gamma_4*f(X)*E + epsilon, > where f(X)=X'b is a linear combination of X. Since X is > time-invariant, people in the literature typically use it as a measure > of initial ability. In my model, marginal productivity of a mother's > decisions depend on child's initial ability. So, for example, dY/dW = > gamma_1 + gamma_3* f(X). > Given that the mother's deicisions, W and E, may be correlated with > her child's unobserved characteristics, I find a set of IV (Z) for W > and E. > The problem is this model is nonlinear in parameters (b, gamma). As a > result, I can't use standard IV method. Given a (somewhat) detailed > search, I realize Stata doesn't have a command for nonlinear IV > estimation. So the first question is, does anyone know if Stata has > one (I know from Stata archive that EViews has it). > What I figure I can do in Stata is a 3-stage type of method, where I > use -nlsur- to specify three equations: first one is the child's > attainments equation; the other two are linear projections of W and E > on (X and Z). > I want to ask if a nonlinear 3-stage approach like this behave > differently than typical 3-stage linear approach? > Thanks a lot! > Hau * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2008-07/msg00454.html","timestamp":"2014-04-19T12:01:01Z","content_type":null,"content_length":"8428","record_id":"<urn:uuid:21995fec-ff5c-4362-8ac7-14c17a383908>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability Problem February 7th 2013, 05:55 PM Probability Problem Attachment 26895 This problem was in the textbook. To find the probability of mu we have P(-0.96<Z<1.09)=0.69. So I reasoned the probability of at least for was (0.69)^4(0.31)+(0.69)^5. But the solution has 5* (0.69)^4(0.31)+(0.69)^5. Maybe it is just last, but I'm not sure why we are multiplying by five. February 7th 2013, 10:02 PM Re: Probability Problem Hey Hal2001. Hint: For independent A and B, P(A and B) = P(A)P(B) February 8th 2013, 02:59 AM Re: Probability Problem It was late :). Woke up this morning and realized the five is simply the binomial coefficient......
{"url":"http://mathhelpforum.com/advanced-statistics/212746-probability-problem-print.html","timestamp":"2014-04-20T02:09:36Z","content_type":null,"content_length":"4228","record_id":"<urn:uuid:7c44d69e-df4c-4753-981f-1d11deba6a68>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
Understanding Natural Frequency: Masses on Rods A common technique for analyzing structures is to treat them as lumped masses on a single rod, which simplifies the analysis. A single story building would then be modeled as a rod of the height of the structure with all of the weight of the structure located at the top of the rod. A multi-story building would be modeled by a rod with a weight located at each floor. This simplified lumped-mass model allows researchers to scale the structure for testing and make simple calculations as to how it will respond to an earthquake. Learning Objectives and Standards Links to the National Science Standards and to individual State Science Standards are available by using this link: Understanding Natural Frequency: Masses on Rods A common technique for analyzing structures is to treat them as lumped masses on a single rod, which simplifies the analysis. A single story building would then be modeled as a rod of the height of the structure with all of the weight of the structure located at the top of the rod. A multi-story building would be modeled by a rod with a weight located at each floor. This simplified lumped-mass model allows researchers to scale the structure for testing and make simple calculations as to how it will respond to an earthquake. Figure 1: Actual structure (left), five-story structural support system (middle), simplified lumped-mass model (right). The two important considerations in this model are the mass (weight) of the structure and the stiffness. The stiffness and the mass of the building control how the building will respond to an earthquake. The stiffness of a structure corresponds to how much it will deform when a force is applied to it. In the instance of earthquake design, the outside force is the ground motion that forces the structure to move side to side (horizontal force). The building walls or the structural frame are the main contributor to the stiffness. The stiffness of a structural member is determined by three parameters: 1. the length, 2. material property, and 3. the moment of inertia (I). The material property is called the Modulus of Elasticity (E) and can be thought of as how well the building material can resist force without bending. Two common building materials are steel and concrete. Steel is about eight times stiffer than concrete, meaning its modulus of elasticity is eight times larger than concrete. The moment of inertia can be thought of as an indicator of how much material there is. This makes sense because a larger wall will be stiffer than a smaller wall of the same material. An earthquake causes a building to vibrate, and engineers are interesting in studying the frequency of these vibrations. Frequency is how many times per second something will move back and forth. For example, if you stretch a spring it will contract back to its original position but will also overshoot this position and end up stretching back the other way and when it gets to the end of that side it will repeat the process stretching back in the other direction. This is similar to what an earthquake does to a building, because during an earthquake the ground moves back and forth which is like stretching the spring in one direction and then moving it back to the other repeatedly. Engineers are particularly interested in when the earthquake causes the structure to vibrate at its natural frequency. The natural frequency is a special rate of moving back and forth because at this frequency the building will shake uncontrollably. This uncontrollable vibration causes damage because the building bends more than it was designed for. Once damage starts, each back and forth movement of the building will be farther away from its straight shape. This experiment attempts to show what happens when a building is shaken at its natural frequency. To illustrate this concept, several small rods will be fixed to the shake table with different weights placed at different heights to show the effects of increasing weight and height. Figure 2: Experiment set up. Engineers use a simple equation to determine the natural frequency of a structure when it is modeled as a lumped mass. However, for this experiment, we will be observing exactly how the model behaves. Therefore, we can accurately calculate the natural frequency and then test to see if our calculations are correct. The equation for natural frequency of lumped-mass structure is: In this equation, f is the natural frequency and M is the mass of the weight at the top of the rod, which is found by dividing the weight (W) by gravity. M=W/g K is the stiffness of the rod and depends upon the length of the rod, Modulus of Elasticity (material property), and the moment of inertia (how much material). In this case, we can measure the length and the moment of inertia. Also, we know that we are using steel so we know the Modulus of Elasticity as well. Therefore, we can use the following equation to determine the stiffness (K). This means that to move the weight an inch to the side, you need to push on it horizontally with a force of 0.51 pounds. This is a small force and will be seen that our rods will move quite a bit back and forth. Because we know how much the metal weight weighs, we can calculate the natural frequency using the equation discussed earlier. The unit of the frequency is Hertz (Hz) which is the number of cycles (movements back and forth) per second. This calculation is for the tall single mass rod. This calculation of the natural frequency holds true for a single story building. Unfortunately, most buildings at risk of damage due to earthquakes are taller than one story and thus the engineer needs to find another method to analyze these buildings. Using the simplified model of a lumped mass, the engineer can imagine a multi-story building as a rod with multiple weights placed at the level of each floor. This arrangement allows the engineer to make simplified calculations of the natural frequencies. However, because there is more than one weight, there is more than one natural frequency. These new natural frequencies are related to the mode shapes of the structure. A mode shape is simply shape of a structure as it shakes back and forth. With a single weight there is only one mode, however, with two stories there are two modes. These are shown in Figure 3. Figure 3: Modes of vibration of a two-story building The number of mode shapes increases with the number of floors. However, the first mode shape is usually the most important because as the mode number goes up, so does the corresponding frequency of vibration. This means that the earthquake needs to shake back and forth faster and faster to get the structure to move in these ways. This is shown in the demonstration for the two-mass rod. Experimental Procedure: The lumped mass model of buildings simplifies not only the analysis of a building but the experimental testing as well. With minimal setup, the behavior of a building can be experimentally verified. In this experiment, a plate with four rods and various weights will be placed on the shake table and shaken to determine the natural frequencies of the various rods. This is analogous to several different sized buildings being shaken by an earthquake. The first step is to attach the masses on rods plate to the shake table using the two screws to securely fasten the plate to the table. Test to see if the plate is fastened to the table by shaking it with your hands. Next, start the shake table and allow it to calibrate using the procedure outlined in the shake table operations manual. Navigate to the sine mode (option 2 at main menu) and select displacement mode (option 1). Then before selecting any other option, move the displacement and frequency models and establish how quickly they change the values of either the displacement or the frequency. Then set the displacement to 12% and set the frequency to 0.1Hz. Press the # key to start the experiment and then slowly increase the frequency. While you are increasing the frequency, point out to the students how the different rods respond to the shaking, especially when you reach a natural frequency of one of the rods. When this happens, stop increasing the frequency and allow the students to see the increased movement of the rods. Keep going until you have hit all of the natural frequencies of the rods. The natural frequencies are labeled below on the picture. As a check, calculate the frequencies by counting the number of cycles that the systems undergo over a one minute interval after the structures are each individually deflected and released. The frequency would be the number of cycles that occurred during the interval divided by the time in seconds. To exit the displacement mode simply press the 0 key and you will abort to the main menu. Simply remove the rods from the table by unscrewing the plate from the table. Discussion Questions: Below are some discussion questions to ask the students and encourage discussion. • As the frequency increases, will the rods swing back and forth more? • No, the biggest deflection (movement back and forth) will happen when the rod is being shaken at its natural frequency. Once the frequency increases past this point the rods will stop shaking nearly as hard. • What effect will increasing the weight have on the natural frequency? • As the mass increases, the natural frequency will go down. This is an actual technique used in practice, where mass is added to a structure to lower the natural frequency out of the range of frequency in the earthquake. • What effect will increasing the height of the weight have on the natural frequency? • As the length increases, the natural frequency will go down. This can be seen by comparing the short single weight rod to the long single weight rod. Advanced Topics: Now that we have seen the effect that natural frequency and resonance can have on system, it is important to find a way to use this information to design safe structures. An important part of an earthquake record is the frequency that the ground moves. If this frequency is the natural frequency of a structure, the building will resonate, move back and forth uncontrollably, and cause large forces on the building. A unique trait of earthquakes is that the ground motion moves in repeating frequencies. These frequencies are determined by the local geology of the area and can be determined from historical records. Therefore, engineers try to figure out what these frequencies for a particular fault and region are so that they can make sure that buildings in those areas do not have natural frequencies that are the same as the earthquakes. If this can be avoided, buildings will generally survive the earthquake better because it will not have the extreme movement caused by resonance, due to shaking the building near its natural frequency. One tool that engineers use to accomplish this goal is to develop response spectra (plural of spectrum). A response spectrum can be thought of as a graph that has the natural frequency on the x-axis and the maximum displacement or acceleration on the y-axis. This corresponds to the maximum displacement or acceleration for a given natural frequency. This spectrum is created by testing an earthquake record with different masses on rods which have varying natural frequencies and determining the maximum acceleration and displacement. Then the information is plotted on a response spectrum which gives a visual representation of the worst case scenario for each natural frequency. An example is shown below: On this plot, the maximum acceleration is plotted on the y-axis and the natural period is plotted on the x-axis. The natural period is the time it takes for the mass to move back and forth once and is directly related to the natural frequency and is given by the following equation: Where T[n] is the natural period and f[n] is the natural frequency. Therefore, it is easy to switch back and forth between the natural period and the natural frequency. From this plot we see that the worst acceleration happens when the natural period is 0.5 seconds which corresponds to a natural frequency of 12.6 Hz. Therefore, if an engineer is designing a building in the area, it would be prudent to make sure that it does not have a natural frequency close to this value. • Which rods would be affected the most by this earthquake record based on the response spectrum? • All of the rods except the two story rod would fair well because their natural frequencies are significantly lower than the worst case. However, the two story mass-rod system has a second mode natural frequency of 10.4 Hz which is relatively close to the worst case scenario. Chopra, Anil. Dynamics of Structures: Theory and Applications to Earthquake Engineering. 2nd Edition, Prentice Hall, New Jersey, 2001. Some things to take away 1. Structures vibrate at a natural frequency that depends on E, I, L, boundary conditions of the structure and the distribution of masses. 2. The response of a structure is greatly amplified when it is excited (vibrated) at its natural frequency. Cite this work Researchers should cite this work as follows: • Catherine Ellen French (2011), "Understanding Natural Frequency: Masses on Rods," http://nees.org/resources/3609. BibTex | EndNote
{"url":"http://nees.org/resources/3609","timestamp":"2014-04-21T10:30:42Z","content_type":null,"content_length":"46160","record_id":"<urn:uuid:b8106be6-3cfc-40ab-ada5-7cb68142394e>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
First Idea; Best Idea... ...and the Worst Idea Creating a Culture of Questions was, by far, the most popular post on this blog until someone somewhere starting linking to the post on Exponent Rules. I think a natural follow up to the Culture piece would be with regards to establishing a classroom culture where feedback is given and accepted. The First Idea is the Best Idea and the Worst Idea The first time students hear this, I usually get, "Gosh, that's mean." But we discuss how the first person who puts forth an idea holds the best idea as there is nothing to which we can compare it. But using the same logic, this idea should be the worst. This assumes the flow of ideas that should follow. I think this encourages two important things: 1. "If I go first, it doesn't matter that my idea isn't fully formed." This student has established a floor on which each other student can stand and/or build. 2. "I can take someone's idea and help them make it better." The real work is done by the first follower. This student chips away at any imperfections and helps the first student refine her idea. Subsequent students then follow suit. What's this look like? Yesterday, we trying to determine the equation between the points below and students wanted the y-intercept. Students were using what they knew about slope to find other points and had to wrestle with the fact this particular line doesn't have a lattice point for a y-intercept. Once we were finished, I asked students to write down any questions they had. Student 1: "I have a comment." "Ok, what is it?" Student 1: "No matter which points we choose, the slope simplifies to the same thing." "Can you turn your observation into a question?" Student 1: "Will that happen all the time?" Now here is where it happens. "I can misunderstand [Student 1]'s question, can we make this more precise?" Student 2: "Will the slopes always simplify to the same thing?" Student 3: "Will the slopes between two points always simplify to the same thing?" "Are we only using two points?" Student 4: "Will the slopes between three points always simplify to the same thing?" Student 5: "Will the slopes between any two pairs of points always simplify to the same thing?" Student 6: "Are the slopes between any two pairs of points always equal?" "Are we really talking about any 4 points here?" Student 7: "Are the slopes between any two pairs of points on a line always equal?" 3 comments: Benjamin Clay Morris said... I like this. I can introduce it when using estimation180 as my warm-up. It's soooo hard to get them to feel comfortable throwing things out there. And I catch myself saying "don't just guess" all of the time. I need to do a better job of differentiating between when it's good (read: okay) to guess and when it's not. David Cox said... I kind of see guessing as part of the precision continuum. We want to avoid random guessing, of course. But, a continuum that goes something like: Gut Guess -> Guess with Reason -> Precise Method is always welcome. I think it's my job to take the gut guesses and help the student funnel his reason towards precision. Lisa Jenifer said... Your article about the First idea Best idea is really good article. You have include so many helpful points in your article. I must follow those points. I appreciated your work. You have done such a good job. Keep it up. Fun Math Worksheets
{"url":"http://coxmath.blogspot.com/2013/11/first-idea-best-idea.html","timestamp":"2014-04-18T18:10:59Z","content_type":null,"content_length":"84663","record_id":"<urn:uuid:453a31e1-0633-4cc7-b166-ef81d3ae3270>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
What Are The Odds When Rolling 3d6? posted Tuesday, May 25th 2010 by None of the Above You kids today have it easy. Back in the early days of D&D you rolled 3d6 for ability scores, placed in the order you rolled them. No “4d6 drop lowest” or “arrange as desired” – if you roll a 3 for Strength, your character has 3 Strength. Looks like you aren’t playing a fighter this time. Labyrinth Lord (free download here) is a Basic D&D retro-clone that still uses 3d6 for ability score generation. By third edition or fourth edition D&D standards, LL is quite deadly: poison kills you outright, and dragon’s breath weapons deal their current hit points in damage. But what are the odds of rolling each ability score on 3d6? I wrote up a quick python script like my 4d6 drop lowest calculator from 2006. Firstly, the average roll on 3d6 is 10.5, compared to 12.244 on 4d6 drop lowest. The “human average” +0 modifier in third edition D&D comes from this result. The odds of rolling an 18 are an unlikely 0.463%, or one in 216. The odds of rolling at least one 18 are 2.746%. In both cases this is about 3.5 times harder than third edition D&D’s 4d6 drop lowest. It’s around 20 times harder to get an 18 in a particular ability score, since early D&D required that you assign the ability scores in the order they’re rolled. Here’s the full chart of results of 3d6: Score Freq Percentage ----- ---- ---------- 3 1 0.463% 4 3 1.389% 5 6 2.778% 6 10 4.630% 7 15 6.944% 8 21 9.722% 9 25 11.574% 10 27 12.500% 11 27 12.500% 12 25 11.574% 13 21 9.722% 14 15 6.944% 15 10 4.630% 16 6 2.778% 17 3 1.389% 18 1 0.463% 1. MJ Harnish One minor correction: 10.5 is not the average roll on 3d6. It’s the mean of the most frequent (common) outcomes on the probability distribution (10 & 11 being the more likely results when you roll 3d6). The most interesting difference between 3d6 and 4d6 IMO, is that 3D6 generates a normal distribution while 4D6, discard the lowest, creates a negatively skewed distribution which means the probability of any outcome leans towards better scores creating more “heroic” characters. It’s important though to realize that the distribution and significance of modifiers associated with ability scores is quite different between 3rd edition & older versions of D&D. In 3rd (and later) edition, modifiers associated with high ability scores are much more important because they’re factored in to the CRs of encounters in a formulaic way. In earlier editions there’s almost no mathematical consideration of game balance (instead it was left up to DM’s discretion) and so high ability scores aren’t nearly as critical (though they certainly help!). 2. Michelle High ability scores didn’t buy you much in D&D either. Wizards found it easier to learn spells, and you could get a small XP bonus if you had a high score in your main ability. D&D was an odd game, for sure. I remember in one of our first sessions, one of my brothers rolled up a pair of halflings named Honest Bob and Friendly Bob (think used-car sales) who each had exactly 1 hit point. I don’t remember the names of his next characters, but he rolled them up about 15 minutes later, as the Bobs had been trundled off to Lemon Law Hell. 3. When Do Smarter Wizards Deserve More XP? « Jonathan Drain’s D20 Source: Dungeons & Dragons Blog [...] I said previously on the odds of each result when rolling 3d6, D&D was a lot more lethal back in 1981. Your character can start with as little as one hit [...] 4. gurps lover Just play gurps. Build your character and hope that you don’t have a killer gm (meaning he’s out to destroy your characters). 5. Jackalope @MJ Harnish The term “average” in mathematics can describe mean, median, or mode. The average IS 10.5 when calculated as a mean. The average roll is 10 or 11 when calculated as a median or mode. 10.5 is arguably the more useful way to think of it, because the easiest way to think of d6 is having 3.5 as the average roll. Average roll of 5d6? 3.5 x 5 = 17.5 6. Kurt I hate to be “that guy”, but rolling 3d6 only is a myth with 1E (unless your talking original White box…). On page 11 of the DMG there’s 4 different ‘official’ and recommended methods for rolling up characters that give you better odds than 3d6. Method 1 is 4d6 drop the lowest. Comments for this article are closed.
{"url":"http://www.d20source.com/2010/05/what-are-the-odds-when-rolling-3d6","timestamp":"2014-04-19T11:56:51Z","content_type":null,"content_length":"24343","record_id":"<urn:uuid:95dc3c9e-0ce4-4b16-9b9b-efffde575066>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
Does the Aharonov–Bohm Effect Exist? Find out how to access preview-only content June 2000 Volume 30 Issue 6 pp 893-905 Does the Aharonov–Bohm Effect Exist? Purchase on Springer.com $39.95 / €34.95 / £29.95* Rent the article at a discount Rent now * Final gross prices may vary according to local VAT. Get Access We draw a distinction between the Aharonov–Bohm phase shift and the Aharonov–Bohm effect. Although the Aharonov–Bohm phase shift occurring when an electron beam passes around a magnetic solenoid is well-verified experimentally, it is not clear whether this phase shift occurs because of classical forces or because of a topological effect occurring in the absence of classical forces as claimed by Aharonov and Bohm. The mathematics of the Schroedinger equation itself does not reveal the physical basis for the effect. However, the experimentally observed Aharonov–Bohm phase shift is of the same form as the shift observed due to electrostatic forces for which the consensus view accepts the role of the classical forces. The Aharonov–Bohm phase shift may well arise from classical electromagnetic forces which are simply more subtle in the magnetic case since they involve relativistic effects of the order v^ 2 /c^ 2 . Here we first review the experimentally observable differences between phenomena arising from classical forces and phenomena arising from the quantum topological effect suggested by Aharonov and Bohm. Second we point out that most discussions of the classical electromagnetic forces involved when a charged particle passes a solenoid are inaccurate because they omit the Faraday induction terms. The subtleties of the relativisitic v^ 2 /c^ 2 classical electromagnetic forces between a point charge and a solenoid have been explored by Coleman and Van Vleck in their analysis of the Shockley–James paradox; indeed, we point out that an analysis exactly parallel to that of Coleman and Van Vleck suggests that the Aharonov–Bohm phase shift is actually due to classical electromagnetic forces. Finally we note that electromagnetic velocity fields penetrate even excellent conductors in a form which is unfamiliar to many physicists. An ohmic conductor surrounding a solenoid does not screen out the magnetic field of the passing charge, but rather the time-integral of the magnetic field is an invariant; this time integral is precisely what is involved in the classical explanation of the Aharonov–Bohm phase shift. Thus the persistence of the Aharonov–Bohm phase shift when the solenoid is surrounded by a conductor does not exclude a classical force-based explanation for the phase shift. At present there is no experimental evidence for the Aharonov–Bohm effect. 1. R. P. Feynman, R. B. Leighton, and M. Sands, The Feynman Lectures on Physics, Vol. 3 (Addison-Wesley, Reading, MA, 1965), p. 1–1. 2. G. Matteucci and G. Pozzi, Phys. Rev. Lett. 54, 2469 (1985). 3. A. W. Overhauser and R. Colella, Phys. Rev. Lett. 33, 1237 (1974). R. Colella, A. W. Overhauser, and S. A. Werner, Phys. Rev. Lett. 34, 1472 (1974). 4. Y. Aharonov and D. Bohm, Phys. Rev. 115, 485 (1959). 5. See the review by S. Olariu and I. I. Popescu, Rev. Mod. Phys. 57, 339 (1985) or the review by M. Peshkin and A. Tonomura, “The Aharonov- Bohm Effect,” in Lecture Notes in Physics, No. 340 (Springer, New York 1989), which list hundreds of papers on the subject. 6. M. P. Silverman, More Than One Mystery: Explorations in Quantum Interference (Springer, New York 1995), Chapter 1. M. P. Silverman, And Yet It Moves: Strange Systems and Subtle Questions in Physics (Cambridge University Press, New York 1993), Chapter 1. 7. D. J. Griffiths, Introduction to Quantum Mechanics (Prentice- Hall, Englewood Cliffs, NJ, 1995) writes on page 349, “What are we to make of the Aharonov- Bohm effect? Evidently our classical preconceptions are simply mistaken: There can be electromagnetic effects in regions where the fields are zero.” 8. T. H. Boyer, Nuovo Cimento B 100, 685 (1987). 9. T. H. Boyer, Phys. Rev. D 8, 1667 (1973). 10. W. Shockley and R. P. James, Phys. Rev. Lett. 18, 876 (1967). 11. S. Coleman and J. H. Van Vleck, Phys. Rev. 171, 1370 (1968). 12. T. H. Boyer, “Classical electromagnetism and the Aharonov- Bohm phase shift,” Found. Phys. 30 (6) 2000 (following article). 13. Indeed, it turns out that this same magnetic force on a magnetic moment gives exactly the displacement which is required to account for the Aharonov- Casher effect as a classical lag effect; see T. H. Boyer, Phys. Rev. A 36, 5083 (1987). 14. H. Erlichson, Am. J. Phys. 38, 162 (1976). 15. B. Lischke, Z. Physik 239, 360 (1970). Although a superconductor expells the magnetic field lines of a time-independent magnetic field in the Meissner effect, a superconductor acts similarly to a normal metal for high frequency fields. 16. The erroneous point of view regarding the role of conducting materials appears on p. 426 of the review by Olariu and Popescu listed in Ref. 5, and on p. 123 of the review by Tonomura. 17. A. Tonomura, N. Osakabe, T. Matsuda, T. Kawasaki, J. Endo, S. Yano, and H. Yamada, Phys. Rev. Lett. 56, 792 (1986). 18. T. H. Boyer, “Understanding the penetration of electromagnetic velocity fields into conductors,” Am. J. Phys. 67, 954 (1999). Does the Aharonov–Bohm Effect Exist? Cover Date Print ISSN Online ISSN Kluwer Academic Publishers-Plenum Publishers Additional Links Industry Sectors Author Affiliations □ 1. Department of Physics, City College of the City University of New York, New York, New York, 10031
{"url":"http://link.springer.com/article/10.1023/A:1003602524894","timestamp":"2014-04-21T04:59:42Z","content_type":null,"content_length":"47581","record_id":"<urn:uuid:5c1a3cda-7f43-492b-8994-f52815c05500>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
is Hasse principle a birational invariant? Take the 2-minute tour × MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required. ...it is probably a very trivial question, but I am a beginner in arithmetics. I'm not sure how well-posed this questions is. A particular variety has points or it doesn't, so "the Hasse principle holds" is the sort of thing one says about a collection of varieties characterized in some way. For a particular such collection, it makes sense to ask if it is closed under birational isomorphism, but I'm not sure what the question in the title could mean precisely as stated... Ramsey Sep 27 '12 at 17:51 I guess that the answer to my question is very likely to be NO. I assume that for the variety (say projective) A the Hasse principle holds. The projective variety B is birationally equivalent to A, does the Hasse principle hold for B? IMeasy Sep 27 '12 at 17:57 I guess my point is that, for a fixed variety $A$, the statement that "the Hasse principle holds for $A$" has little meaning. This variety either has a rational point or it does not. Ramsey Sep 27 '12 at 20:06 OK sorry. I see what you mean, I was being a little sloppy, you are right. Let's put it this way: say I have a class of varieties for which the principle holds: I pick up one V and find another variety W - not belonging to the same class- that is birationally equivalent to V. Do I expect W to belong to a second class of varieties for which the principle holds? IMeasy Sep 27 '12 at 20:11 Saying that for a variety X over a global field k the HP holds does have meaning: saying HP holds for X means that the implication ``X has local points everywhere'' $\Rightarrow$ `X has a global point'' is true. The only way in which it could fail if X has local points everywhere but does NOT have a global point. In this way, HP can be said to hold or not hold for any set or class of varieties, including singleton sets. René Sep 27 '12 at 21:14 show 3 more comments In this generality, the answer is no. The projective curve $X$ given by $2y^2z^2 = x^4 - 17z^4$ over the rationals satisfies the HP, since it has local points everywhere (the affine part $z \neq 0$ is given by $2y'^2=x'^4-17$, which is the famous Reichardt-Lind equation which is known to be everywhere locally, but not globally, soluble) and it has the unique rational point $(0:1:0)$. However, this point is singular: so now consider the normalization $X'$ of $X$: it has two points above $(0:1:0)$, neither of which is rational. By the parenthetical remark, $X'$ has local points everywhere, but it doesn't have rational points: therefore $X'$ does not satisfy the HP. Also, $X'$ is birational to $X$, being its normalization. If you restrict to smooth varieties however, the answer is yes: by Lang-Nishimura, if $X$ and $X'$ are smooth varieties over any field $k$ that are birational to each other, then $X$ has a $k$ point iff $X'$ does. nice example, thank you! IMeasy Sep 28 '12 at 8:42 I think for the OP's benefit I should also clarify that for Lang-Nishimura you also need properness. Indeed, given a smooth variety $X$ with a single rational point $x$, the variety $Y=X \setminus x$ is birational to $X$ yet has no rational points. Daniel Loughran Sep 28 '12 at 9:31 Yes, of course. Thank you, Daniel! René Sep 28 '12 at 13:53 add comment
{"url":"https://mathoverflow.net/questions/108270/is-hasse-principle-a-birational-invariant","timestamp":"2014-04-21T05:04:09Z","content_type":null,"content_length":"59151","record_id":"<urn:uuid:2d715e32-474a-4f32-8996-bf42f22bf326>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
Need your input about difficult program? 02-12-2002 #1 Registered User Join Date Jan 2002 Need your input about difficult program? I do not understand how to get started with my program. I need to write a program that calculates the sin of x with a truncation error of less than 0.000001. The truncation error is the absolute value of the difference between the previously calculated sine of x and the current calculated sine of x. For each iteration of the series I need to print out the iteration number, the calculated value, truncation error, and the C math library calculated sine of x. I really need help getting started. You say that you need the "calculated value" and the "C Math library value of the sine of x" ? Both values will be one and the same won't they ?? Just trying to help you clarify the problem better. Im assuming your gonna be using that formula 1 - x/1! + x^2/2! -... er whatever it is. Like I always say, the easiest way to start a project is to just start it (well Ive never really said that before, but whatever). What I mean is dont think too much (especially on somthing this small), just write code to do exactly what the program should do. Im again assuming the user will enter some x, so thats where you start... // variable declarations // prompt user and input value // calc sin x return 0 just fill in the blanks. The formula you need: sin x = x^1/1! - x^3/3! + x^5/5! - x^7/7! + ... + (-1)^n (x^(2n+1)/(2n+1)!) A serie is a sum. So you need to setup a variable to contain the sum. Then you need to use iteration to approximate sin x. x = user input; max_iterations = user input; sinx_approx = 0; sinx_library = sin (x); for n = 0 to max_iterations sinx_approx = sinx_approx + (-1)^n (x^(2n+1)/(2n+1!)); error = abs (sinx_library - sinx_approx); Hope this helps you a little further. 02-12-2002 #2 02-12-2002 #3 Registered User Join Date Jan 2002 02-13-2002 #4 Join Date Aug 2001 Groningen (NL)
{"url":"http://cboard.cprogramming.com/c-programming/10837-need-your-input-about-difficult-program.html","timestamp":"2014-04-16T13:44:32Z","content_type":null,"content_length":"50319","record_id":"<urn:uuid:df6141b6-93ee-4eaf-81f8-6f324c8e86e0>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
Le Monde puzzle [#14.2] I received at last my weekend edition of Le Monde and hence the solution proposed by the authors (Cohen and Busser) to the puzzle #14. They obtain a strategy that only requires at most 19 steps. The idea is to start with a first test, which gives a reference score S[0], and then work on groups of four questions, whose answers can be found in at most three steps. For instance, starting with x [1],…,x[25], the second test uses (1-x[1]),(1-x[2]),x[3],…,x[25], and changes the score S[0 ]by -2,2 or 0. In the first two cases, this determines y[1],y[2 ]and it suffices to use x[1],x[2],(1-x[3]), (1-x[4]),x[5]…,x[25], to find y[3],y[4 ]in one or two steps. If the score S[0 ]does not change, considering x[1],(1-x[2]),(1-x[3]),(1-x[4]),x[5]…,x[25], and then maybe x[1],(1-x[2]),x[3],(1-x[4]),x [5]…,x[25], produces again the value of y[1],…,y[4]. If one repeats the algorithm one group of four after another, there are six such groups and the maximal number of step is since the final answer y[25] is known by deduction from S[0]. An additional improvement not mentioned in the journal is achieved in checking after any change whether or not the new score is equal to zero. (In both my solution and theirs, there is an extra step to propose the correct solution, which means in my case an exact average of steps equal to 25, by the geometric argument.) For the current solution, here is an R code that evaluates the distribution of the number of steps: for (t in 1:10^5){ for (j in 1:6){ if (abs(Delta-Delta0)==2){ The fit by a binomial is rather poor, but this is not surprising given the two-stage decision. In any case, this does better than my earlier solution! One Response to “Le Monde puzzle [#14.2]” 1. [...] puzzle of last weekend in Le Monde was about finding the absolute rank of x9 when given the relative ranks of x1,….,x8 and the [...]
{"url":"http://xianblog.wordpress.com/2011/05/15/le-monde-puzzle-14-2/","timestamp":"2014-04-20T08:14:56Z","content_type":null,"content_length":"38588","record_id":"<urn:uuid:0700a583-c95f-4ec0-8bf2-47495bf0432a>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability of defective circuit card in the sample 1. 144390 Probability of defective circuit card in the sample Printed circuit cards are placed in a functional test after being populated with semiconductor chips. A lot contains 140 cards, and 20 are selected without replacement for functional testing. a.) If 20 cards are defective, what is the probability that at least 1 defective card is in the sample? b.) If 5 cards are defective, what is the probability that at least 1 defective card appears in the sample? The solution shows how to calculate the probability of defective circuit card in the sample.
{"url":"https://brainmass.com/statistics/probability/144390","timestamp":"2014-04-19T17:02:51Z","content_type":null,"content_length":"25425","record_id":"<urn:uuid:6ae5838a-4137-4bb0-9799-912df5a826e2>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
Workshops & Conferences Spin Glasses: What's the Big Idea? (Is There One?) Host Department: Physics Date: 04/02/2014 Time: 4:00 PM - 5:00 PM Location: 340 West Hall The aim of this talk is to introduce the subject of spin glasses, and more generally the statistical mechanics of quenched disorder, as a problem of general interest to physicists and mathematicians from multiple disciplines and backgrounds. Despite years of study, the physics and mathematics of quenched disorder remains poorly understood, and represents a major gap in our understanding of the condensed state of matter. While there are many active areas of investigation in this field, I will narrow the focus of this talk to our current level of understanding of the low-temperature equilibrium structure of realistic (i.e., finite-dimensional) spin glasses. I will begin with a brief review of the basic features of spin glasses and what is known experimentally. I will then turn to the problem of understanding the nature of the spin glass phase --- if it exists. The central question to be addressed is the nature of broken symmetry in these systems. Parisi's replica symmetry breaking approach, now mostly verified for mean field spin glasses, attracted great excitement and interest as a novel and exotic form of symmetry breaking. But does it hold also for real spin glasses in finite dimensions? This has been a subject of intense controversy, and although the issues surrounding it have become more sharply defined in recent years, it remains an open question. I will explore this problem, introducing new mathematical constructs such as the metastate along the way. The talk will conclude with an examination of how and in which respects the statistical mechanics of disordered systems might differ from that of homogeneous systems.
{"url":"http://www.lsa.umich.edu/vgn-ext-templating/v/index.jsp?vgnextoid=3f06c37f7a3c2410VgnVCM100000c2b1d38dRCRD&vgnextchannel=9a34f78dcbd8f210VgnVCM100000c2b1d38dRCRD&vgnextfmt=detail","timestamp":"2014-04-18T03:33:03Z","content_type":null,"content_length":"17282","record_id":"<urn:uuid:e49a59b2-8b58-4678-a570-16ff556c8e4b>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with UnivariateTwinAnalysis_MatrixRawConACE.R Wed, 10/17/2012 - 16:06 The documentation on mxCI is The documentation on mxCI is not too bad: mxCI(reference, interval = 0.95, type=c("both", "lower", "upper")) A character vector of free parameters, mxMatrices, mxMatrix elements and mxAlgebras on which confidence intervals for free parameters are to be estimated, listed by name. So once you have an algebra which computes ACE.A/ACE.V (say named stdA) you can request Here std stands for standardized, not sexually transmitted disease :) Access to the parameter estimates from a fitted model can be had in a couple of ways. One is fittedACE <- mxRun(ACE) and another is via the summary sumACE <- summary(fittedACE) Yet another way is to use the wonderful function omxGetParameters. From ?omxGetParameters omxGetParameters(model, indep = FALSE, free = c(TRUE, FALSE, NA), fetch = c('values', 'free', 'lbound', 'ubound', 'all')) a MxModel object fetch parameters from independent submodels. fetch either free parameters (TRUE), or fixed parameters or both types. Default value is TRUE. which attribute of the parameters to fetch. Default choice is ‘values’. Wed, 12/05/2012 - 13:22 I forgot to say: thanks for I forgot to say: thanks for your helpful comments! I have gotten everything working now. Kind regards
{"url":"http://openmx.psyc.virginia.edu/thread/1652","timestamp":"2014-04-18T18:49:00Z","content_type":null,"content_length":"28074","record_id":"<urn:uuid:4b883a71-a12a-4957-b55b-1ed099803510>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Can you solve this question? Replies: 2 Last Post: Oct 21, 2011 8:30 AM Messages: [ Previous | Next ] Frank Can you solve this question? Posted: Oct 19, 2011 2:08 PM Posts: 1 Registered: 10/19/11 Can you solve this question? Date Subject Author 10/19/11 Can you solve this question? Frank 10/19/11 RE: Can you solve this question? Ben Brink 10/21/11 Re: Can you solve this question? Angela Richardson
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2307045","timestamp":"2014-04-19T00:35:04Z","content_type":null,"content_length":"19195","record_id":"<urn:uuid:5004ac19-a461-4967-80e6-46a47a1c0450>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
He now has a WAR of 9.1. Click here to see the leaders . Trout is tied for 94th. Actually 94th-103rd. He also is 2nd only to Babe Ruth's 1923 season in WAR per game (.0858). Click here to see that list. If Trout could get just .0429 WAR per game over the last 33 games, he would add 1.4 WAR. He would finish with 10.5. That would be the 24th highest ever. Click here to see the rookie list. . I think Joe Jackson has the previous best rookie season at a WAR of 9.0. Through Aug. 30, Trout has slipped to 8.9 WAR Through Aug. 31, Trout is up to 9.2 WAR Part 2 is below with a link to part 1. I used four stats: one for fielding, one for speed (based on triples), one for hitting for average and one for hitting for power (ISO). The four stats were multiplied by each other and I calculated their geometric mean. Players had to have 5000+ career PAs. They are relative to the league average and there is a park adjustment. I have already used both a triples stat and then a SB stat to measure speed. Here I combined them. I used the square root of the SB stat (see earlier posts for how that worked). Then that was added to the triples stat and I simply took the average of the two. One reason I took the square root of the SB stat is that it had a very large range and this brings it more in line with the other stats. This is actually similar to what Bill James does with his "speed score." One of his stats in that involves SBs and he takes the square root of it. He also uses triples. But I don't use fielding range factor, partly because it would be hard to get it for each guy but also I am already using fielding here. I don't uses CS or GIDP since we don't have them for all of history. I don't use runs scored since that depends on your teammates. So here is the top 25. Here are the numbers for Mays. Recall that they are relative to the leage average: Fielding: 1.13 Isolated Power: 1.89 Average: 1.14 Speed: 1.49 Mays got a slight bumb from the park adjustment (an increase of 2%). For the above replacement level version here are the leaders (this was done the same way as in the first two posts). Mays has a solid lead. He has been the best or near the top so far in all the ways I have looked at this. To see Part 1, go to Who Was The Greatest "All-Around" Player Ever? Another Quantitative Attempt. I used four stats: one for fielding, one for speed (based on triples), one for hitting for average and one for hitting for power (ISO). There was a park adjustment, too. The four stats were multiplied by each other and I calculated their geometric mean. Players had to have 5000+ career PAs. In this case, I used a stat for stolen bases for speed instead of the one for triples. Because of that, the isolated power stat here was not adjusted for triples like I did the first time. The stolen base stat is SBs divided by number of times reaching first base (singles + walks + HBP). That was then divided by the league average. The next post I do will combine triples and SBs to make a speed stat. I think that neither a SB stat nor a triples stat are good enough by themselves. So here is the new set of leaders. The 1.57 for Bobby Bonds means that the geometric average of the 4 stats was 1.57. If, for example, he had been 57% better than the league average in each of the 4 stats, his geometric average would have been 1.57. His stats were actually Fielding: 1.08 Isolated Power: 1.63 Batting Average: 1.03 SBs: 3.41 He was 8% better at fielding. His SB rate was 3.41 times the league average. Willie Mays once again does extremely well (Bobby Bonds was 23rd before). He was 3rd in the other measure using 3Bs for speed. DiMaggio was first. But here he fell to 449th. His stealing rate was only .37. Rickey Henderson rose from 284th place since he did not have very many triples (he was just about average). So some players moved up or down in the rankings quite a bit. The SB rate had a bigger range than the SB rate (although the correlation between the two rates was about .56). It went from 5.35 down to 0.019. The triple rate ranged from 3.85 down to 0.197. I also calculated an "above replacement" level value, using the same method as last time. Here is the top 25. I don't think the number has any real meaning or interpretation. This just allows us to take into account longevity and the typical decline in performance that we normally see. Mays was 1st in this one last time. Mays probably would gain on Henderson if I could split fielding into throwing and catching (which I will try to do at some point). Would it be enough to pass Henderson? If so, I think it would strengthen the case for Mays being the greatest all around player ever since he ranks so high using either SBs or 3Bs for speed. According to Baseball Reference, since 1876, there have been 499,590 errors made. So only 410 to go. With 30 teams and about 40 games left for each, there are close to 1200 games left in the season. With a rate of .5-.6 errors per game, there should be around 600 more errors this season and we will pass the 500,000 mark. There have also been 98,790 hit batters, so that could be passed in a couple of years. Update: Jose Reyes made the 500,000th error on Sept. 15th. See Marlins' Jose Reyes fumbles way into history with baseball's 500,000th error by Jeff Passan. It also has a video clip. I tried to compile a list of the best rookie seasons by WAR from Baseball Reference. See my earlier post from August 1, Does Mike Trout Already Have One Of The Top Ten Rookie Seasons Ever? Trout now has a WAR of 8.6. My list would have him 2nd only to Joe Jackson in 1911 (9.0). With almost 40 games left, it seems like he will pass that. He has gained 2.0 in WAR in the last 19 games. He leads the 2nd best position player in the league, Robinson Cano, by 3.0 (Cano has 5.6). The last time a position player was 3.0 or more better for a whole season than the 2nd best position player was in 2002 when Barry Bonds had 11.6 and Jeff Kent had 6.9. That is what the announcer said on ESPN's "Baseball Tonight" last night (it was not one of the ex-player analysts who said this). Here is how Ramirez has done in OPS+ in each year of his career: His career OPS+ is 92. His career OBP is .317 and this year it is .282. He is not making up for it with base stealing. He does have 14 SBs with only 5 CS this year. But that is of marginal importance and in his career his SB-CS is 61-32. He has 24 HRs and 39 SBs in 99 games so far. I called up all the seasons with 20+ HRs and 35+ SBs using the Lee Sinins "Complete Baseball Encyclopedia." 46 players have done this at least once. Then I went through all of those individual seasons and looked at their monthly & daily splits at Retrosheet & gamelogs at Baseball Reference. Here the only players I think have done this before: Cesar Cedeno-1974 Joe Morgan-1976 Rickey Henderson-1985, 1990 Eric Davis-1986, 1987 Barry Bonds-1990 Let me know if you think I missed anyone. Trout has a good chance to get 30+ HRs and 50+ SBs. Only two players have ever done that, Barry Bonds and Eric Davis, each once. 10 games ago, I reported that Trout had a WAR per game of .0859 (he had 7.3 WAR in 85 games). He now has 8.1 WAR in 95 games or .0853 per game. That is still 2nd only to Ruth's .0901 in 1923 (since 1900). Trout has .08 WAR per game over his last 10 games. If a player did that for a whole season, it would still be the 7th best year since 1900. See Mike Trout's WAR Per Game Is Historically One Of The Best (So Far) He has had about .043 WAR per game this year (assuming 4.5 PAs per game). So over the next 45 games, it would be 1.94. If Gregor Blanco takes over, he would get .76 WAR (he had been averaging about .017 WAR per game). So that is a loss of 1.18 wins. Of course, this is only based on this year's stats. Blanco was not in the majors last year and as of now, his career WAR per game is .01. Last year Cabrera had .026 WAR per game. It is hard to say what the real talent level of each guy is. But if I use the latter 2 numbers, it will cost the Giants .72 wins. If I average that with the 1.18, we get .95 wins. So still in the range of about one less win. Some players are said to have 5 tools: they can hit for average, hit for power, run the bases, catch the ball and throw it. It might seems simple to come up with a rating for this-you could just add, say HRs and SBs. But a player who hits 500 HRs and has 0 SBs has less "all-around" ability than a guy that hits 200 HRs and steals 200 bases. If you added HRs + SBs, the first guy looks better. So I will use a geomtric mean (Dan Levitt and Jim Baker made helpful comments-any mistakes, of course, are due solely to me). Here is what Wikipedia says about it: "A geometric mean is often used when comparing different items- finding a single "figure of merit" for these items- when each item has multiple properties that have different numeric ranges." In fact, some of the values I will be using will have much larger ranges than others. Here are the measures I will use: Fielding: I use FRAA from the Lee Sinins Complete Baseball Encyclopedia (fielding runs above average). Sinins got them from Michael Humphreys' recent book Wizardry. I don't have a way of easily breaking this down into throwing (arm) and catching (glove), but I have something in mind that I will use in a future post. I converted it into a rate relative to the league average (like 1.10 means that you saved 10% more runs than average). Hitting for Average: I will use average relative to the league average. Hitting for Power: I will use isolated power relative to the league average. Speed: I use a player's 3B/(2B + 3B) relative to the league average. This is an idea from Voros McCracken. The idea is that it is a % of the time you hit an extra-base hit that you get a 3B. The faster players will have a higher rate here. It is relative to the league average. Handedness is adjusted for. Triples were taken out of the relative isolated power calculation (for both the player and the league average) since they get used in the speed rating. Actually, for isolated power they were considered doubles. Both the speed and power ratings had ranges far beyond those of fielding and hitting for average. A park adjustment was also made to each player's overall rating (this was based on hitting only). The four ratings, each relative to the league average, are multiplied times each other and then I take the 4th root of that number (the geometric mean). I used all players who had 5000+ PAs through 2011. The table below shows the top 25. Some players are not too surprising while others are. One possible weakness is that some players played long careers and in their later years, the averages and rates were dragged down. So I also tried to create a rating for "all-aroundness" above replacement level. If the average level would be 1.00, then replacement could be .8 (what I used). This might be reasonable because 80% of 81 is about 65 wins, an acceptable replacement level (if we want to go down to say 54 or 52 wins, we could assume that the replacement level pitchers would get you the rest of the way there-remember, these are only position players and their hitting, fielding and running). So how was this calculated? Mays has 1.385. That minus .8 gets us .585. That is how far above replacement Mays was, qualitatively. To quantifiy this, I assumed a 700 plate appearance levl for a full season. Mays had about 17.85 full seasons. So that times .585 gets us 10.44. He is the leader here. Alot of all-time greats here. But Steve Finley really jumps out as a surprise. I hope to post more in the next week or two on some of the assumptions I made and how I came up with some of the measures. Other issues include using SBs instead of 3Bs for speed (Rickey Henderson would rank higher then), using HR% instead of isolated power, how park affects play a role in the 3B/(2B + 3B) rate, the handedness adjustment for that stat, including the ability to get a walk as a "tool" and the earlier mentioned issue of breaking up the fielding rating into throwing and catching (it is possible that someone got a good fielding rating and was not really well balanced, that is, it was all due to their arm or their glove). I also need to explain how I turned FRAA into a rate and how I adjusted for handedness in 3B/(2B + 3B). Same for how I calculated and used the park effect BR means Baseball Reference and FG means Fangraphs. I don't know why they are so different on WAR. I also don't know if FIP, xFIP, Sierra and tERA are park adjusted. It looks like BR gives Sale about a 105 for pitching in U.S. Cellular Field and Strasburg does not seem to get adjusted much, so his park must be neutral. So for the league %'s on FIP, xFIP, Sierra and tERA, we could multiply what Sale has by about .95. He would still be higher than Strasburg, but a little closer. Through yesterday's game, he had 7.3 WAR in 85 games. I found all the seasons with 7.3 or more WAR from 1901-1912 at Baseball Reference. Then ranked them by WAR per game. Trout is 2nd only to Ruth's 1923 season. Then I found the top 200 seasons in WAR from 1871-1900 and ranked them by WAR per game. The table below shows the top 10 from the two searches. At Baseball Reference, Barney (Cubs) has 3.3 defensive WAR while Ryan (Mariners) has 3.2. With about a third of the season left, they are on a pace to get about 4.8-4.9 defensive WAR. Here are the top 4 seasons ever: 1. Terry Turner-5.4-1906 2. Art Fletcher-5.1-1917 3. Mark Belanger-4.9-1975 4. Ozzie Smith-4.7-1989 So far this year, after Barney and Ryan, the next highest defensive WAR belongs to Yunel Escobar of the Blue Jays with 2.2 Using the Baseball Reference Play Index, I found the best seasons for anyone in their 1st, 2nd, 3rd, or 4th seasons. If a player had 130 or more career ABs before a given season, I no longer considered him a rookie (I think that is the rule and I think there is something also about days on the roster but I don't know how to find that). So here are what I think are the top 10 seasons ever for a rookie position player in WAR. Please let me know if you think anyone is missing. Notice how few games Trout has played, so it looks like he could pass Jackson. MJ Lloyd of HaloHangout has a good post on how Trout compares to other great 20 year old players. See Mike Trout Is In Elite Company. AROD has the highest with 9.2 in 1996. So Trout could break that,
{"url":"http://cybermetric.blogspot.com/2012_08_01_archive.html","timestamp":"2014-04-19T14:43:56Z","content_type":null,"content_length":"104977","record_id":"<urn:uuid:7a1c5b42-a766-4cc3-86c9-a2e3776722ac>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: FW: August administration of Algebra 2 trig Regents exam Replies: 2 Last Post: Apr 27, 2011 5:12 PM Messages: [ Previous | Next ] FW: August administration of Algebra 2 trig Regents exam Posted: Apr 27, 2011 12:14 PM From: NYSED EMSCASSESSINFO [EMSCASSESSINFO@MAIL.NYSED.GOV] Sent: Monday, April 25, 2011 12:53 PM To: Luisa Duerr Subject: Re: August administration of Algebra 2 trig Regents exam Good Afternoon, Mrs. Duerr, Thank you for writing to the assessment Policy, Development and Administration Office. That is correct. The Algebra 2/Trigonometry exam will not be offered during the August 2011 administration. Eileen Becker Office of State Assessment, Communications>>> Luisa Duerr <DuerrL@binghamtonschools.org> 4/25/2011 12:15 PM >>> Hi. In August 2010, I read a state memo which stated there will be a reduced number of Regents exam that will be offered in summer school 2011. One exam of particular interest to me is the Algebra 2 Trig Regents exam. Will this exam still not be offered in August 2011? Thank you for your timely response. Mrs. Luisa Duerr Binghamton High School Date Subject Author 4/27/11 FW: August administration of Algebra 2 trig Regents exam Luisa Duerr 4/27/11 Re: FW: August administration of Algebra 2 trig Regents exam Ellen Falk
{"url":"http://mathforum.org/kb/message.jspa?messageID=7441009","timestamp":"2014-04-20T04:22:27Z","content_type":null,"content_length":"19119","record_id":"<urn:uuid:cadb525f-d744-4191-8342-6dff64c19896>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
Reality Conditions Math Jokes Via the a link in a Quantum Pontiff thread , I found an excellent collection of mathematical humour , which (amazingly) includes many jokes I had never seen or heard before, and many of them good ones! (For highly nerdy values of "good".) For example, the hyperbolas joke made me laugh out loud: Two hyperbolas were sitting on a plane. The first hyperbola says to the other "I sure wish I could oscillate." The second one replies, "Holy crap! A talking hyperbola!" Fooled you there, didn't I? I bet you were expecting some atrocious mathematical pun instead of a variation of the Great Muffin Joke . Me too, and that's why I laughed. On a more conventional note, the site includes great lists of "...walks into a bar" jokes , of dubious proof methods ), and my favourite math joke ever The cocky exponential function e^x is strolling along the road insulting the functions he sees walking by. He scoffs at a wandering polynomial for the shortness of its Taylor series. He snickers at a passing smooth function of compact support and its glaring lack of a convergent power series about many of its points. He positively laughs as he passes x for being nondifferentiable at the origin. He smiles, thinking to himself, "Damn, it's great to be e^x. I'm real analytic everywhere. I'm my own derivative. I blow up faster than anybody and shrink faster too. All the other functions suck." Lost in his own egomania, he collides with the constant function 3, who is running in terror in the opposite direction. "What's wrong with you? Why don't you look where you're going?" demands e^x. He then sees the fear in 3's eyes and says "You look terrified!" "I am!" says the panicky 3. "There's a differential operator just around the corner. If he differentiates me, I'll be reduced to nothing! I've got to get away!" With that, 3 continues to dash "Stupid constant," thinks e^x. "I've got nothing to fear from a differential operator. He can keep differentiating me as long as he wants, and I'll still be there." So he scouts off to find the operator and gloat in his smooth glory. He rounds the corner and defiantly introduces himself to the operator. "Hi. I'm e^x." "Hi. I'm d / dy." Labels: fun stuff, maths 4 Comments: • Ha; good one: that cocky e^x got what it deserved! By , at 10:34 PM, November 20, 2007 • :) You know, because of Spanish "la funcion", whenever I anthropomorphize functions I think naturally of them as female. (Except those that have masculine names in Spanish, like sine, cosine or polynomial.) This particular e^x sounds more like a male, though... this has confused me a couple of times when telling the joke in English, making me switch genders in the middle or something. • By , at 9:17 PM, October 09, 2009 • we make up our own math jokes and illustrate famous ones: Links to this post: << Home
{"url":"http://realityconditions.blogspot.com/2007/11/math-jokes.html","timestamp":"2014-04-16T04:36:55Z","content_type":null,"content_length":"36777","record_id":"<urn:uuid:eba0f8c8-4cc5-4cb2-99ab-0be3ae466c7e>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
atter)- Why Are They Secure I was recently using SHA512 as a means of encrypting a password for a login script and I got to wondering as to why encryptions are hard to crack. I'll start with my question and then give it a little bit of explanation: Why can't an encryption function be inverted easily such that if enc(a) = a', enc'(a') = a So, i understand that there is a lot of complexity to encryptions making them hard to crack - it's kind of the point. BUT (by my logic) they can't contain randomness since enc(a) will always equal a' so it must be a fixed encryption function. Much in the same way that you can integrate a derivative to obtain the original (including a constant, granted, but these are relatively easy to find), surely there's a way to do this with encryption? Someone created the encryption function to run through a series of statements and operations to get to the hash, why can't they work backwards to obtain the original? This post has been edited by macosxnerd101: 01 July 2013 - 09:11 AM Reason for edit:: Renamed topic for better discussion
{"url":"http://www.dreamincode.net/forums/topic/324009-sha-encryption-or-any-for-that-matter-why-are-they-secure/","timestamp":"2014-04-21T05:41:28Z","content_type":null,"content_length":"132514","record_id":"<urn:uuid:25c9a9aa-3cff-42ee-a750-d85e6ae01ea3>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
How to draw the US Flag mathematically? Re: How to draw the US Flag mathematically? Hi Sumasoltin; I would have to copy the entire page to do that. Could be copyright problems. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=215689","timestamp":"2014-04-18T05:56:41Z","content_type":null,"content_length":"33047","record_id":"<urn:uuid:99ac784c-7e86-4aba-9632-31cbd7226865>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
Portfolio credit risk with ext Assaf Zeevi Portfolio credit risk with extremal dependence: Asymptotic analysis and efficient simulation Coauthor(s): Achal Bassamboo, Sandeep Juneja. Adobe Acrobat PDF We consider the risk of a portfolio comprising loans, bonds, and financial instruments that are subject to possible default. In particular, we are interested in performance measures such as the probability that the portfolio incurs large losses over a fixed time horizon, and the expected excess loss given that large losses are incurred during this horizon. Contrary to the normal copula that is commonly used in practice (e.g., in the CreditMetrics system), we assume a portfolio dependence structure that is semiparametric, does not hinge solely on correlation, and supports extremal dependence among obligors. A particular instance within the proposed class of models is the so-called t-copula model that is derived from the multivariate Student t distribution and hence generalizes the normal copula model. The size of the portfolio, the heterogeneous mix of obligors, and the fact that default events are rare and mutually dependent make it quite complicated to calculate portfolio credit risk either by means of exact analysis or naïve Monte Carlo simulation. The main contributions of this paper are twofold. We first derive sharp asymptotics for portfolio credit risk that illustrate the implications of extremal dependence among obligors. Using this as a stepping stone, we develop importance-sampling algorithms that are shown to be asymptotically optimal and can be used to efficiently compute portfolio credit risk via Monte Carlo simulation. Source: Operations Research Exact Citation: Bassamboo, Achal, Sandeep Juneja, and Assaf Zeevi. "Portfolio credit risk with extremal dependence: Asymptotic analysis and efficient simulation." Operations Research 56, no. 3 (2008): 593-606. Volume: 56 Number: 3 Pages: 593-606 Date: 2008
{"url":"http://www0.gsb.columbia.edu/whoswho/more.cfm?uni=ajz2001&pub=3892","timestamp":"2014-04-19T02:30:49Z","content_type":null,"content_length":"5232","record_id":"<urn:uuid:be4631b8-d0f1-4178-a5e6-bdc923d3156e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability of finishing in first three places of a race October 10th 2010, 07:19 AM #1 Oct 2010 Probability of finishing in first three places of a race If we have a 10 runner horse race and we perceive the probabilities of winning the as follows... Horse 1 35% Horse 2 15% Horse 3 10% Horse 4 10% Horse 5 10% Horse 6 5% Horse 7 5% Horse 8 4% Horse 9 3% Horse 10 3% how would formulate the chance of a single horse finishing either first, second or third? Or would the probabilities of winning the race not be enough? p.s. I'm no maths expert so a dumbed down version would do if possible. you need more info you really need to know the joint distribution of all 10, how they compete against each other. I've seen where in baseball team A can beat team B, while team B can dominate team C and yet C beats on A as well. October 10th 2010, 11:36 PM #2
{"url":"http://mathhelpforum.com/statistics/159034-probability-finishing-first-three-places-race.html","timestamp":"2014-04-20T21:37:21Z","content_type":null,"content_length":"32913","record_id":"<urn:uuid:f3fcfc99-d538-4ccb-9a10-a7310daae487>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
Security Scene Errata From: Declan McCullagh To: politech@vorlon.mit.edu ---------- Forwarded message ---------- Date: Wed, 24 Jun 1998 12:31:21 -0500 From: Bruce Schneier To: ryasin@cmp.com, stito@cmp.com, advancedinfo@email.msn.com Subject: Comments on "Encryption is Key to Securing Data" 22 Jun 98 This note is to comment on your article, "Encryption is Key to Securing Data," that appeared in the 22 Jun 98 issue of InternetWeek. I'm not sure where to start. Almost every statement made in the copy is erroneous. (There was an error when I tried to download the article from your website, so I will retype. Apologies for minor typos.) Encryption is Key to Securing Data by Dayna DelMonico "Although encryption terminology can make even the most technically astute user cringe, encryption is fairly simple." I agree, although the rest of this article seems to prove me wrong. "It's the process of scambling and unscrambling information." Partly. It's confidentially (what you said above--making sure secret things stay secret), integrity (making sure data doesn't get modified in transit), authentication (the digital analogue of a signature), and non-repudiation (making sure someone can't say something and then later deny saying it). Cryptography is a lot more than simple encryption. "Encryption products showed up when MIS managers adopted two basic encryption technologies from the federal government: private and public-key encryption." Nothing correct here. The Federal government has no public-key cryptography standards; MIS managers have nothing to adopt from the Federal government. You might be thinking of DES, a symmetric encryption algorithm. (More on this confusion below.) In any case, commercial use of cryptography products has developed completely independently from government interference. In fact, things like export control and key escrow are making it harder to buy secure commercial products, not easier. "With private key encryption, the sends and recieve are the holders and use the same key (algorithm) to secure information." No. First off, private-key encryption is a bad term; use "symmetric encryption." Second, I'm not sure what they are the holders of. With symmetric encryption, both the sender and the receiver must have the same key; that may be what you mean. In any case, the key and the algorithm are completely seperate. This is one of the cornerstones of post-Medieval "With public key encryption, senders and receivers hold a commonly used public key, with an additional private key held only by specific Not even close. With public-key encryption, each receiver has a public key and a private key. The public key is published. The private key is held, in secret, by the receiver. To send a message to someone, the sender gets the public key form some public database and uses it to encrypt the message. The receiver uses his private key to decrypt it. There are no specific institutions that have additional private keys, unless you are thinking about key escrow systems (which are related, but not the same). "To protect systems from the loss of the key, many vendors offer assymetric encypion, which uses two keys." Sort of. Assymetric encryption is the same as public-key encryption (just another word), and there are two keys. But the reason for using public-key encryption is not to prevent the loss of a key, but to facilitate key management. (With symmetric encryption, both the sender and receiver have to share a key. How this sharing takes place can be very complicated.) "Users can choose from products based on various schemes. Beware, however, that even stronger encryption methods are on the horizon and destined for the next generation of encryption products. The National Institute of Standards and Technology (NIST) is expected to complete and Advanced Encryption Standard (AES) by the end of the year." Even NIST has said that AES will not be finalized before 2000. And AES is just a new symmetric algorithm; it has nothing to do with public-key "The new standard wil luse a 128-bit block size, with key lengths of 128-, 192-, and 256-bits, as opposed to the current 64-bit blocks with 56-bit key standard." True. The current standard is DES. "RSA Data Security also has proposed its own algorithm to content for the new AES standard." Sort of. NIST has solicited proposals for algorithms. Fifteen groups submitted, including RSA Data Security. RSADSI is competing with other groups--Cylink, IBM, Entrust, Counterpane Systems, NTT, etc--not with NIST. NIST does not hav an algorithm. "RSA's extensions to DES, RC4 and RC5 implement multiple keys as well as digital signatures." Many mistakes here. RSADSI submitted RC6 to the AES process, which uses many ideas from RC5. It has nothing to do with DES or RC4. I'm not sure why multiple keys is something to talk about; as I said above, every algorithm post the Middle Ages uses multiple keys. And digital signatures have nothing to do with the AES process. AES is a new symmetric encryption algorithm; digital signatures are done with public key cryptography. They are different. "Another contender is Blowfish II." I submitted this algorithm. We called it Twofish, but some early press reports called it Blowfish II. "The Blowfish scheme, often referred to as PGP (Pretty Good Privacy), lets the sending and receiving computers negotiate a complex number." First, Blowfish is never referred to as PGP. Blowfish is a symmetric encryption algorithm. PGP is an email security product. PGP could have decided to use Blowfish, but it used IDEA and CAST instead. Those are two different encrytpion algorithms. And neither PGP, nor Blowfish, nor anything else discussed to or alluded to in this article, involve sending and receiving computers negotiating complex numbers. "That number is used to scamble the transmission and unscramble the data that is received." What you mean to say, I think, is that PGP uses public-key cryptography for key exchange. The sender uses the reciver's public key to encrypt a session key, and that session key is used to encrypt the email message. "For more on encryption and encryption products, a buyer's first stop should be the International Computer Security Association. This ICSA is an independent organization that tests and certifies security products." ICSA is a private for-profit company, despite what the name implies. And while they do do some testing and certifying of security products, they do not test or certify encryption products. Honestly, I can't believe this article made it into print. Don't you have Bruce Schneier President, Counterpane Systems Author, Applied Cryptography
{"url":"http://attrition.org/errata/media/art.0009.html","timestamp":"2014-04-17T22:04:35Z","content_type":null,"content_length":"7666","record_id":"<urn:uuid:6889aeb5-b371-4bb5-a61c-dacf73a62d22>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
Martins Add, MD Math Tutor Find a Martins Add, MD Math Tutor ...I have been playing chess since I was 8 years old. I've read multiple strategy books and am currently ranked #418/1364 on the itsyourturn.com chess ladder. I have been a Christian my entire life, and I've been studying the Bible since I could read. 27 Subjects: including algebra 1, algebra 2, calculus, MATLAB ...I can help you achieve your business school dream by reviewing the math you need to know and only the math you need to know. We will also go over test taking strategies to master the CAT. I have an official GMAT score of 720 (94th Percentile) with 48Q and 41V. 12 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...For more than 20 years, I have helped others market their skills appropriately, develop new skills, and take the steps to land the job of their dreams. I am the first in my coal mining family to attend college, and was admitted to Harvard, Early Admission, Navy ROTC. In addition to the ROTC Sch... 82 Subjects: including geometry, precalculus, SAT math, MCAT ...I also have an extensive experience as a tutor at both the elementary and tertiary level of education. I am very comfortable with using information technology in teaching and learning. I believe in learning from first principles and always ensure my students understand the basis of the lesson before building on it. 11 Subjects: including calculus, geometry, biochemistry, algebra 1 ...Reading a passage, summarizing a passage, then looking at the question and answer portion of a test is the strategy I use with students. Many students try to completely understand every word and conclusion after reading a passage once, and I find that to be a very difficult and intimidating way ... 33 Subjects: including calculus, ACT Math, trigonometry, English Related Martins Add, MD Tutors Martins Add, MD Accounting Tutors Martins Add, MD ACT Tutors Martins Add, MD Algebra Tutors Martins Add, MD Algebra 2 Tutors Martins Add, MD Calculus Tutors Martins Add, MD Geometry Tutors Martins Add, MD Math Tutors Martins Add, MD Prealgebra Tutors Martins Add, MD Precalculus Tutors Martins Add, MD SAT Tutors Martins Add, MD SAT Math Tutors Martins Add, MD Science Tutors Martins Add, MD Statistics Tutors Martins Add, MD Trigonometry Tutors Nearby Cities With Math Tutor Bethesda, MD Math Tutors Chevy Chase Math Tutors Chevy Chase Village, MD Math Tutors Chevy Chs Vlg, MD Math Tutors Garrett Park Math Tutors Glen Echo Math Tutors Kensington, MD Math Tutors Martins Additions, MD Math Tutors Mount Rainier Math Tutors N Chevy Chase, MD Math Tutors North Chevy Chase, MD Math Tutors Silver Spring, MD Math Tutors Somerset, MD Math Tutors University Park, MD Math Tutors West Mclean Math Tutors
{"url":"http://www.purplemath.com/martins_add_md_math_tutors.php","timestamp":"2014-04-20T08:50:57Z","content_type":null,"content_length":"24136","record_id":"<urn:uuid:825eeb08-c671-49b4-bc4c-64ad810bb382>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
Margin of Safety Margin of Safety (MOS): The excess of actual or budgeted sales over the break even volume of sales is called margin of safety. At break even point costs are equal to sales revenue and profit is zero. Margin of safety, therefore, tells us the amount of sales that can be dropped before losses begin to be incurred. With a high margin of safety business have low risk of not breaking even and with a low margin of safety business have high risk of not breaking even. The formula or equation for the calculation of margin of safety is as follows: Margin of Safety = Total budgeted or actual sales − Break even sales Margin of Safety Ratio: The margin of safety can also be expressed in percentage form (Margin of safety ratio). This percentage is obtained by dividing the margin of safety in dollar terms by total sales. Following equation is used for this purpose. Margin of Safety = Margin of safety in dollars / Total budgeted or actual sales Sales(400 units @ $250) $100,000 Break even sales $87,500 Calculate margin of safety. Sales(400units @$250) $100,000 Break even sales $ 87,500 Margin of safety in dollars $ 12,500 Margin of safety as a percentage of sales: 12,500 / 100,000 = 12.5% It means that at the current level of sales and with the company's current prices and cost structure, a reduction in sales of $12,500, or 12.5%, would result in just breaking even. In a single product firm, the margin of safety can also be expressed in terms of the number of units sold by dividing the margin of safety in dollars by the selling price per unit. In this case, the margin of safety is 50 units ($12,500 ÷ $ 250 units = 50 units). Voltar company manufactures and sells a telephone answering machine. The company's contribution margin income statement for the most recent year is given below: Description Total Per unit Percent of Sales Sales (20,000 units) $ 1,200,000 $60 100% Less variable expenses 900,000 $45 ?% Contribution margin 300,000 $15 ?% Less fixed expenses 240,000 Net operating income 60,000 Required: margin of safety both in dollars and percentage form. Solution to Review Problem: Margin of safety = Total sales – Break even sales* = $1,200,000 – $960,000 = $240,000 Margin of safety percentage (Margin of safety ratio) = Margin of safety in dollars / Total sales = $240,000 / $1,200,000 = 20% *The break even sales have been calculated as follows: Sales = Variable expenses + Fixed expenses + Profit $60Q = $45Q + $240,000 + $0** $15Q = $240,000 Q = $240,000 / $15 per unit Q = 16,000 units; or at $60 per unit. $960,000 **We know that break even is the level of sales where profit is zero. Case Study (A Real Business Example): Pak Melwani and Kumar Hathiramani, former silk merchants from Bombay, opened a soup store in Manhattan after watching a Seinfeld episode featuring the "soup Nazi." The episode parodied a real life soup vendor. Ali Yeganeh, whose loyal customers put up with hour-long lines and "snarling customer service." Melwani and Hathiramani approached Yeganeh about turning his soup kitchen into a chain, but they were gruffly rebuffed. Instead of giving up, the two hired a French chef with a repertoire of 500 soups and opened a store called Soup Nutsy. For $6 per serving, Soup Nutsy offers 12 homemade soups each day, such as sherry crab bisque and Thai coconut shrimp. Melwani and Hathiramani report that in their first year of operation, they netted $210,000 on sales of $700,000. They report that it costs about $2 per serving to make the soup. So their variable expenses ratio is one-third ($2 cost / 6$ selling price). If so, what are their fixed expenses? We can answer that question using the equation approach as follows: Sales = Variable expenses + Fixed expenses + Profits $700,000 = (1/3) × 700,000 + Fixed expenses + $210,000 Fixed expenses = $700,000 – (1/3 of $700,000) – $210,000 = $256,667 With this information we can determine that the break even point is about $385,000 of sales. This gives the store a comfortable margin of safety of 45%. Source: Silva Sansoni, "The Starbucks of Soup?" Forbes, July7, 19997, pp.90-91. Relevant Articles: » Contribution Margin » Contribution Margin Ratio (CM Ratio) » Contribution Margin Income Statement » Break-even Point Analysis » Target Profit Analysis » Margin of Safety (MOS) » Operating Leverage » Break even Analysis with Multiple Products » CVP Consideration in Cost Structure » Importance of Cost Volume Profit (CVP) Analysis
{"url":"http://accountingexplanation.com/margin_of_safety.htm","timestamp":"2014-04-16T10:18:41Z","content_type":null,"content_length":"53967","record_id":"<urn:uuid:f114fa05-fefd-4bd5-bc46-1e84fccb7188>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
James has opted for a 20-payment life insurance policy of $200,000 • 1James has opted for a 20-payment life insurance policy of $200,000 James has opted for a 20-payment life insurance policy of $200,000. James is 42 years old. What is his annual premium? • 2n insurance company wants to offer a new 5-year, level-term life insurance policy to recent college graduates n insurance company wants to offer a new 5-year, level-term life insurance policy to recent college graduates. The policy will have a face value (the amount paid in case of death) of $50,000. Normally, the company charges different premiums depending on the age, gender, tobacco habits and health of the person to be insured*. However, for this simple policy, the company plans to charge a flat $60/year for every eligible customer, and avoid the underwriting costs normally associated with new policies. It costs the company an average of $30 per policy for advertising, sales and administration. In order to estimate the profitability of the new policy, you have been asked to simulate expected policy income (premium payments received) compared to costs (administrative/sale cost plus the occasional death benefit) for an anticipated 100,000 policies. Background Information: In each of the 5 policy years, there are 3 possible outcomes. • The insured may die, and $50,000 will be paid out (there is a 0.1% chance of death in any one year for persons in this age group) • The insured may decide to drop the policy, with no payout, and all premiums paid so far are forfeited – (there is a 4% chance of this happening during any year) • The insured may continue to pay the annual premium and the policy remains in force. At the end of the fifth year, the insurance terminates, with no further costs to either party. Use a Monte Carlo simulation to solve this problem. Rather than a closed-form solution, Monte Carlo techniques depend on setting up the problem parameters and then inputting a series of random values. The combination of all the random solutions is taken as the problem solution. Hints: Set up the calculations for cash flow each year based on the costs and probabilities given above. Then, the actual cash flow for each policy (each simulation) depends on random numbers which determine which of the possible events actually occurs for that policy holder during that year. Sum the company’s total expected income for all the policies sold, compare this to the cost of selling the policies, and determine how profitable this will be for the company. (You can ignore the cost of money and treat all income as the same, no matter what year it happens.) To generate random numbers, use the rand() function found in . It does not take any arguments, and it returns an integer between 0 and RAND_MAX (a constant already defined in the stdlib header). You will want to initialize, or “seed”, the random number function once, at the beginning of your program, by calling srand(time(NULL)). To use the time() function you must also include . Test your program for small numbers of random inputs, and hand-verify proper operation before you scale it up to the full 100,000 policies. Run the program multiple times and observe any differences in the results. If there are differences, what do you think caused them and what does that mean for your analysis? • 3n insurance company wants to offer a new 5-year, level-term life insurance policy to recent college graduates n insurance company wants to offer a new 5-year, level-term life insurance policy to recent college graduates. The policy will have a face value (the amount paid in case of death) of $50,000. Normally, the company charges different premiums depending on the age, gender, tobacco habits and health of the person to be insured*. However, for this simple policy, the company plans to charge a flat $60/year for every eligible customer, and avoid the underwriting costs normally associated with new policies. It costs the company an average of $30 per policy for advertising, sales and administration. In order to estimate the profitability of the new policy, you have been asked to simulate expected policy income (premium payments received) compared to costs (administrative/sale cost plus the occasional death benefit) for an anticipated 100,000 policies.Background Information:In each of the 5 policy years, there are 3 possible outcomes.• The insured may die, and $50,000 will be paid out (there is a 0.1% chance of death in any one year for persons in this age group)• The insured may decide to drop the policy, with no payout, and all premiums paid so far are forfeited – (there is a 4% chance of this happening during any year)• The insured may continue to pay the annual premium and the policy remains in force. At the end of the fifth year, the insurance terminates, with no further costs to either party.Use a Monte Carlo simulation to solve this problem. Rather than a closed-form solution, Monte Carlo techniques depend on setting up the problem parameters and then inputting a series of random values. The combination of all the random solutions is taken as the problem solution.Hints:Set up the calculations for cash flow each year based on the costs and probabilities given above. Then, the actual cash flow for each policy (each simulation) depends on random numbers which determine which of the possible events actually occurs for that policy holder during that year. Sum the company’s total expected income for all the policies sold, compare this to the cost of selling the policies, and determine how profitable this will be for the company. (You can ignore the cost of money and treat all income as the same, no matter what year it happens.)To generate random numbers, use the rand() function found in . It does not take any arguments, and it returns an integer between 0 and RAND_MAX (a constant already defined in the stdlib header). You will want to initialize, or “seed”, the random number function once, at the beginning of your program, by calling srand(time(NULL)). To use the time() function you must also include . Test your program for small numbers of random inputs, and hand-verify proper operation before you scale it up to the full 100,000 policies. Run the program multiple times and observe any differences in the results. If there are differences, what do you think caused them and what does that mean for your analysis? • 4Joe Steam may elect to take a lump-sum payment of $50,000 from his insurance policy or an annuity of $5,650 annually as long as he lives Mr. Joe Steam may elect to take a lump-sum payment of $50,000 from his insurance policy or an annuity of $5,650 annually as long as he lives. How long must Mr. Steam anticipate living for the annuity to be preferable to the lump sum if his opportunity rate is 8 percent? • 5An insurance company charges a 20-year old male a premium of $250 for a one year $100,000 life insurance policy An insurance company charges a 20-year old male a premium of $250 for a one year $100,000 life insurance policy. A 20 year old male has a .9985 probability of living for a year. i.)draw the probability distribution table that illustrates the probability of living or dying in one year ii.)what is the expected value in dollars? iii.)find the standard deviation • 6formula for combination term and whole life insurance policy We are being asked to come up with a formula for a combinationwhole life and term insurance policy that will insuarance thecheapest premium for $1000 worth ofinsurance over 40 years. Anyone out there no where to look to help me? • 7Last year the annual premium on a certain hospitalization insurance policy was $408, and the policy paid 80 percent of any hospital expenses incurred Last year the annual premium on a certain hospitalization insurance policy was $408, and the policy paid 80 percent of any hospital expenses incurred. If the amount paid by the insurance policy last year was equal to the annual premium plus the amount of hospital expenses not paid by the policy, what was the total amount of hospital expenses last year? please help me I have tried hard for this problem but in vain. I'll be thankful to you • 8Last year the annual premium on a certain hospitalization insurance policy was $408, and the policy paid 80 percent of any hospital expenses incurred Last year the annual premium on a certain hospitalization insurance policy was $408, and the policy paid 80 percent of any hospital expenses incurred. If the amount paid by the insurance policy last year was equal to the annual premium plus the amount of hospital expenses not paid by the policy, what was the total amount of hospital expenses last year? (A) $850.00 (B) $680.00 (C) $640.00 (D) $510.00 (E) $326.40 • 9insurance matrix, Insurance MatrixType of Insurance Functions Example of Company Coverage CharacteristicsAuto Home Health Disability Life what do they mean by the functions insurance matrix, Insurance MatrixType of Insurance Functions Example of Company Coverage CharacteristicsAuto Home Health Disability Life what do they mean by the functions • 10True Or False Questions:The term insurance has no value as an investment and is the least expensive type of life insurance True Or False Questions:The term insurance has no value as an investment and is the least expensive type of life insurance.Liability insurance protects a homeowner against injury to other people on his property.After these my math is done. Someone please help me. Thank You. • 11Information from the American Institute of Insurance indicates the mean amount of life insurance per household in the United States is $110,000 Information from the American Institute of Insurance indicates the mean amount of life insurance per household in the United States is $110,000. This distribution follows the normal distribution with a standard deviation of $40,000. a. If we select a random sample of 50 households, what is the standard error of the mean? 5,656 b. What is the expected shape of the distribution of the sample mean? c. What is the likelihood of selecting a sample with a mean of at least $112,000? d. What is the likelihood of selecting a sample with a mean of more than $100,000? e. Find the likelihood of selecting a sample with a mean of more than $100,000 but less than $112,000. • 12dividend payment policy A corporation has been paying out 1 million per year in dividends for the past several years. This year, the company wants to pay the 1 million dollar dividend butcant. All of the following are reasons the company cannot continue its dividend policy EXCEPT:A)the company's cash balance is less than 1 millionB)the company's liabilities exceed its assetsC)the company's net income this year is less than 1 millionD)the company's retained earnings balance at year end is less than one million
{"url":"http://homework.boodom.com/q63895-s-James_has_opted_for_a_20-payment_life_insurance_policy_of_$200,000","timestamp":"2014-04-21T04:32:42Z","content_type":null,"content_length":"27791","record_id":"<urn:uuid:81cba17c-c31d-4a65-8f66-fb6abe640469>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: how do you solve 25^(-3/4) without a calculator? • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5037d563e4b026c0470605b6","timestamp":"2014-04-19T02:18:55Z","content_type":null,"content_length":"220940","record_id":"<urn:uuid:8238f5a4-6c98-4ce3-ac1a-5cc98a812e5d>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Why do logit model coefficients produce signs opposite to those [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: Why do logit model coefficients produce signs opposite to those obtained from OLS? From Maarten buis <maartenbuis@yahoo.co.uk> To statalist@hsphsun2.harvard.edu Subject Re: st: Why do logit model coefficients produce signs opposite to those obtained from OLS? Date Thu, 4 Feb 2010 07:26:56 -0800 (PST) --- On Thu, 4/2/10, Jitian Sheu wrote: > I am running a very very simple binary model, y=a+bX+e, > where y is a dummy variable. > Before performing -logit- command, I estimate the above > model by a traditional OLS, i.e. linear probability model > (regress y x1 x2...) > I knew that OLS is not a good model for fitting this model. > I just want to get some direction from results obtained > from traditional OLS > However, after I perform -regress- and -logit-, I found > signs of estimated coefficients from these two models are > not the same. > I am just wondering whether this is "normal"? or I am doing > anything wrong? This is not normal. One possiblity I could imagine is that you specified the -or- option (or used -logistic-) and forgot to interpret coefficients less than 1 as negative effects. In both case you will get odds ratios, that is, the ratio by which the odds of "success" on y changes for a unit change in x. An odds ratio less than 1 thus means that a unit increase in x leads to a smaller odds, i.e. a negative effect, even though the numberical value of the odds ratio is positive. I wrote a small text about odds, odds ratios, and marginal effects (in the context of interaction effects), which is available here: <http://www.maartenbuis.nl/ Hope this helps, Maarten L. Buis Institut fuer Soziologie Universitaet Tuebingen Wilhelmstrasse 36 72074 Tuebingen * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2010-02/msg00232.html","timestamp":"2014-04-18T03:03:07Z","content_type":null,"content_length":"7713","record_id":"<urn:uuid:8a652771-7bef-40ef-aa1c-c6fd1f7ec21b>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] Optimization in Several Variables December 9th 2008, 07:25 PM #1 [SOLVED] Optimization in Several Variables F(x,y) = (2*y+1)*e^(x^2-y) Find critical point and prove there is only one. Use second derivative test to determine nature of crit. pt. I know the procedure in solving it: set partial derivatives to zero and solve resulting equations. And by second derivative test, if D>0, f(a,b) is local min/max; D<0, (a,b) is saddle point. if f_xx(a,b)>0, f(a,b) is min where D=D(a,b)=f_xx(a,b)f_yy(a,b)-f_xy(a,b)^2 I have no idea how to get the partial derivatives and start the problem. Any help will be appreciated, thanks. F(x,y) = (2*y+1)*e^(x^2-y) Find critical point and prove there is only one. Use second derivative test to determine nature of crit. pt. I know the procedure in solving it: set partial derivatives to zero and solve resulting equations. And by second derivative test, if D>0, f(a,b) is local min/max; D<0, (a,b) is saddle point. if f_xx(a,b)>0, f(a,b) is min where D=D(a,b)=f_xx(a,b)f_yy(a,b)-f_xy(a,b)^2 I have no idea how to get the partial derivatives and start the problem. Any help will be appreciated, thanks. $f(x,y) = 2y.exp(x^2-y) + exp(x^2-y)$ $\frac{\partial f}{\partial x} = 2x.2y.exp(x^2-y) + 2x.exp(x^2-y) = exp(x^2-y)(4xy+2x) = 0$ The exponential function is never zero, hence $4xy+2x = 0$ $\frac{\partial f}{\partial y} = 2exp(x^2-y) + 2y(-1)exp(x^2-y) -exp(x^2-y) = exp(x^2-y)(2-2y-1) = 0$ Again exponential never zero, so: $2-2y-1 = 0$ Surely ou can continue from there? yup, that simplified things a lot. Thank you December 11th 2008, 12:26 PM #2 Super Member Dec 2008 December 11th 2008, 06:57 PM #3
{"url":"http://mathhelpforum.com/calculus/64259-solved-optimization-several-variables.html","timestamp":"2014-04-19T04:45:09Z","content_type":null,"content_length":"37350","record_id":"<urn:uuid:48e64dec-3552-42e6-ad1b-ef5b55f6cfc9>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
The Collatz conjecture is safe (for now) A few days ago John Cook reported a draft paper claiming to solve the Collatz conjecture. Of course, since the Collatz conjecture is so simple to state, it constantly attracts tons of would-be solvers, and most of the purported “proofs” they generate are not even worth mathematicians’ time to look at. So why should this one be different? Well, on the surface, it seemed to be a much more serious attempt than the vast majority. Of Scott Aaronson’s Ten Signs a Claimed Mathematical Breakthrough is Wrong, this paper exhibited only #6 (jumping straight into technical material without presenting a new idea) and #10 (using techniques that seem too wimpy). But #6 could just be due to poor organization of the paper, and #10 is not obvious until you get to the end. But — and you knew this was coming — it is wrong. I’ve spent several hours over the past few days reading over the paper and came to this conclusion independenly — and then found several other people who had come to the same conclusion, for the same reason. Consider the following “proof” of the Collatz conjecture. We consider running the Collatz function (call it $c$) “backwards”, to find out which number(s) could have preceded a given number in a Collatz sequence. We always have $c(2k) = k$, so from $k$ we can go backwards to $2k$. Also, $c((k-1)/3) = 3*(k-1)/3 + 1 = k$ when $(k-1)/3$ is an odd integer, that is, when $k$ is one more than an odd multiple of $3$. These are the only two possibilities. So, starting from $1$, we can go backwards to $2$; from $2$ we can go backwards to $4$; from $4$ we can go to $8$ or $1$; from $8$ we get $16$; from $16$ we get $32$ or $5$; and so on. In this way we can build up an infinite tree of numbers. But this tree contains every natural number since we can always work up the tree from $1$ until we find any number we want. Hence every number must reach $1$ when iterating the Collatz function. Do you see the problem with this “proof”? Well, this is essentially what Gerhard Opfer’s “proof” boils down to as well (the details are not exactly the same, but the form is pretty much identical). There is a lot of stuff first about linear operators over complex functions, but for this part he is just relying on work that someone else already did anyway, and in the end he just ends up looking at coefficients of power series, which just boils down to some number theory. He builds a tree by inverting a certain function (not the Collatz function but a closely related one), and the whole argument rests on the fact that this tree contains every natural number — which he states but does not prove! And it seems clear to me that proving this would be no easier than proving the Collatz conjecture itself. In the end, for all the detailed argument and contortion, he has come back precisely to where he started. (By the way, I am not interested in reading your supposed proof of the Collatz conjecture, so please do not post a link to it in the comments!) 43 Responses to The Collatz conjecture is safe (for now) 1. Pingback: Conjectura lui Collatz | Isarlâk 2. Your post sounds very arrogant in the beginning. Sayings like “Of course, since the Collatz conjecture is so simple to state, it constantly attracts tons of would-be solvers, and most of the purported “proofs” they generate are not even worth mathematicians’ time to look at. So why should this one be different?” are just that, arrogant from the start to the beginning. □ I’m sorry if it came off sounding arrogant, but it is simply the truth. ☆ That is a killer phrase which is heavily used by (religious) fundamentalists. Here is some others that you might want to use for future posts. http://www.whatagreatidea.com/ ○ Great idea, but not for me. ;-) ○ One thing that the web has shown is how easy it is to do maths, physics and cosmology: amateur mathematicians knock off proofs of the Collatz conjecture; web pages routinely demonstrate flaws in General Relativity that professionals have missed, and these flaws are rarely of the subtle kind that arise in quantum-gravitational regimes. And the only serious work in cosmology is being done in amateur web forums. Furthermore, most of the truly invaluable work in climate science, especially regarding anthropogenic global warming, has been done by lay “climate skeptics.” Thank goodness that alert members of the public were available to correct the horrific blunders in the IPCC reports! Nor let us forget the superb work done (largely by the computer programmers, rather than biologists or geneticists) at the Institute for Creation Research and the Discovery Institute. Without these amateur contributions, we’d all still believe that the neo-Darwinian synthesis is a serious scientific theory! I plan to back legislation that would allow anyone to apply for academic posts, fly an aeroplane, perform surgery or work as a structural or electrical engineer, regardless of their formal qualifications. In the particular case of academic positions, I can see no reason why blog posts should not replace (say) an honours or masters thesis. The web has demonstrated that formal qualifications are an unnecessary impediment to lay-academics, lay-pilots, lay-surgeons and lay-engineers everywhere. Why should a lay-specialist be discriminated against, simply because they lack the (minimum) six-eight years of formal training that is currently required? ☆ hahaha, +1 for the KFC joke (Killer PHrase Central) ! ;-D And I’m sure even Brent will sooner or later use some of them (like #15 … ;-P !) □ Hans says arrogant from the start to the beginning. So that’s “arrogant in only one point” ! More seriously, the Collatz conjecture is not only easy to state, it is easy to explore with most programming languages and it has been popularized widely, notably by Martin Gardner and A. Dewdney, several decades ago, then by many others. 3. > He builds a tree by inverting a certain function (not the Collatz function > but a closely related one), and the whole argument rests on the fact that > this tree contains every natural number — which he states but does not > prove! And it seems clear to me that proving this would be no easier > than proving the Collatz conjecture itself. Is there a “simple” way to describe the construction of the “Opfer tree”? □ Yes, you can find a description here. 4. Nr 8 of Aaronson’s list is also met: Opfer takes time for trivial things. - examples for Collatz sequences are given, with starting values 1, 2 and 7 - Theorem 2.1 is very simple. It should be called a lemma only. It seems strange that the first proven fact in a milestone work is called theorem, and is so very simple. - in Lemma 2.2 he proves that the kernel of a linear map is a vector space. A real milestone. - why these 18 pages of senseless tables of numbers in the end of the paper, without really explaining why they are important? I mean, such a list gives no real information. □ Yes, good point. □ also some mixed-up notations about sets {4,2,1} and sequences (4,2,1,4,2,1,….). Well, I admit that other authors use {…} to denote sequences (bad idea IMO, when not strictly monotonic), but if this is the author’s choice I can’t see why some end in {…4,2,1} and some in 5. I might clear up one of Martini’s points: Often when papers are published within Germany, a “Satz” (theorem) is about something new to the paper, while a “Lemma” will be a (more or less direct) result of previously proven points. Thus. the first proven point will regularily be a “Satz” although most often it will be a comparatively small preparation point, while the main topic of the paper will often be reduced to “Lemma” status, regardless of importance, since it directly follows from 4.7 in conjecture with 5.4 and 5.5 or suchlike. □ I have never seen anybody from any nation calling the main point in a paper a “lemma”, and the first thing to prove a “theorem”, only because it is the first thing that is proved. ☆ The usage is the same in German and English – a this paper is written in English anyway. The problem is a general one in math: there is much confusion what a lemma is and what a theorem – in part because the import of something only becomes clear afterwards (think e.g. of Ito’s lemma). 6. I’m also intrigued by this; you’re obviously right. It is clear that there is an equivalence between the Collatz theorem and this: ‘Every natural number can be formed by starting at 1, multiply it by 2 or if it’s minus 1 divisible by 3, you can add both this number and the multiplication by 2 to the list. In this way you can form any natural number’. Formulated like that, it feels a natural candidate as an ‘unprovable’ statement -> Godel-incompleteness. Have there been efforts to prove the fact that the Collatz theorem is not provable? □ @Jasper: See http://arxiv.org/abs/math/0312309 ☆ @herojoker: That preprint does not contain a proof of undecidability. @Jasper: As mentioned in DS’s comment above, Conway has found some Collatz-like problems that are undecidable, but Collatz itself is not known to be undecidable. Both the MathWorld and Wikipedia pages on Collatz contain references to Conway’s paper and to more recent work along similar lines. 7. Jasper, yes, there have even been published “proofs” of its unprovability such as 8. I cannot contribute some reasonable argument although I’ve tried to understand Opfer’s article; but I had some sceptic feeling for which you provided now more concrete words (#6 and #10). So “having” nothing on my own it seems, that at least my mathematical instincts are not completely off… at least: likely. Good to know… :-) 9. You really should watch your arrogant tone. The man who drafted the paper is a reputable mathematician who was a student of Collatz, and not just some PhD student looking for attention. There may be flaws in the suggested proof, which is exactly why such papers are submitted for peer review. 10. Pingback: Collatz-Vermutung vermutlich bewiesen 11. @ Hamburger: that’s not right, the words “Lemma” and “Theorem” (Satz) are used in Germany as they are used everywhere else in the world. I’m a mathematician here. □ @Hamburger: You confuse Lemma and Corollary. ☆ Lemma=Lemma Theorem=Theorem or Satz So basically its the same in German and English (being a Germanic language) ○ For the sake of “mathematical precision” (I’m afraid this is almost a pleonasm) please note or remember that in an international community – thinking and reasoning as it is in different mother tongues – confusion will readily ensue without adequate linguistic means of communication. For this reason, European academics used Latin (remember also that learning institutions were in the hands of the churches), from which the words quoted are – both in English and German – almost transliterations… and certainly non-Germanic! This point deserves stressing, since sharing a foreign language will at times imply a common adaptation of an originally different pattern of thought – a ‘feature’ usually welcomed by professional mathematicians, interested as they are in received standards of proof (though the criterion is obviously only necessary, not sufficient). In this frame of mind, I personally find that remembering the original (foreign) meanings of these words helps me keeping confusion at bay: the traditional “theorem”/”Theorem” comes to us from Greek (over Latin), meaning “matter of study/inquiry” in both Greek and Latin; “lemma”/”Lemma” comes from Greek (“proposition”) through Latin (there meaning “issue, subject, matter”); finally, “corollary”/”Korollar” comes from Latin (“little crown”, ‘crownlet’). I find the latter an especially helpful (and sweet) description of the originally intended function and position of a “lemma”/”Korollar” within a mathematical chain of reasoning. Of course, both the English and the German scientific/technological terminology originates (mostly) from Latin/Greek contexts, which according to certain estimates makes up an estimated 70 to 80% of the total vocabulary of modern English (you will find that even the sentence structure of modern English is decidedly Latin/French-oriented, not to mention the loss of inflexion – s. cases and conjugation patterns). Thinking ‘Germanic’ instead of Latin/French I often find rather detrimental to the ease of ‘navigation’ within the scientific and technological lingua franca that English(= the modern Vulgar Latin with Germanic loan words?) has become – an example from programming: the confusion arising from the German translation of the object-oriented terms “instantiate” and “instance” – “an “Instanz” is an “institution”, an (often governmental) “authority”, the last thing that comes up in your mind is an “example”… Could the confusion around Satz/theorem etc have originated within this context? ○ Well, I think the german word “Satz” is a little bit weaker than “Theorem”. Therefore some german authors also use “Theorem” as a german word. Satz is maybe more like “Proposition” 12. Pingback: Anonymous 13. The paper by Opfer might possibly also satisfy Aaronson’s criterion number 3: the same methods imply an impossibly stronger theorem. In the case of the Collatz problem, several notorious generalizations have been discussed: the 5n+1 problem (or in fact, the An+1 problem) where A is any odd integer greater than 3. In this case, the heuristics predict that there are integers that interate off to infinity (not proven though), and it’s not clear to me how the given approach distinguishes this from the 3n+1 problem. A further generalization has been proposed by Conway that is algorithmically undecidable, and again one wonders how the approach would distinguish such problems from the one it proposes to prove. The more general a result, the more it is in danger to be 14. Pingback: Top Posts — WordPress.com 15. It seems nowadays that it is not enough to find a problem in someone’s proof, there must be the subsequent ritual of humiliation of the author of the proof. Some blog posts about attempts at difficult problems (like the not-so-recent one about P and NP, that attracted much attention), and some of the comments left by readers just disgust me. Why not write a big sign: “I wrote a wrong proof of Collatz’s conjecture”, and force the guy to carry it through his home town? I imagine that would be enough then. Nobody who breaks some of Scott Aaronson’s rules on how to write a breatkthrough paper should be spared public humiliation, I think! □ There’s a difference between “I broke a couple of guidelines for the format of writing up a proof formally” (if you actually read Brent’s post, you’ll see that he’s actually surprised how few of Aaronson’s rules were broken, and attacks the proof instead based on it, you know, not actually being a proof) and “My attempted proof is completely nonsensical and relies on asserting the main point without proof, and I threw it out for the world to see claiming that I was the only one clever enough to see this brilliant solution”. I’m also not seeing the excessive humiliation that you claim is present – if you throw a proof out there and your proof fundamentally doesn’t work, people will say, “Hey, this proof doesn’t work”. That’s kind of how things are supposed to work. In fact, that’s the whole basis of peer review. And no, the proof in the paper isn’t just a mostly valid proof that needs a bit of peer review to plug a couple of minor errors and make it perfect. It just rephrases the problem and then asserts that the conjecture is true in the new phrasing without proving this – in other words, it doesn’t make any noteworthy progress whatsoever. ☆ Hey S! “There’s a difference between (blablabla) and (blablabla).” (cit. “S”) “I’m also not seeing the excessive humiliation that you claim is present – if you throw a proof out there and your proof fundamentally doesn’t work, people will say, “Hey, this proof doesn’t work”. That’s kind of how things are supposed to work. In fact, that’s the whole basis of peer review.” (cit. “S”) Just to stick with your way of argumentation: There’s also a difference between saying “Hey, this proof doesn’t work” (citated from “S”) and saying “it constantly attracts tons of would-be solvers, and most of the purported “proofs” they generate are not even worth mathematicians’ time to look at. So why should this one be different?” (citated from Brent). BTW, to answer the question: This one COULD be different, because it’s done by a mathematician (additionally, this is probably why Brent actually DID look at it). I didn’t read the original paper, I am not even a mathematician (not even as a hobby), and the paper might actually be utter rubbish. Still (just from a very personal and non-scientific point of view, and just in case it interests you): Brent (and similar also “S”), if you have ever asked yourself the question (which I am sure you didn’t) why the ordinary mortal (i.e. the non-mathematician) tends to sometimes view mathematicians as a non-social nerds… Now you know why (in case you don’t: because of texts like Brents above). Did you ever read “Fermats last Theorem” by Simon Singh? Very readable, but it depicts mathematicians as very social people, who sit together during teatime and very openly discuss mathematical problems (which I have to admit A. Wiles did NOT do, although probably just because the matter is SO complex, that no-one could’ve helped him anyway and would instead just have distracted him). I REALLY hope(d) reality is this way, and that Brent is the dropout. Also: Thanks Fernando and “Hans Wurst” and others for helping me keep up my romantic image of the mathematical community. Oh, and don’t bother to reply, I will certainly never return to this page (considering that Brent is probably not going to miss me, since I cannot follow Andrew Wiles’ proof for Fermats last Theorem). Nerd-on, just as you like. 16. The 3n+1 problem cannot be PROVABLY undecidable, since if it’s undecidable, then it is TRUE. Of course you might be able to prove that 3n+1 is undecidable in some theory T, but the proof would be carried out in another theory T2, and you’ll have also proved (in T2) that the conjecture is true in all models of T2. If it helps understand the situation, the person who wrote that “undecidability” proof also has released a number of proofs that P=NP (or maybe it was P!=NP) and other such important theorems. □ I am intrigued by this. Why should this be the case? Clearly if there is another cycle, then this is provable, but what about sequences that simply run to infinity? Is it not a priori conceivable that there exists a number n for which it is not provable that the sequence starting with n converges to infinity? (I am fairly ignorant about work on Collatz beyond its definition, so forgive me if there is some nontrivial mathematics here I am not aware of.) Perhaps I also misunderstand your post, since “undecidable” usually means algorithmically undecidable in the usage I am familiar with, but I assume you mean “independent of some standard set of axioms such as ZFC”. 17. Your blog article (as well as a few comments to it) has been mentioned in an article of the online flavor of the popular german “Spiegel” magazine: http://www.spiegel.de/wissenschaft/mensch/ 18. News. When you look at page 2 of Opfer’s preprint, you find an addendum, added on June 17, 2011: The text of the addendum reads: > Author’s note: > The reasoning on p. 11, that \The set of all vertices (2n; l) in all levels will > contain all even numbers 2n 6 exactly once.” has turned out to be incomplete. > Thus, the statement \that the Collatz conjecture is true” has to be withdrawn, > at least temporarily. > June 17, 2011 Congratulations to all those who had pointed just on this step in Opfer’s try for a proof. Ingo Althoefer. This entry was posted in open problems, proof. Bookmark the permalink.
{"url":"http://mathlesstraveled.com/2011/06/04/the-collatz-conjecture-is-safe-for-now/","timestamp":"2014-04-18T15:45:18Z","content_type":null,"content_length":"120532","record_id":"<urn:uuid:e2a62e3d-a434-4d05-add7-525429c3a98d>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00381-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: one more.... • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50f8c062e4b007c4a2ebd9e3","timestamp":"2014-04-17T03:59:26Z","content_type":null,"content_length":"54737","record_id":"<urn:uuid:a2fe5d42-9237-4afb-9ac0-82fbb0add08c>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
Figure 1: Dynamic logic operation example, finding cognitively related events in noise, in EEG signals. The searched processes are shown in Figure 1 at the bottom row. These events are “phase cones,” circular events expanding or contracting in time (horizontal direction t, each time step is 5ms); in this case, two expanding and one contracting events are simulated as measured by an array of sensors. Direct search through all combinations of models and data leads to complexity of approximately = 10^10,000, a prohibitive computational complexity. The models and conditional similarities for this case are described in details in [44], a uniform model for noise (not shown), expanding and contracting cones for the cognitive events. The first 5 rows illustrate dynamic logic convergence from a single vague blob at iteration 2 (row 1, top) to closely estimated cone events at iteration 200 (row 5); we did not attempt to reduce the number of iterations in this example; the number of computer operations was about 10^10. Thus, a problem that was not solvable due to CC becomes solvable using dynamic logic.
{"url":"http://www.hindawi.com/journals/cin/2011/454587/fig1/","timestamp":"2014-04-20T06:43:24Z","content_type":null,"content_length":"2330","record_id":"<urn:uuid:15295462-3703-41f6-bfb8-b163b16fc6bf>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics Describing the Real World: Precalculus and Trigonometry University of FloridaPh.D., Dartmouth College Video or Audio? While this course works well in both formats, the video version features graphics to enhance your learning experience, including illustrations, images of people and events, and on-screen text. Video Exclusive This course is richly illustrated with animations and hundreds of images to enhance your comprehension of the material. Audio Exclusive This course is available exclusively for audio. Which Format Should I Choose? Video Download Audio Download DVD CD CD Soundtrack Soundtrack Download Watch or listen immediately with FREE streaming Watch Stream using apps on your iPad, iPhone, Android, or Kindle Fire Preview Stream to your internet connected PC or laptop Download files for offline viewing or listening Receive DVDs or CDs for your library Play as many times as you want What are CD Soundtracks? CD Soundtracks are the entire audio portion of this video course. They contain some references to visual images, animations, graphics and content designed for the video experience. What are Digital Soundtracks? Digital Soundtracks are the entire audio portion of this video course. They contain some references to visual images, animations, graphics and content designed for the video experience. Mathematics Describing the Real World: Precalculus and Trigonometry This course includes FREE digital streaming Enjoy instantly on your computer, laptop, tablet or smartphone. What's the sure road to success in calculus? The answer is simple: Precalculus. Traditionally studied after Algebra II, this mathematical field covers advanced algebra, trigonometry, exponents, logarithms, and much more. These interrelated topics are essential for solving calculus problems, and by themselves Show More 36 Lectures Precalculus is important preparation for calculus, but it’s also a useful set of skills in its own right, drawing on algebra, trigonometry, and other topics. As an introduction, review the essential concept of the function, try your hand at simple problems, and hear Professor Edwards’s recommendations for approaching the course. The most common type of algebraic function is a polynomial function. As examples, investigate linear and quadratic functions, probing different techniques for finding roots, or “zeros.” A valuable tool in this search is the intermediate value theorem, which identifies real-number roots for polynomial functions. Step into the strange and fascinating world of complex numbers, also known as imaginary numbers, where i is defined as the square root of -1. Learn how to calculate and find roots of polynomials using complex numbers, and how certain complex expressions produce beautiful fractal patterns when graphed. Investigate rational functions, which are quotients of polynomials. First, find the domain of the function. Then, learn how to recognize the vertical and horizontal asymptotes, both by graphing and comparing the values of the numerator and denominator. Finally, look at some applications of rational functions. Discover how functions can be combined in various ways, including addition, multiplication, and composition. A special case of composition is the inverse function, which has important applications. One way to recognize inverse functions is on a graph, where the function and its inverse form mirror images across the line y = x. You have already used inequalities to express the set of values in the domain of a function. Now study the notation for inequalities, how to represent inequalities on graphs, and techniques for solving inequalities, including those involving absolute value, which occur frequently in calculus. Explore exponential functions—functions that have a base greater than 1 and a variable as the exponent. Survey the properties of exponents, the graphs of exponential functions, and the unique properties of the natural base e. Then sample a typical problem in compound interest. A logarithmic function is the inverse of the exponential function, with all the characteristics of inverse functions covered in Lecture 5. Examine common logarithms (those with base 10) and natural logarithms (those with base e), and study such applications as the “rule of 70” in banking. Learn the secret of converting logarithms to any base. Then review the three major properties of logarithms, which allow simplification or expansion of logarithmic expressions—methods widely used in calculus. Close by focusing on applications, including the pH system in chemistry and the Richter scale in geology. Practice solving a range of equations involving logarithms and exponents, seeing how logarithms are used to bring exponents “down to earth” for easier calculation. Then try your hand at a problem that models the heights of males and females, analyzing how the models are put together. Finish the algebra portion of the course by delving deeper into exponential and logarithmic equations, using them to model real-life phenomena, including population growth, radioactive decay, SAT math scores, the spread of a virus, and the cooling rate of a cup of coffee. Trigonometry is a key topic in applied math and calculus with uses in a wide range of applications. Begin your investigation with the two techniques for measuring angles: degrees and radians. Typically used in calculus, the radian system makes calculations with angles easier. The Pythagorean theorem, which deals with the relationship of the sides of a right triangle, is the starting point for the six trigonometric functions. Discover the close connection of sine, cosine, tangent, cosecant, secant, and cotangent, and focus on some simple formulas that are well worth memorizing. Trigonometric functions need not be confined to acute angles in right triangles; they apply to virtually any angle. Using the coordinate plane, learn to calculate trigonometric values for arbitrary angles. Also see how a table of common angles and their trigonometric values has wide application. The graphs of sine and cosine functions form a distinctive wave-like pattern. Experiment with functions that have additional terms, and see how these change the period, amplitude, and phase of the waves. Such behavior occurs throughout nature and led to the discovery of rapidly rotating stars called pulsars in 1967. Continue your study of the graphs of trigonometric functions by looking at the curves made by tangent, cosecant, secant, and cotangent expressions. Then bring several precalculus skills together by using a decaying exponential term in a sine function to model damped harmonic motion. For a given trigonometric function, only a small part of its graph qualifies as an inverse function as defined in Lecture 5. However, these inverse trigonometric functions are very important in calculus. Test your skill at identifying and working with them, and try a problem involving a rocket launch. An equation that is true for every possible value of a variable is called an identity. Review several trigonometric identities, seeing how they can be proved by choosing one side of the equation and then simplifying it until a true statement remains. Such identities are crucial for solving complicated trigonometric equations. In calculus, the difficult part is often not the steps of a problem that use calculus but the equation that’s left when you’re finished, which takes precalculus to solve. Hone your skills for this challenge by identifying all the values of the variable that satisfy a given trigonometric equation. Study the important formulas for the sum and difference of sines, cosines, and tangents. Then use these tools to get a preview of calculus by finding the slope of a tangent line on the cosine graph. In the process, you discover the derivative of the cosine function. Return to the subject of triangles to investigate the law of sines, which allows the sides and angles of any triangle to be determined, given the value of two angles and one side, or two sides and one opposite angle. Also learn a sine-based formula for the area of a triangle. Given three sides of a triangle, can you find the three angles? Use a generalized form of the Pythagorean theorem called the law of cosines to succeed. This formula also allows the determination of all sides and angles of a triangle when you know any two sides and their included angle. Vectors symbolize quantities that have both magnitude and direction, such as force, velocity, and acceleration. They are depicted by a directed line segment on a graph. Experiment with finding equivalent vectors, adding vectors, and multiplying vectors by scalars. Apply your trigonometric skills to the abstract realm of complex numbers, seeing how to represent complex numbers in a trigonometric form that allows easy multiplication and division. Also investigate De Moivre’s theorem, a shortcut for raising complex numbers to any power. Embark on the first of four lectures on systems of linear equations and matrices. Begin by using the method of substitution to solve a simple system of two equations and two unknowns. Then practice the technique of Gaussian elimination, and get a taste of matrix representation of a linear system. Deepen your understanding of matrices by learning how to do simple operations: addition, scalar multiplication, and matrix multiplication. After looking at several examples, apply matrix arithmetic to a commonly encountered problem by finding the parabola that passes through three given points. Get ready for applications involving matrices by exploring two additional concepts: the inverse of a matrix and the determinant. The algorithm for calculating the inverse of a matrix relies on Gaussian elimination, while the determinant is a scalar value associated with every square matrix. Use linear systems and matrices to analyze such questions as these: How can the stopping distance of a car be estimated based on three data points? How does computer graphics perform transformations and rotations? How can traffic flow along a network of roads be modeled? In the first of two lectures on conic sections, examine the properties of circles and parabolas. Learn the formal definition and standard equation for each, and solve a real-life problem involving the reflector found in a typical car headlight. Continue your survey of conic sections by looking at ellipses and hyperbolas, studying their standard equations and probing a few of their many applications. For example, calculate the dimensions of the U.S. Capitol’s “whispering gallery,” an ellipse-shaped room with fascinating acoustical properties. How do you model a situation involving three variables, such as a motion problem that introduces time as a third variable in addition to position and velocity? Discover that parametric equations are an efficient technique for solving such problems. In one application, you calculate whether a baseball hit at a certain angle and speed will be a home run. Take a different mathematical approach to graphing: polar coordinates. With this system, a point’s location is specified by its distance from the origin and the angle it makes with the positive x axis. Polar coordinates are surprisingly useful for many applications, including writing the formula for a valentine heart! Get a taste of calculus by probing infinite sequences and series—topics that lead to the concept of limits, the summation notation using the Greek letter sigma, and the solution to such problems as Zeno’s famous paradox. Also investigate Fibonacci numbers and an infinite series that produces the number e. Counting problems occur frequently in real life, from the possible batting lineups on a baseball team to the different ways of organizing a committee. Use concepts you’ve learned in the course to distinguish between permutations and combinations and provide precise counts for each. What are your chances of winning the lottery? Of rolling a seven with two dice? Of guessing your ATM PIN number when you’ve forgotten it? Delve into the rudiments of probability, learning basic vocabulary and formulas so that you know the odds. In a final application, locate a position on the surface of the earth with a two-dimensional version of GPS technology. Then close by finding the tangent line to a parabola, thereby solving a problem in differential calculus and witnessing how precalculus paves the way for the next big mathematical adventure. Dr. Bruce H. Edwards is Professor of Mathematics at the University of Florida. Professor Edwards received his B.S. in Mathematics from Stanford University and his Ph.D. in Mathematics from Dartmouth College. After his years at Stanford, he taught mathematics at a university near Bogotá, Colombia, as a Peace Corps volunteer. Professor Edwards has won many teaching awards at the University of Florida, including Teacher of the Year in the College of Liberal Arts and Sciences, Liberal Arts and Sciences Student Council Teacher of the Year, and the University of Florida Honors Program Teacher of the Year. He was selected by the Office of Alumni Affairs to be the Distinguished Alumni Professor for 1991–1993. Professor Edwards has taught a variety of mathematics courses at the University of Florida, from first-year calculus to graduate-level classes in algebra and numerical analysis. He has been a frequent speaker at research conferences and meetings of the National Council of Teachers of Mathematics. He has also coauthored a wide range of mathematics textbooks with Professor Ron Larson. Their textbooks have been honored with various awards from the Text and Academic Authors Association. Because of the highly visual nature of the subject matter, this course is available exclusively on video.
{"url":"http://www.thegreatcourses.com/tgc/courses/course_detail.aspx?cid=1005","timestamp":"2014-04-17T00:59:52Z","content_type":null,"content_length":"238777","record_id":"<urn:uuid:15004d7f-c7df-4b2a-a24c-3d57b09e1171>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
There's been a huge kerfuffle in the quantum gravity community since this summer, when some people here at UCSB published a paper arguing that (old enough) black holes may actually be surrounded by a wall of fire which burns people up when they cross the event horizon. This is huge, because if it were true it would upset everything we thought we knew about black holes. General relativity is our best theory of gravity to date, discovered by Einstein. This is a classical theory. (In the secret code that we physicists use, classical is our code-word for "doesn't take into account quantum mechanics". Don't tell anyone I told you.) In my other posts on physics, I've been trying to explain the fundamentals of physics in the minimum number of blog posts. This post is out of sequence, since I haven't described general relativity yet! But I wanted to say something about exciting current events. In classical general relativity, a black hole is a region of space where the gravity is so strong that not even light can escape. They tend to form at the center of galaxies, and from the collapse of sufficiently large stars when they run out of fuel to hold them up. A black hole has an event horizon, which is the surface beyond which if you fall in, you can't ever escape without travelling faster than light. The information of anything falling into the black hole is lost forever, at least in classical physics. In the case of a non-rotating black hole, without anything falling into it, the event horizon is a perfect sphere. (If the black hole is rotating, it bulges out at the equator.) If you fall past the event horizon, you will inevitably fall towards the center, just as in ordinary places you inevitably move towards the future. At the center is the singularity. As you approach the singularity, you get stretched out infinitely in one direction of space, and squashed to zero size in the other two directions of space, and then at the singularity time comes to an end! Actually, just before time comes to an end, we know that the theory is wrong, since things get compressed to such tiny distances that we really ought to take quantum mechanics into account. Since we don't have a satisfactory theory of quantum gravity yet, we don't really know for sure what happens. Now it's important to realize that the event horizon is not a physical object. Nothing strange happens there. It's just an imaginary line between the place where you can get out by accelerating really hard, and the place where you can never get out. Someone falling into the black hole just sees a vacuum. If the black hole was formed from the collapse of a star, the matter from the star quickly falls into the singularity and disappears. The black hole is empty inside, except for the gravitational field itself. We don't know how to describe full-blown quantum gravity, but we have something called semiclassical gravity which is supposed to work well when the gravitational effects of the quantum fields are small. In semiclassical gravity, one finds that black holes slowly lose energy from thermal "Hawking" radiation. This radiation looks exactly like the random "blackbody radiation" coming from an ordinary object when you heat it up. Here's the important fact: You can prove that the radiation is thermal (i.e. random) just using the fact that someone falling across the horizon sees a vacuum (i.e. empty space) there. The Hawking radiation comes from just outside the event horizon. It does not come from inside the black hole, so in Hawking's original calculation it doesn't carry any information out from the inside. Nevertheless, for various reasons I can't go into right now, most black hole physicists have convinced themselves that the information eventually does come out. As the black hole radiates into space, it slowly evaporates, and eventually probably disappears entirely (although knowing what happens at the very end requires full-blown quantum gravity). If the outgoing Hawking radiation carries all the radiation out, then for a black hole at a late enough stage in its evaporation, the radiation must not be completely random, because it actually encodes all the information about what fell in. The gist of what Almheiri, Marolf, Polchinski, and Sully argued, is that if we take both of these statements in bold seriously, then it follows that the black holes are NOT in the vacuum state from the perspective of someone who falls in. Instead you would get incinerated by a "firewall" as you cross the horizon. (It's not clear yet whether this is only for really old black holes, or if it applies to younger ones too.) That's if we still believe there is an "inside" at all. The argument shows that semiclassical gravity is completely wrong in situations where we would have expected it to work great. If this is right, then it's devastating to the ideas of many of us who have been thinking about black holes for a long time. As a reluctant convert to the idea that information is not lost, I'm wondering if I should reconsider. At the end of this month, I'm going to Stanford for a weekend, since Lenny Susskind has invited a bunch of us to try to get this worked out. Exciting times! 6 Responses to Firewalls 1. that's so great that you're invited to work on such an interesting problem with Leonard Susskind!! I find his lectures on youtube to be very instructive... But anyway I have a physics question. What in the theory determines whether someone falling into a black hole would see a vacuum or a firewall? What's the debate, whether the black hole is in the vacuum state or not? I can't see why anyone crossing the event horizon would "see a vacuum"; as long as other stuff is falling in (which is generally true for galactic black holes, anyway); wouldn't light from other stuff falling in still reach you as you fall in with it? Is it that stuff falling in that would create the firewall, or is the firewall from the Hawking radiation, or something else? I guess that was more than one question :) 2. Good questions, Luke. You're quite right that if other stuff is falling into the black hole along with you, then you don't see a vacuum state. However, that's not important since we can consider a hypothetical situation in which no stuff is falling in from outside, except you. The AMPS argument says there should be a firewall even then. (Alternatively, if stuff is falling in, we could just ask about really short distance sclaes, since in quantum field theory all states look like the vacuum at sufficiently short distances.) Since the stuff falling in is in the vacuum state, the question really concerns the fields travelling outwards near the horizon. (If the outgoing modes are just outside the event horizon, they can escape as Hawking radiation, while if they are just inside, even the "outgoing" modes, which are trying to escape, get inexorably sucked in.) These are the modes that AMPS say have to be in a non-vacuum state. Quantum field theory says that if you take a vacuum state, and restrict your attention to one side of a plane dividing space into two halves, then the fields outside the plane are in a thermal state, including quanta of arbitrarily high energies. However, these thermal fields just outside are quantum mechanically entangled with the fields just inside. The reason you don't get burnt to a crisp whenever you cross the door to your bedroom, is that the adverse effects of the fields just outside are almost exactly cancelled out by the effects of the fields just inside. In semiclassical gravity, the exact same story applies for an infalling observer crossing the event horizon. The AMPS argument says that if black holes don't permanently lose information, then as time passes, this exact cancellation must eventually break down. Technically speaking, that's because the Hawking radiation at late times can't be simultaneously quantum mechanically entangled with both the radiation inside the event horizon, and the radiation emitted at earlier times. You ask what in "the theory" determines whether you see a vacuum or the firewall. If the theory is general relativity, or semiclassical gravity, the answer is that you see a vacuum, not a firewall. If the theory is quantum gravity, then AMPS claim that given certain assumptions, logical consistency requires that there be a firewall. However, we don't know what the mechanism would be, since we don't even know how to formulate the theory! 3. Why did you (once) think that information is lost? 4. Mitchell, Because that's what happens in semiclassical general relativity: the Hawking radiation is completely uncorrelated with anything that fell in. One expects that quantum gravity effects should only make a difference when the amount of spacetime curvature is large. Curvatures are large at the singularity, but not near the event horizon. Also, it seems like the information could only escape from the singularity if the laws of physics are nonlocal. Since general relativity and quantum field theory are each local (in the relevant sense), this seemed implausible. I was convinced that the information does come out by a very simple argument by Don Marolf, here at UC Santa Barbara. The basic idea is that in general relativity, the mass (= energy) of a system is measurable at spatial infinity, because of the long-range nature of the gravitational field. For example, you could put a planet in orbit around the system and measure the gravitational mass that way. But in quantum mechanics, if you can measure the energy $E$ and some other observable $\mathcal{O}$, you can also measure the same observable $\mathcal{O}(t)$ at any past or future time. (For those who want to see the math, the formula is $\mathcal{O}(t) = e^{iEt} \mathcal{O} e^{-iEt}$.) That means that if you create an object very far from the black hole, and then drop it into the black hole, the information must still be encoded somehow in the information at infinity. This is related to something called the "holographic principle". 5. Unless you can test it. Who cares. Math models are usually just approximations to reality, so all this brain twisting logic may not correspond to it anyways. Or if there is a dual between Black Holes and super conducting physics, maybe you can device a test. Oh and yes there are some duals from ST math, so go to it. This entry was posted in Physics. Bookmark the permalink.
{"url":"http://www.wall.org/~aron/blog/firewalls/","timestamp":"2014-04-18T16:12:21Z","content_type":null,"content_length":"39968","record_id":"<urn:uuid:d7278c7b-15f2-43ac-83d6-2a23147814c9>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
Data-Driven Risk Models Could Help Target Pipeline Safety Inspections by Rick Kowalewski, Pipeline and Hazardous Materials Safety Administration, and Peg Young, Ph.D., Bureau of Transportation Statistics Federal safety agencies share a common problem—the need to target resources effectively to reduce risk. One way this targeting is commonly done is with a risk model that uses safety data along with expert judgment to identify and weight risk factors. In a joint effort, the U.S. Department of Transportation's Bureau of Transportation Statistics (BTS) and Pipeline and Hazardous Materials Safety Administration (PHMSA) sought to develop a new statistical approach for modeling risk by letting the data weight the data—by using the statistical relationships among the data, not expert opinion, to develop the weights. Some key findings: • Weighting data through statistical procedures was superior to judgment-weighting in predicting (targeting) relative risk. • Statistical modeling can help not only target which operators to inspect but also focus what to inspect based on a set of risk factors. • Pipeline infrastructure, operator performance, and incident history appear to be about equally useful in predicting future risk. Program Background PHMSA's mission is to protect people and the environment from the risks inherent in the transportation of hazardous materials by pipeline and other modes of transportation. Each year the pipeline safety program inspects several hundred thousand miles of interstate pipelines carrying natural gas and hazardous liquids across the United States. These pipelines are operated by over 1,000 operators who manage systems ranging from a few miles to tens of thousands of miles. While a pipeline might seem to be a very simple system, in fact these systems are very complex, and each system has some unique characteristics. The general approach for conducting standard inspections until now has been to inspect each major part of each system every 3 years. In 2006, PHMSA initiated a research/pilot project to integrate the various kinds of inspections it conducted, to re-examine the 3-year inspection interval for standard inspections, and to focus the scope of its inspections based on operator risk. Changing inspection intervals from a periodic-basis to a risk-basis and changing from comprehensive to focused inspections reflect a significant change in approach. Program managers understood from the outset that the new approach would require a better risk model. The Current Risk Model For more than a decade, PHMSA has used the Pipeline Inspection Prioritization Program (PIPP) to schedule inspections and allocate resources. PIPP is a data-based model using 10 to 12 data variables (depending on type of pipeline) that are transformed into 9 indexes, which are added together for an overall risk score. The data variables for both hazardous liquid and gas transmission pipelines are listed in table 1. Beginning with these input variables, each one is transformed into another variable (the individual PIPP scores) ranging from 0 to 9 points, depending on the input variable, and then combined into the final total PIPP score. The variables were selected using expert judgment, and the transformations that determine the weight for each variable also used expert judgment. PIPP results are used with other information to help set scheduling priorities for inspections. PIPP has been shown to be 3 to 4 times better than random selection in identifying ("predicting") future risk as reflected in the number of pipeline incidents.^1 However, PIPP tends to underestimate risk (substantially) where the actual number of incidents is high, and overestimate risk (somewhat) where the number of incidents is low. This difference is illustrated in the the two PIPP score scatterplots in figure 1 for hazardous liquid pipelines and for natural gas pipelines, respectively. The New Model The new model predicts the number of pipeline incidents and the incident rate per mile of pipeline for each pipeline operator. To develop predictions, researchers took several years of historical data to run simulations—using, for example, data from 2002 to 2004 to "predict" 2005. The data were organized conceptually into three sets, each using different data; the results are reflected in the six remaining "risk" scatterplots in figure 1: 1. The inherent risk associated with the pipeline—represented by physical and operating characteristics such as age, materials and coatings, diameter, location, and throughput—is estimated using annual reports submitted by each pipeline operator.^2 Inherent risk should be independent of how the pipeline is managed and maintained. 2. The performance risk associated with the operator (i.e, the company)—represented by safety deficiencies—is estimated using the results of past safety inspections—particularly those with the broadest scope, known as Integrity Management (or IM) inspections.^3 Performance risk should be independent of the pipeline characteristics. 3. The historical risk associated with past incidents is estimated from incident data reported to PHMSA by operators.^4 Historical risk is assumed to reflect the combination of both inherent risk of the pipe and performance risk of the operator. Each set of data generated separate predictions of future incidents that were also combined into a single prediction for each operator. The diagonal line in each graph in figure 1 represents perfect prediction in which the predicted number of incidents equals the actual number of incidents. The further the data points are from the diagonal line, the poorer the performance of the predictive model. Gas transmission operators were separated from hazardous liquid operators, as they are in PIPP, because they present very different system profiles, different risks, different data, and different numbers of incidents (see table 2). Other breakouts might also make sense (e.g., by product for liquid pipelines, or onshore v. offshore pipeline) but the research has not explored these. For presentation purposes, small operators (with less than 500 miles of pipeline) were separated from large operators because their operating environment tends to be different and the relatively lower number of incidents makes the results somewhat less reliable. The analysis behind all the models were performed in the statistical software package SAS 9.1. Statistical approaches Three key characteristics of the data influenced the choice of statistical models: 1. Incidents occur infrequently, so the models would have to deal well with small numbers. 2. The number of incidents is a count value, with no fractional or negative values. 3. The number of incidents per operator is highly skewed, with a large number of operators having zero incidents in any given year. Traditional linear regression, which relies on the assumption of normally distributed data, is inappropriate for count data that are highly skewed towards zero. Two other models—the Poisson distribution and negative binomial regression^5—can handle such data. Another important quality of these two models is their ability to control for exposure variables, such as miles of pipeline. The negative binomial is the more general model, and this was used to detect and weight risk variables for both inherent risk and performance risk.^6 The analysis of the historical risk associated with past incidents presented a different set of conditions. The past 3 years of incidents and the next (to-be-predicted) year of incidents most likely are not independent from one another, so the data were transformed to create an "orthogonal" regression model that would allow modeling the 3 years of incidents together to estimate future risk. ^7 Each of these major outputs—inherent risk, performance risk, and historical risk—provide a separate prediction of risk, but they can also be combined to present a single estimate. The approach taken here was to take the average of the three results.^8 Other possibilities not examined here might use another model to weight these three as inputs to an overall risk score, again letting the data weight the data, or developing an equation that might relate any one output to the other two. Figure 1 provides a graphical synopsis of the predictive accuracy for estimating the number of accidents per operator based on PIPP scores, inherent risk, operator risk, and historical risk. The predictive quality of each model tested was compared using a standard statistical measure of error—the mean absolute deviation (MAD)—which averages the absolute difference between the predicted value and the actual value for each operator (see table 3). For example, when the model predicts 7.5 incidents and 5 actually occur, the error is 2.5; when the model predicts 4 incidents and 5 actually occur, the error is 1. MAD provides a sense of "how far off" the model predictions are from the actual values. Testing Inputs to the Model A key indicator for the effectiveness of any new model was its ability to predict risk better than the existing judgment-weighted model (PIPP ranking). In practice, this should be fairly easy because a statistical model could simply reweight the 10 input variables in PIPP or the 9 transformed variables for a better prediction using data-weighting. Other obvious inputs to test included: • the nave model (which says that what happened last year is likely to happen again next year); • mileage alone (which suggests that the extent of the system might be the most important indicator of the risk of incidents); • the input variables into PIPP—reweighted using the new statistical procedures; • the output variables (L-scores) from PIPP before the PIPP ranking is calculated—reweighted using the new statistical procedures; and • each of the new indicators of risk—estimating inherent risk associated with the pipeline, performance risk associated with the operator, and historical risk associated with past incidents. The results demonstrate that PIPP performs the worst in targeting risk, and that reweighting the PIPP variables can improve the predictive quality (reduce the error). Surprisingly, mileage alone and the nave model both were better (smaller error) than PIPP in predicting future risk, but such simple models offer little guidance in selecting appropriate sites to inspect. The new model performed well (with a MAD of 1.0), although the analysis indicated noticeable differences between gas transmission operators and hazardous liquid operators. Hazardous liquid pipeline incidents are more prevalent and more concentrated (fewer operators), so the data provide a better basis for prediction. The three main components of the new model—inherent risk, performance risk, and historical risk—performed about equally well in predicting future incidents. Findings From the Modeling Research Modeling inherent risk associated with the pipeline demonstrated that mileage, throughput (barrel-miles per year), date of installation, and pipeline diameter were significant risk factors. Six variables were significant in predicting future incidents for gas transmission systems, and 14 variables were significant for hazardous liquid systems. About half of these variables were negatively correlated with risk, meaning that they had a "protective effect." (Table 4 provides the listing of the significant variables for both models.) Modeling performance risk associated with the operator demonstrated that a few key inspection areas from Integrity Management^9 inspections were most highly correlated with future risk. One area ( integrity assessment review) was negatively correlated, suggesting that finding deficiencies in this area helped an operator rapidly improve its safety program. The most significant risk factor was in the area of continual evaluation and assessment—which inspection staff have suggested might be a critical indicator of an operator's safety program. Modeling historical risk associated with past incidents demonstrated that the passage of time rapidly degrades the utility of the data. After 2 years, past incidents do not appear to be useful in predicting future risk. The most recent year is most important, and the model weights this year most heavily. Significant Data and Modeling Issues While the model demonstrates the general effectiveness of statistical tools as an alternative to judgment-weighting, several important data limitations and modeling issues remain to be addressed. Some of the more important issues are listed here: • Data on operators' systems and operator relationships reflect a snapshot in time; changes might not be captured for up to a year, so some data are outdated. • Deficiency data from inspections are largely limited to one major type of inspection—Integrity Management inspections—representing only a small portion of the inspections conducted. • The model does not differentiate more serious incidents (the focus of the agency's performance goals) from those with less severe consequences (actual or potential). • The model introduces an exponential function that can dramatically over-predict incidents when new data are outside the historical range. • Small numbers of incidents each year limit the ability to isolate combinations of factors that might be statistically significant. Continuing Research The first line of research, currently underway, is to refine the incident measures to reflect the consequences of incidents—to weight incidents by potential severity in terms of harm to people and/or the environment. Using conditional probabilities, we have found so far that three variables help explain whether an incident is likely to be serious: fire/explosion (indicating a violent incident), whether the incident occurred in a high consequence area (indicating proximity to people), and incident cause (e.g., corrosion or excavation damage). Some general model improvements are planned as well. These would separate out onshore v. offshore systems, interstate v. intrastate operators, and certain commodities that have special risk characteristics. The relationship between inherent risk, performance risk, and historical risk needs to be further explored and modeled. The issue of total number of incidents v. the rate of incidents per mile needs to be addressed; it is not clear which is more important in targeting inspections. And operator relationships—where some operators are part of a larger group of operators that share certain plans and management—need to be addressed because some inspections are targeted at this higher corporate level. There are several areas where the measures for inherent risk, performance risk, and historical risk could be enhanced. Improvement would include targeted analyses of certain key variables to better understand why they are or aren't significant risk factors, adding more inspection data, and testing the time-sensitivity of inspection data. After refinements are made, the model needs to be validated with data from other years, uncertainty should be incorporated into the results, and PHMSA program staff need to be involved in formulating the best presentation of results for the intended use—targeting and focusing inspections. A parallel effort will extend the concepts from this modeling effort to another safety program—hazardous materials transportation safety—which cuts across four other modes of transportation. The model might be more generally applicable in other federal safety programs as well. ^1 By scaling PIPP scores to the number of actual incidents, predictive quality was measured by the correct "hits" to determine the percent correct. This was compared to a random selection model where each operator was simply assigned an equal share of points. ^2 See www.phmsa.dot.gov for access to annual reports filed by pipeline operators. ^3 Deficiency data are captured at the point of inspection for Integrity Management (IM) inspections of pipeline operators. Where deficiencies are serious, PHMSA pursues enforcement action. Data on these actions are available at www.phmsa.dot.gov. ^4 Incident data are available at www.phmsa.dot.gov. ^5 In a recent review of the Motor Carrier Safety Status Measurement System, or SAFESTAT, model used by the Federal Motor Carrier Safety Administration, the Government Accountability Office (GAO) recommended a negative binomial regression in place of expert opinion to weight the risk factors used in targeting motor carrier safety inspections. This work by GAO was a strong factor in the risk modeling effort by BTS and PHMSA. See Motor Carrier Safety: A Statistical Approach Will Better Identify Commercial Carriers That Pose High Crash Risks Than Does the Current Federal Approach, June 2007 (GAO-07-585). ^6 For a good explanation of the Poisson and negative binomial models and how they are estimated in SAS, see Logistic Regression Using SAS: Theory and Application, by Paul D. Allison, 1999 (SAS Institute Inc.). ^7 "Orthogonal variables" are linearly independent. For details on orthogonal regression, see A. Stuart, J.K. Ord, and S.F. Arnold. 1999. Kendall's Advanced Theory of Statistics, 6^th ed. London: Edward Arnold, pp. 764-766. ^8 Although historical risk—using incident data—might reflect the nexus of the inherent risk associated with the pipeline and the performance risk associated with the operator, using equal weights to average provides a simple approximation of overall risk. Other statistical methods might provide a better way to combine these factors. ^9 The Integrity Management program was introduced over the last several years, first for hazardous liquid pipelines then later for gas transmission pipelines. This program requires pipeline operators to identify and understand the risks in their systems, identify high consequence geographic areas, establish programs for inspecting and repairing pipelines, and continuously monitoring their systems. About this Report This report is the result of joint research by Rick Kowalewski, Senior Advisor of the Pipeline and Hazardous Materials Safety Administration (PHMSA), and Peg Young, Statistician for the Bureau of Transportation Statistics (BTS). For related BTS data and publications: www.bts.gov For questions about this or other BTS reports, call 1-800-853-1351, email answers@bts.gov, or visit www.bts.gov.
{"url":"http://www.rita.dot.gov/bts/sites/rita.dot.gov.bts/files/publications/special_reports_and_issue_briefs/special_report/2008_010/html/entire.html","timestamp":"2014-04-20T13:28:23Z","content_type":null,"content_length":"54974","record_id":"<urn:uuid:1eabef6d-c157-42ca-8931-7f82e7c1c9b9>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
Size Functions for the Morphological Analysis of Melanocytic Lesions International Journal of Biomedical Imaging Volume 2010 (2010), Article ID 621357, 5 pages Research Article Size Functions for the Morphological Analysis of Melanocytic Lesions ^1Department of Mathematics, The Advanced Research Center on Electronic Systems for Information and Communication Technologies E. De Castro (ARCES), University of Bologna, 40126 Bologna, Italy ^2Dermatologia Oncologica, Istituto Tumori Romagna (IRST), 47014 Meldola, FC, Italy ^3Ospedale Niguarda, 20162 Milano, Italy Received 1 October 2009; Accepted 20 December 2009 Academic Editor: Guo Wei Wei Copyright © 2010 Massimo Ferri and Ignazio Stanganelli. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Size Functions and Support Vector Machines are used to implement a new automatic classifier of melanocytic lesions. This is mainly based on a qualitative assessment of asymmetry, performed by halving images by several lines through the center of mass, and comparing the two halves in terms of color, mass distribution, and boundary. The program is used, at clinical level, with two thresholds, so that comparison of the two outputs produces a report of low-middle-high risk. Experimental results on 977 images, with cross-validation, are reported. 1. Introduction The incidence of malignant melanoma in fair-skinned patients has increased dramatically in most parts of the world over the past few decades. Because the prognosis of melanoma depends almost entirely on tumor thickness, early detection of thin melanoma is important for the survival of patients [1, 2]. The diagnostic accuracy of the clinical examination of pigmented skin lesions, however, is still rather poor. Literature results arise the evidence that (i)the ability of general practitioners to early diagnose CMM with the naked eye is very low; (ii)the ability of dermatologists to early diagnose CMM with the naked eye ranges from 50% to 75%; (iii) there is a high rate of false positive (causing unneeded surgical excision). In the last decade dermoscopy has changed the evaluation of the diagnosis of pigmented skin lesions. Dermoscopy is a noninvasive technique that enables the clinician to perform direct microscopic examination of diagnostic features, not seen by the naked eye, in pigmented skin lesions. This technique is more accurate than naked eye examination for the diagnosis of cutaneous melanoma, in suspicious skin lesions when performed in the clinical setting [3]. A complementary effort is in the automatization of the diagnostic process. Several rather successful computer programs have been implemented to the aim of an automatic analysis of melanocytic lesions and their discrimination between naevi and melanomas (see, e.g., [4–8]; see also [9, 10] for a comparison between automatic and human performance). Most of them keep into account the traditional ABCDE parameters used by dermatologists: Asymmetry (of boundary, texture, and color), Boundary (irregularity and dishomogeneity), Color (presence of several colors), Dimension, and Evolution. In particular, asymmetry is generally based on quantitative comparison of the two parts into which a lesion image is split by its principal axes. Here we focus on asymmetry, perhaps the most important cue. We have developed a new method for comparing in a qualitative, yet precise way the two parts of a lesion at the sides of a splitting line. The mathematical tool for comparison is the theory of Size Functions, applied to three features: boundary shape, mass, and color distribution. For each splitting line of a pencil we get an asymmetry measure, so forming a map (two for each of the three features). Some characteristic numbers of the six maps are finally fed to a Support Vector Machine. A classification experiment has been led on data set of 977 lesions with very good results. The whole research is a follow-up of the ADAM project of the European Union. We are well aware that “qualitative measure” reads like an oxymoron; of course, we mean that we compute a precise, objective, repeatable measure of the difference between the two half images; yet, this difference is of a qualitative kind, in that it is not bound to geometric deformations, superimpositions or the like. This is actually the great advantage of using topological and not just geometrical tools. 2. Size Functions Size Functions (SFs) are modular invariants of whatever signal the user is interested in [10]; in the present case, the concerned features are boundary shape, mass, and color distribution. Size functions are maps from the plane to the (extended) natural numbers. They depend on two inputs: an object (e.g., a lesion boundary) and a real map, called measuring function defined on it (e.g., distance from the center of mass). Essentially, the SF registers the behavior of the measuring function by using Morse theory (see [11]). SFs are “qualitative” not only in that they are topological in nature, but also in that a “similarity” based on them depends on the user's choice of a measuring function and of a distance between SFs adapted to the context. Let us recall the definition of an SF, adapted from the more general setting of [12], where measuring functions are allowed a multidimensional range. Consider a continuous real-valued function , defined on a subset of a Euclidean space. The Size Function of the pair is a function . For each pair , consider the set . The value is defined to be the number of the connected components of which contain at least one point in . The discrete version of the theory substitutes the subsets of the plane with a graph , the function with a function and the concept of topological connectedness with the usual connectedness notion for graphs. Figure 1 shows the size function obtained from a curve with the ordinate as measuring function. 3. Classification SFs have a standard structure, the one of superimposed triangles already apparent in Figure 1. This has an important outcome, in that the relevant information can be condensed in the vertices of those triangles [13]. Comparison of two images (as far as the criterion intrinsic to the measuring function is concerned) can then be carried out by comparing the sets of these points. Several distances can be defined on the set of SFs; one which is very successful is the matching distance (see Figure 2). Distance from templates generally produces numbers of some significance with respect to a classification. Unfortunately, there do not exist archetypal naevi or melanomas, so the task is harder than for classical classification problems. We use distances for measuring asymmetries, as we shall see further on. These distances produce other characteristic numbers. At this point, Statistical Learning comes into play; Vectors of characteristic numbers are the input of a Support Vector Machine. 4. Segmentation The first processing step is segmentation, that is, the isolation of the skin lesion from its background. (See Figure 3; the separating curve is drawn green). This is carried out with well-tested methods depending on several parameters, most of which have been fixed by experiment. Tuning of one of the remaining, permits the removal of most hairs. This is notoriously a serious problem in the processing of dermatological images, and has been solved by the operations of erosion and dilation coming from mathematical morphology. 5. Asymmetries The experience of dermatologists suggests that a major criterion for suspecting malignancy is the asymmetry of various aspects of the lesion. We have followed this suggestion by splitting each lesion in two halves by a straight line passing through the center of mass. Comparison of the two halves is then performed by computing the distance between their Size Functions. This represents a definite progress with respect to classical methods for detecting asymmetry; these detected only geometrical asymmetry, while distances of Size Functions determine also qualitative asymmetry. We repeat the splitting for 45 equally spaced radial lines, so getting distance as a function of angle (see Figure 4). From this curve the software extracts a set of characteristic numbers: min, max, average, min plus the value at from min, integral, first moment, variation, min derivative, max derivative, integral of absolute value of derivative, and variation of absolute value of derivative. A Support Vector Machine with a third-order kernel is fed with these numbers, computed for each measuring function. Actually, the vectors also contain three more parameters: area, perimeter, and a bumpiness measure coming from the SF of the whole lesion, with distance from center of mass as the measuring function. An initial set of experiments had been carried out with 90 lines instead of 45, but the hit ratio was just slightly higher, while almost doubling computing time. We have used six measuring functions to distil the structure of boundary, mass distribution, and color distribution, respectively. The first is the distance (of boundary points) from the splitting line. The second sums grey levels along segments orthogonal to the splitting line. The third sums distances of colors (in RGB space) of consecutive pixels along segments orthogonal to the splitting line. Our initial experiments used just these three measuring functions. Adding their three opposite functions improved the hit ratios of 2 to 5 percentage points. 6. Experimental Results The present method has been tested on well-controlled lesion images. The acquisition setup consists of an LEICA 650M stereomicroscope and a Sony 3CCD-930 color video camera. The illumination of the stereomicroscope consists of a 12V/50W halogen lamp that creates a bundle of light perpendicular to the area of interest. The digital images have been archived by means of the DBDERMO Mips software package (Dell'Eva-Burroni, Siena). Over half of the data set used in the present research, had already been the subject of a formal study of clinical diagnostic validation using also the local population-based cancer registry (i.e., Registro Tumori Romagna) to cross-check for possible false negatives, published on [14]. The data set comes from the daily practice of one of us (Stanganelli); of course, only “interesting” naevi had been acquired. All melanomas and several naevi have been subjected to histological test; all remaining naevi have been subjected to follow-up. We have selected 977 images of melanocytic lesions (melanomas and naevi) acquired in epiluminescence microscopy with a fixed 16-fold magnification. The only selection criterion was that the lesion be entirely visible. The data set contains 50 melanomas (28 of them with thickness less than 0.75mm) and 927 naevi. Cross-validation has been performed in three ways. In test H, every second image was assigned to the training set (melanomas were listed consecutively). In tests R1 and R2, a training set of 25 melanomas and 500 naevi was randomized from the data set. The test set was formed by the complement (the remaining 25 melanomas and 427 naevi). A fourth test (S) was performed without cross-validation, with the whole data set both as training and test set; we interpret the not much higher scores of test S as a proof of stability. In Table 1 we report, for each of tests H, R1, R2, and S, the specificity and sensitivity of what we judge to be the best performances. As a further information, in test S a 100% specificity was attained only at cost of 4% sensitivity, but the decrease of specificity to 93.64% yielt a jump to 70% sensitivity. 100% sensitivity was reached at 63.65% specificity. We also report the ROC curve of test S in Figure 5. Our system is not intended to be provided to the public as a yes/no diagnostic tool; it yields a risk index in the following way. Two classifiers, one tuned at high sensitivity, the other at fairly good specificity, give their response; if they agree to classify the lesion as a naevus (resp., a melanoma) then a low (resp., high) risk is stated; if they disagree, the output is of middle risk. A comparison has been done between the output of this compound classifier and the judgement of an expert dermatologist, who had classified the lesions as sure melanomas, sure naevi and uncertain. The percentages reported in Table 2 refer to the fractions of the three classes (as classified by the human expert) labeled by the machine with the three risk levels. 7. Comparison A true comparison with other research group is problematic. As stressed in [5], there are quite different selection criteria, melanomas/naevi ratios, data set sizes, analysis methods. Instead of reporting selected results of competitors, we refer to Table 1 of that thorough paper. We just would like to comment on very high sensitivity scores (over 95%). With the noticeable exception of Seidenari et al. [4], such scores seem to have been attained either with very small data sets, or with high melanoma percentages, so in situations which appear to be rather far from real-world ones. Even counting them, the result of our cross-validated test R1 is placed in the top third of the reported scores. Of course, the single-set test S places us at an even higher rank. It would be interesting to compare—as suggested by a referee—the asymmetry assessment given by our method with the one given by an expert dermatologist. This is unfortunately not possible, since our evaluation does not consist of a single measure, but of 66 (see Section 5), what compelled us to use Support Vector Machines for classification. In [15] a comparison of the performance of our system and of human operators (three Dermatologists and three General Practictioners) was carried out on a smaller data set of 31 melanomas and 103 naevi. We report the results in Table 3. 8. Conclusions The true novelty of the presented method consists in the use of a qualitative but objective mathematical tool, the Size Functions, to evaluate asymmetry (of boundary, color, and mass distribution). Three experiments with 977 lesions, carried out under cross-validation, show very good performances. Are the results sufficient to make our method definitely preferable to others? No! But its good hit ratio, together with the complete independence from the competitors' tools, make our method a tempting candidate for integration. In this line of thought, comparison aimed to integration should maybe prevail over competition. The work performed within the activity of ARCES, of CIRAM, and of INdAM-GNSAGA. The authors wish to thank the colleagues of the VM Group (Bologna) and of CPO (Ravenna) for their help. 1. A. W. Kopf, T. G. Salopek, J. Slade, A. A. Marghoob, and R. S. Bart, “Techniques of cutaneous examination for the detection of skin cancer,” Cancer, vol. 75, no. 2, pp. 684–690, 1994. 2. I. Stanganelli, C. Clemente, and M. C. Mihm Jr., “CD Melanoma cutaneo—Atlante multimediale interattivo per la prevenzione, la diagnosi e la terapia del Melanoma e delle lesioni pigmentate cutanee,” Istituto Oncologico Romagnolo Ed., MAF Torino, Italy, 2001. 3. M. Hintz-Madsen, L. Kai Hansen, J. Larsen, E. Olesen, and K. T. Drzewiecki, “Detection of malignant melanoma using neural classifiers,” in Solving Engineering Problems with Neural Networks, Proceedings of the International Conference Engineerings Applications of Neural Networks (EANN '96), vol. 96, pp. 395–398, 1996. 4. S. Seidenari, G. Pellacani, and A. Giannetti, “Digital videomicroscopy and image analysis with automatic classification for detection of thin melanomas,” Melanoma Research, vol. 9, no. 2, pp. 163–171, 1999. 5. B. Rosado, S. Menzies, A. Harbauer, et al., “Accuracy of computer diagnosis of melanoma: a quantitative meta-analysis,” Archives of Dermatology, vol. 139, no. 3, pp. 361–367, 2003. View at Publisher · View at Google Scholar 6. A. Fueyo-Casado, F. Vázquez-López, J. Sanchez-Martin, B. Garcia-Garcia, and N. Pérez-Oliva, “Evaluation of a program for the automatic dermoscopic diagnosis of melanoma in a general dermatology setting,” Dermatologic Surgery, vol. 35, no. 2, pp. 257–259, 2009. View at Publisher · View at Google Scholar · View at PubMed 7. S. M. Rajpara, A. P. Botello, J. Townend, and A. D. Ormerod, “Systematic review of dermoscopy and digital dermoscopy/artificial intelligence for the diagnosis of melanoma,” British Journal of Dermatology, vol. 161, no. 3, pp. 591–604, 2009. View at Publisher · View at Google Scholar · View at PubMed 8. R. J. Friedman, D. Gutkowicz-Krusin, M. J. Farber, et al., “The diagnostic performance of expert dermoscopists vs a computer-vision system on small-diameter melanomas,” Archives of Dermatology, vol. 144, no. 4, pp. 476–482, 2008. View at Publisher · View at Google Scholar · View at PubMed 9. S. Dreiseitl, M. Binder, K. Hable, and H. Kittler, “Computer versus human diagnosis of melanoma: evaluation of the feasibility of an automated diagnostic system in a prospective clinical trial,” Melanoma Research, vol. 19, no. 3, pp. 180–184, 2009. View at Publisher · View at Google Scholar · View at PubMed 10. P. Frosini and C. Landi, “Size theory as a topological tool for computer vision,” Pattern Recognition and Image Analysis, vol. 9, pp. 596–603, 1999. 11. P. Frosini, “Connections between size fonctions and critical points,” Mathematical Methods in the Applied Sciences, vol. 19, no. 7, pp. 555–569, 1996. 12. S. Biasotti, A. Cerri, P. Frosini, D. Giorgi, and C. Landi, “Multidimensional size functions for shape comparison,” Journal of Mathematical Imaging and Vision, vol. 32, no. 2, pp. 161–179, 2008. View at Publisher · View at Google Scholar 13. P. Frosini and C. Landi, “Size functions and formal series,” Applicable Algebra in Engineering, Communications and Computing, vol. 12, no. 4, pp. 327–349, 2001. View at Publisher · View at Google Scholar · View at MathSciNet 14. I. Stanganelli, M. Serafini, and L. Bucchi, “A cancer-registry-assisted evaluation of the accuracy of digital epiluminescence microscopy associated with clinical examination of pigmented skin lesions,” Dermatology, vol. 200, no. 1, pp. 11–16, 2000. 15. I. Stanganelli, A. Brucale, L. Calori, et al., “Computer-aided diagnosis of melanocytic lesions,” Anticancer Research, vol. 25, no. 6, pp. 4577–4582, 2005.
{"url":"http://www.hindawi.com/journals/ijbi/2010/621357/","timestamp":"2014-04-19T21:24:31Z","content_type":null,"content_length":"57162","record_id":"<urn:uuid:d128577f-6f81-4296-a739-b147ccaa85d6>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Given the triangle below, what is cot∡B? Right triangle ABC with AC measuring 4 and BC measuring 7 square root of 33 over 4 7 times square root of 33 over 33 square root of 33 over 7 4 times square root of 33 over 33 • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/511525f4e4b0e554778b66d7","timestamp":"2014-04-20T06:36:13Z","content_type":null,"content_length":"52463","record_id":"<urn:uuid:b33194c6-fb0d-4b3c-b342-979ea358aefc>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
Few statistical measures for an image December 11th 2010, 04:06 PM Few statistical measures for an image I'm not sure which section it belongs to, so I thought I should post in "others". The thing is I have a 3x3 matrix which represents a nine pixel square from grayscale image, so let's say the matrix can have values from 0 to 255. Now I need to make few statistical measures for the square based on this matrix, but whenever I tried to google any of them it always showed solutions that use some dedicated programs (like Matlab) or C++ libraries etc., while I only want pure mathematical formulas so I could calculate everything by myself based on the said matrix. The measures I need to calculate are: - measure of smoothness - measure of uniformity - average gray level - average contrast - thrid moment - entropy Is there anyone who knows how to get these from the matrix? Any help will be apreciated. December 11th 2010, 05:58 PM I would imagine that average grey level is quite simple (take the average). I suppose the third moment is derived once you know the mean ( is it the cube root of the sum of absolute values of the cubes of the differences of each element from the mean? ) -- sorry, I would write it as a formula rather than words if I was fluent in the code. I don't know the formula for other variables. December 18th 2010, 05:49 PM Thank you very much, does anyone else know other formulas? December 21st 2010, 08:49 AM May be you could give us the source code in C or C++ of these functions and we could write it in pure math.
{"url":"http://mathhelpforum.com/advanced-math-topics/165978-few-statistical-measures-image-print.html","timestamp":"2014-04-17T19:39:17Z","content_type":null,"content_length":"5171","record_id":"<urn:uuid:3913ef34-69ef-4f98-8700-41a3b943f73c>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] Calculating residues of an even function, spot the mistake please May 14th 2009, 05:12 AM [SOLVED] Calculating residues of an even function, spot the mistake please I'm having trouble with the following exercise: let $U = - U$ be symmetric and $f: U \to \mathbb{C}$ meromorphic. Let $f$ be even, i.e. $f(z) = f(-z)$. Show that $\text{res}_z(f) = - \text{res}_{-z}f$. What I did was the following: let $r>0$ such that $z$ is the only singularity in $\overline{D_r(z)}$. Then let $\kappa:[0,2\pi] \to \mathbb{C}, t\mapsto z + e^{it}$. Then $\text{res}_z(f) = \frac {1}{2\pi i} \int_\kappa f(\xi) d\xi$. Then define $\tilde{\kappa}:= - \kappa(\pi + t)$. Then $\text{res}_{-z}(f) = \frac{1}{2\pi i} \int_{\tilde{\kappa}} f(\xi) d\xi$$= \frac{1}{2\pi i}\int_0^{2\pi}r\cdot f(-\kappa(\pi+t))dt = <br /> \frac{1}{2\pi i}\int_0^{2\pi}r\cdot f(\kappa(\pi+t))dt = \frac{1}{2\pi i} \int_{\kappa} f(\xi) d\xi = \text{res}_z f$ But this is not what i was supposed to show. Where is the mistake? Thank you! May 14th 2009, 10:49 AM I'm having trouble with the following exercise: let $U = - U$ be symmetric and $f: U \to \mathbb{C}$ meromorphic. Let $f$ be even, i.e. $f(z) = f(-z)$. Show that $\text{res}_z(f) = - \text{res}_{-z}f$. What I did was the following: let $r>0$ such that $z$ is the only singularity in $\overline{D_r(z)}$. Then let $\kappa:[0,2\pi] \to \mathbb{C}, t\mapsto z + {\color{red}r}e^{it}$. Then $\text {res}_z(f) = \frac{1}{2\pi i} \oint_\kappa f(\xi) d\xi$. Then define $\tilde{\kappa}:= - \kappa(\pi + t)$. Then $\text{res}_{-z}(f) = \frac{1}{2\pi i} \oint_{\tilde{\kappa}} f(\xi) d\xi$$= \frac{1}{2\pi i}\int_0^{2\pi}r\cdot f(-\kappa(\pi+t))dt = <br /> \frac{1}{2\pi i}\int_0^{2\pi}r\cdot f(\kappa(\pi+t))dt = \frac{1}{2\pi i} \oint_{\kappa} f(\xi) d\xi = \text{res}_z f$ But this is not what i was supposed to show. Where is the mistake? When you substitute $\xi = \kappa(t)$ in the integral $\text{res}_{z}(f) = \frac{1}{2\pi i} \oint_{\kappa} f(\xi)\, d\xi$, you have to remember to replace $d\xi$ by $\kappa'(t)dt = ire^{it}dt$. In the integral for the singularity at –z, the corresponding calculation is $d\xi = \tilde{\kappa}'(t)dt = -ire^{it}dt$. That accounts for the change of sign.
{"url":"http://mathhelpforum.com/calculus/88983-solved-calculating-residues-even-function-spot-mistake-please-print.html","timestamp":"2014-04-21T03:19:00Z","content_type":null,"content_length":"12120","record_id":"<urn:uuid:fadf4770-ca18-4aa4-ad0b-24dba6b76bba>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
Hilbert transforms of measures up vote 10 down vote favorite Given a finite measure $\mu$ on the real line $\mathbb R$, one definition of its Hilbert transform is $(H\mu)(y) =\frac{1}{\pi}(PV)\int \frac{d\mu(x)}{x-y}$ which is known to exist almost everywhere on $\mathbb R$. Another way is to define the Borel transform ${\mathcal H}(z) = \frac{1}{\pi}\int\frac{d\mu(x)}{x-z}$ for $z\in \mathbb C\setminus \mathbb R$, and then take the limit of $\Re({\ mathcal H}\mu)(x+iy)$ as $y\downarrow 0$, such limit existing almost everywhere. In the case that $\mu$ is absolutely continuous it is stated $explicitly$ in the literature that these agree (almost everywhere), and it is $implicit$ in the use of the term `Hilbert transform' in the literature that the two definitions agree (almost everywhere) for general finite $\mu$. Does anyone know where one can find explicit proof(s)? (Proofs for the $L^p$ case are easy to find.) ca.analysis-and-odes fa.functional-analysis reference-request Split $\mu$ into a smooth density measure $\mu_1$, a small absolutely continuous measure $\mu_2$ and a singular measure $\mu_3$. Since the definitions agree on $\mu_1$, it is enough to check that 10 the difference is small on $\mu_2$ and $\mu_3$ outside a small measure set. But it is dominated by the (restricted to $r\le r_0$) Hardy-Littlewood maximal function, which is small outside a set of small measure (due to small total mass for $\mu_2$ and due to being supported on a zero measure set for $\mu_3$). In short: just take the classical $L^1$ proof and modify it a tiny bit. – fedja Mar 25 '11 at 12:45 @Rick: Could you explain why $H(\mu)$ is well defined for arbitrary finite measure $\mu$? – Syang Chen Mar 27 '11 at 9:04 @Xianghong: this is precisely Rick's question, and fedja's comment sketches why this works. – Yemon Choi Mar 27 '11 at 19:09 Hi Rick. I've taken the liberty of adding a "reference-request" tag to your question, since I got the impression that you were looking as much for a place where these facts are written down, as for a sketch of why they are true. – Yemon Choi Mar 30 '11 at 23:16 add comment 2 Answers active oldest votes The Hilbert transform of $\mu$ is the inverse Fourier transform of the product $$ -i\hat\mu(\xi){\pi \text{sign}\xi}, $$ using the definition $\hat u(\xi)=\int e^{-2i\pi x\cdot \xi} u(x) up vote 2 dx$ so that the inverse Fourier transform is $u(x)=\int e^{2i\pi x\cdot \xi} \hat u(\xi) d\xi$. This product makes sense since $\hat\mu\in L^\infty$. down vote add comment Showing that the two definitions agree almost everywhere is easy! Using the truncated transform $$ \mathcal{H}\_\epsilon\mu(x)=\frac1\pi\int_{\lvert y-x\rvert > \epsilon}\frac{d\mu(y)}{x-y} $$ then, by definition, $\mathcal{H}\mu(x)=\lim_{\epsilon\to0}\mathcal{H}\_\epsilon\mu(x)$ for all $x$ at which the limit exists. Convolve the identity $$ \Re\left(\frac{1}{x+ih}\right)=\frac {x}{x^2+h^2}=\int_0^11_{\left\lbrace\lvert x\rvert > h\sqrt{t/(1-t)}\right\rbrace}\frac1x\\,dt $$ with $\frac1\pi d\mu$ to obtain, $$ \Re\left(\mathcal{H}\mu\right)(x+ih)=\int_0^1\mathcal{H}_ {h\sqrt{t/(1-t)}}\mu(x)\\,dt. $$ The integrand on the right hand side tends to $\mathcal{H}\mu(x)$ as $h\to0$, whenever this is defined, so bounded convergence gives $\Re(\mathcal{H}\mu) The question also asks how to show that $\mathcal{H}\mu$ is defined almost everywhere. I don't have a reference for this, but the standard proof can be modified without too much difficulty. The maximal operator $\mathcal{H}^*\mu(x)\equiv\sup_{\epsilon > 0}\lvert\mathcal{H}_\epsilon\mu(x)\rvert$ is weak (1,1) continuous, $$ \left\lvert\left\lbrace x\colon\mathcal{H}^*\mu(x) > \ lambda\right\rbrace\right\rvert\le C\lVert\mu\rVert/\lambda, $$ for all $\lambda > 0$ and a fixed constant $C$ (I'm using $\lvert\cdot\rvert$ to denote the Lebesgue measure). I'm working from up vote the notes Interpolation, Maximal Operators, and the Hilbert Transform, by Michael Wong (Theorem 8.7). These notes look at the case where $\mu$ is absolutely continuous, but the proof carries 2 down across to the general case with no big changes. By weak continuity, if there exists measures $\mu_n$ with $\lVert\mu-\mu_n\rVert\to0$ such that $\mathcal{H}\mu_n$ all exist almost everywhere, then $\mathcal{H}\mu$ exists almost everywhere. If $\mu$ is absolutely continuous with differentiable density, then $\mathcal{H}\mu$ exists everywhere in the standard way. As the differentiable functions are dense in $L^1$, this extends to all absolutely continuous measures. Only the case for singular measures $\mu$ remain. So, there exists a measurable $S\subseteq\mathbb{R}$ with zero Lebesgue measure with $\mu(S^c)=0$. Then we can choose compact sets $K_n\subseteq S$ with $\mu(S\setminus K_n)\to0$. Note that the measures $\mu_n\equiv 1_{K_n}\cdot\mu$ are supported on the compact sets $K_n$ and $\lVert\mu-\mu_n\ rVert\to0$. It follows that $\mathcal{H}\mu_n(x)$ is defined everywhere outside of $K_n$ (in fact, $\mathcal{H}\_\epsilon\mu_n(x)$ is a constant function of $\epsilon$ for $\epsilon$ small enough that $B_\epsilon(x)\cap K_n=\emptyset$). So, $\mathcal{H}\mu_n$ is defined almost everywhere and, hence, so is $\mathcal{H}\mu$. add comment Not the answer you're looking for? Browse other questions tagged ca.analysis-and-odes fa.functional-analysis reference-request or ask your own question.
{"url":"http://mathoverflow.net/questions/59544/hilbert-transforms-of-measures?sort=oldest","timestamp":"2014-04-16T07:42:47Z","content_type":null,"content_length":"62022","record_id":"<urn:uuid:3f82c1ae-2b6a-48f2-b3e1-bef746fcc89e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Quotes, Sayings about Mathematics Math Quotes, Sayings about Mathematics - Page 2 One can always reason with reason. What is algebra exactly; is it those three- cornered things? - James Matthew Barrie “Obvious” is the most dangerous word in mathematics. - Eric Temple Bell Young man, in mathematics you don’t understand things. You just get used to them. - John von Neumann Math may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true. - Bertrand Russell The things around us is math.. we are using numbers. luv it!!!! Submitted by: Math is just simply following rules. The simple and complicated one. Just obey every details of it. Submitted by: I don’t believe in mathematics. - Albert Einstein Math is the tool specially suited for dealing with abstract concepts of any kind and there is no limit to its power in this field. - Paul Dirac Mathematics is the handwriting on the human consciousness of the very spirit of life itself. - Claude Bragdon Rule of math: If it seems easy, you’re doing it wrong. Mighty is geometry; joined with art, resistless. - Eurípides The essence of mathematics is its freedom. - Georg Cantor It is as if mathematics were the vegetables of the academic dinner: Everyone knows that they are good for you, but no one forces you to eat them. Each problem that I solved became a rule which served afterwards to solve other problems. - René Descartes That awkward moment when you finish a math problem and your answer isn’t even one of the choices. A thing is obvious mathematically after you see it. - Robert Daniel Carmichael Dear math, I am not a psychiatrist, please solve your own problems. Submitted by: Math is like love : a simple idea but it can get complicated. Every time I see a math word problem it looks like this: If I have 10 ice cubes and you have 11 apples. How many pancakes will fit on the roof? Purple because aliens don’t wear hats. Submitted by: The essence of math is not to make simple things complicated, but to make complicated things simple. - S. Gudder Twelve for 23… It doesn’t take a genius to see that’s under 50 percent. - Dick Vitale M- mental A- abuse T- to H- humans Submitted by: MAth Is Like A lOve THat So COMpLICAted. MAth iS lIKE A REalITY tHAT sO MANy PrOBlEmS tO sOLve. Submitted by:
{"url":"http://www.coolnsmart.com/math_quotes/page/2/?sortby=popularity","timestamp":"2014-04-24T03:00:44Z","content_type":null,"content_length":"36548","record_id":"<urn:uuid:feeaaf68-03f5-4f8a-af7f-3b52ad9ce7b1>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
The left shift operator accept plain or long integers as arguments. The arguments are converted to a common type. It shifts the first argument to the left by the number of bits given by the second A left shift by n bits is defined as multiplication with pow(2,n); for plain integers there is no overflow check so in that case the operation drops bits and flips the sign if the result is not less than pow(2,31) in absolute value. Negative shift counts raise a ValueError exception. To support this operator in your own classes, implement the __lshift__ method.
{"url":"http://www.effbot.org/pyref/operator-lshift.htm","timestamp":"2014-04-18T23:33:04Z","content_type":null,"content_length":"2111","record_id":"<urn:uuid:dea3b55d-a3f3-4229-993a-1eb70ca4f47e>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Games Statistics show that over the summer break, most students lose an average of two to three months of math computational skills they learned during the previous school year. This loss of learning can mean an academic setback for some children that can take weeks, and in some cases months, to remedy when the school bells ring in the fall. “U.S. Secretary of Education Arne Duncan has characterized the effects of “summer learning loss” as “devastating” and “well-documented.” And according to a 2009 report by McKinsey and Company, this backsliding represents a cost of as much as $670 billion to the nation’s economy.” For educators in Florida and Texas, the concerns over losing ground academically over the summer were critical. They decided to try a particular, specialized math video game. “As they compete, students build upon basic skills like multiplication, division, and fractions, which in later years will lead to mastery of everything from proportions, number lines, and adding and subtracting integers; to order of operations, evaluating expressions, employing function tables, and solving complex equations.” The video game that the educators in Texas and Florida were using is good but expensive. There are many online sites that host free math games, most of which are challenging, exciting, fun, and age-appropriate. That’s all well and good. But above all else, children crave time spent with their parents. Because learning is a social process, children learn best through fun games and activities that involve interaction with other There are plenty of fun math games that you and your children can play to help them retain their math skills. Seize this opportunity to teach them your values, and indulge them with your own undivided attention. A price cannot be put on the quality of the time you will have spent with your children. They will have fun while learning, and they will remember those times with greater fondness than the times they spent playing the educational computer game. And lastly but of great importance, among the obvious benefits of sitting down and playing a good game with your children is the opportunity that games provide to apply and solidify the mathematical reasoning and calculating skills your children are learning in school. When children play on-line or video games, parents may know how the child scores, but do they know where they made mistakes and why? Playing games with your child offers you, as a parent, a greater opportunity to know what your child’s strengths and weaknesses in mathematics are. Get a jump start on the coming school year! Sit down and play some math games with your children. Math Games = Learning + Summer Fun Mathematics is a subject that is absolutely necessary for functioning adequately in society. More than that, mathematics is a subject that should be more enjoyable than it sometimes is. The National Council of Teachers of Mathematics (NCTM) has identified the appreciation and enjoyment of mathematics as one of the national goals for mathematics education. This goal, coupled with the task of nurturing children’s confidence in their ability to apply their mathematical knowledge to solve real-life problems, is a challenge facing every parent today. Parents’ attitudes toward mathematics have an impact on children’s attitudes. Children whose parents show an interest in and enthusiasm for mathematics around the home will be more likely to develop that enthusiasm themselves. You Can Help Your Young Child Learn Mathematics, available in both English and Spanish, helps parents communicate the importance of mathematics to their children and become more involved in their children’s mathematical education. This pamphlet discusses ways that parents can help their children develop good study habits, and it presents activities through which families can make mathematics a part of their daily lives as they travel, cook, garden, and play games. It is essential that, over the summer vacation, parents create active and memorable learning experiences for their children in math. Playing math games with their children is an effective way for parents to fulfill part of their responsibility for developing their children’s abilities to do mathematics, while at the same time having fun together, and encouraging more positive attitudes toward Teachers know that the several months off for summer vacation sees considerable slippage in their students’ math skills. Kids who practice summer math will have an easier time transitioning back to school, while kids who don’t may lose a couple months of learning. Don’t let this summer be a math-avoidance time. Who says math has to be something your child dreads? It should, instead, be something the child looks forward to and thrives on. The trick is to teach your kids math by combining it with fun activities such as math games. All you need is a deck of cards and/or two dice! Math Games Develop Strategic Thinking Strategic thinking is one of the most important skills for children to develop. It requires the ability to observe, take in different pieces of information, analyze information, plan and analyze possible solutions, and choose the appropriate action. Strategic thinking is a way to solve problems. Every day we have to solve the problems. Every day, we need solutions. Problem solving is an essential skill in our professional, family, and social Games like bridge, chess, and backgammon are ideal for teaching strategic thinking. But learning bridge is more than fun and games; students who play, practice math and reasoning skills and show improvements on standardized tests. However, games such as bridge have complex rules that can take time to learn and master. Instead of using complicated games, there are many math games at every grade level that are much easier for children to learn and play. All of the math games are focused on providing engaging activities to entertain strategic mathematical thinking both inside and outside of the classroom. If you are a teacher or parent, I encourage you to have a look at the assortment of games. You will find many that will pique your interest and and help you develop strategic thinking and problem solving abilities in your students/children while having fun! Math Games and Math Homework The finding by the National Mathematics Advisory Panel declared math education in the United States “broken” and called on schools to focus on teaching fundamental math skills that provide the underpinning for success in high tech jobs. The panel said that students must be able to add and subtract whole numbers by the end of third grade and be skilled at adding and subtracting fractions and decimals by the end of fifth grade. One of the ways that we, as teachers, have traditionally given students more practice on their math skills is homework, and yet, eighty-four percent of kids would rather take out the trash, clean their rooms, or go to the dentist than do their math homework. So how can we help our students with their math skills and make math homework more engaging? Math games! More and more in my teaching career, I see that children no longer memorize their addition facts or multiplication tables. With the math curriculum as extensive as it is, teachers cannot afford to take the time to ensure that students learn the basic facts (sad, but true!). Parents are partners in the process and will offer greater opportunities for their children to succeed in math if they support the learning of the basics at home. Games fit the bill wonderfully! Games offer a pleasant way for parents to get involved in their children’s education. Parents don’t have to be math geniuses to play a game. They don’t have to worry about pushing or pressuring their children. All that parents have to do is propose a game to their child and start to play. Math games for kids and families are the perfect way to reinforce and extend the skills children learn at school. They are one of the most effective ways that parents can develop their child’s math skills without lecturing or applying pressure. When studying math, there’s an element of repetition that’s an important part of learning new concepts and developing automatic recall of math facts. Number facts can be boring and tedious to learn and practice. A game can generate an enormous amount of practice – practice that does not have kids complaining about how much work they are having to do. What better way can there be than an interesting game as a way of mastering them? Math Games and the Last Few Weeks of School The Big Test is over. Yeah! The long Memorial Day weekend is past, or soon will be. Sigh! You’re way beyond burned out and thinking mostly about summer. You can’t figure out how you’re going to get through the next few weeks. I have a great idea! Give a math game a try! Games can help children learn important mathematical skills and processes with understanding. Besides that they: • support concept development in math • meet math standards • offer multiple assessment opportunities which will help with report cards • are great for diverse learners such as English-language learners • encourage mathematical reasoning • are easy to prepare • are easy to vary for extended use and differentiated instruction • improve basic skills • enhance basic number and operation sense • encourage strategic thinking • promote mathematical communication • promote positive attitudes towards math Pick a skill set you know your students need to practice, and then find the right game that will offer practice with that skill set. The students will be engaged and quite willing to involve themselves in the repetitive practice needed to hone their skills. The What and Why of Math Games As a veteran elementary teacher and math specialist, I’m a big believer in using math games to teach math in the classroom. What is a math game? The most effective math games are those in which the structure and rules of the games are based on mathematical ideas, and winning a game is directly related to understanding the Why play math games? I will classify the intrinsic advantages of math games under three categories: Much of mathematics teaching revolves around giving students practice in newly acquired skills and reinforcing and revisiting already introduced skills. Games provide a way of taking the drudgery out of this practice of skills, and making that practice more effective. A game can generate much more practice than a workbook page, ditto, or flashcards. When playing a game, students don’t mind repeating certain facts or procedures over and over. In terms of gains in test results, research indicates that games are an effective way to retrain and reinforce children’s skills with basic number facts. Playing games demands involvement. Successful mathematics teaching depends on the active involvement of the learner. Piaget, Bruner, and Dienes suggest that games have a very important part to play in learning mathematics. Dienes even suggests that all mathematics teaching should begin with a game. Ways of Working Children need to talk about the math as they are learning it. Math games demand mathematical communication. This can be encouraged by having students work with a partner against two other students. Not only will there be meaningful conversation between the two partners, but between the four players. To play effectively the partners must co-operate. Thus playing games provides opportunities for children to work co-operatively – an important life skill. Games put pressure on students to work mentally. The ability to do math in your head is a skill that I don’t believe we spend enough time on with students. If you think about it, when presented with a mathematical situation, most people would first try to do it in their head. It takes awhile for students to gain confidence in their ability to do math in their head. You must show them a variety of ways to do math mentally, which will give them tools they can use, but it also builds their desire to think more creatively on their own. Within the normal classroom situation there few opportunities and little incentive for students to check and justify their work. Games offer a strong incentive for players to check each other’s mathematics, challenging moves which they think are unjustified. I encourage children to ask their opponents to “convince me” or “prove it”. Probability is one of the mathematics standards that is only slightly addressed. Games bring probability to the forefront. Students are offered many opportunities to think about probability through Probably the most powerful reason for introducting games into the mathematics classroom is the enthusiasm, excitement, and total involvement and enjoyment that children experience when playing math games. Students are highly motivated and totally immerse themselves in the games, and, in the end, their attitude toward math grows increasingly more positive. Games offer children the opportunity to experience success, satisfaction, enjoyment, excitement, enthusiasm, active involvement, and gain confidence in their mathematical abilities. Games teach or reinforce many of the skills that a formal curriculum teaches, plus a skill that formal learning sometimes, mistakenly, leaves out – the skill of having fun with math, of thinking hard and enjoying it. Fun and (Math) Games! Saturday School A Success At Lincoln Elementary reads the headline from Madison, Wisconsin. Even on a Saturday, and even on a day that felt like summer, dozens of students at one elementary school spent the morning in class. Every Saturday since the end of January, about 100 students have gathered for about two hours a week to get a little extra work done and to do so while having a little bit of fun. It is easy to assume that kids would want to be anywhere but school on a weekend morning, but this program is proving to be different. Instead of traditional instruction, students learn through playing games. It seems somehow sad to me that kids are allowed to have fun with math only on Saturdays. Why isn’t math engaging, challenging, and fun all the time? As a veteran elementary teacher, I do understand that teachers feel like they don’t have enough time to teach all of the content within the course of a school year. Why on earth would they ever want to add more material in the form of math games when they can’t seem to finish the assigned math textbook? Turns out that making time to incorporate math games in the classroom can lead to rich results. I’ve been using games to teach mathematics for many years, and here are some of the significant benefits of doing so: Benefits of Using Math Games in the Classroom • Meets Mathematics Standards • Easily Linked to Any Mathematics Textbook • Offers Multiple Assessment Opportunities • Meets the Needs of Diverse Learners (UA) • Supports Concept Development in Math • Encourages Mathematical Reasoning • Engaging (maintains interest) • Repeatable (reuse often & sustain involvement • Open-Ended (allows for multiple approaches & solutions) • Easy to Prepare • Easy to Vary for Extended Use & Differentiated Instruction • Improves Basic Skills • Enhances Number and Operation Sense • Encourages Strategic Thinking • Promotes Mathematical Communication • Promotes Positive Attitudes Toward Math • Encourages Parent Involvement Pick a skill that your students need to practice. One of the big ones is subtraction at any level. Kindergarteners through 6th graders find subtraction to be a challenge. Here’s a great double-digit subtraction game: 500 Shakedown What you need: 2 players 2 dice paper and pencil for each Each player starts with 500 points. Player #1 rolls the dice and makes the biggest two-digit number he/she can. Now he/she subtracts this number from 500. Example: Player #1 rolls a 2 and a 4 and makes 42. Now he/she subtracts 42 from 500. Player #2 rolls the dice and does the same. Players continue to alternate turns. The first person to reach 0 wins. There’s only one complication! When you throw a 1, the rules change. You don’t subtract. Instead you make the smallest two-digit number you can and add. Example: If the player throws a 1 and a 5, the smallest two-digit number is 15. So he/she adds 15 to the total. Variation: Start with 5,000 points and use three dice or start with 50,000 and use 4 dice. Teachers Taking Time for Math Games As an elementary school teacher, you probably feel like you don’t have enough time to teach all of your content within the course of a school year. Why on earth would you ever want to add more material in the form of math games when you can’t seem to finish your assigned math textbook? Turns out that making time to incorporate math games in your classroom can lead to rich results. One of the most immediate benefits of using math games is increasing student engagement. Games are engaging and maintain interest. Dittos or workbook pages rarely are. Teaching methods that stress rote memorization of basic number facts or algorithmic procedures are usually boring and do not require learners to participate actively in thought and reflection. Research has demonstrated that students learn more if they are actively engaged with the math they are studying. Contrast this with the reaction that many students have toward the textbook: either a lack of interest or an assumption that the assigned math/problems will be too difficult. Incorporating math games also allows you to differentiate instruction. Using math games which better match students’ abilities can help them build content knowledge and interact more successfully with the required text. Because math games require active involvement, use concrete objects and manipulatives, and are hands-on, they are ideal for all learners, particulary English language learners. Games provide opportunities for children to work in small groups, practice teamwork, cooperation, and effective communication. Children learn from each other as they talk, share, and reflect throughout game times. Language acquisition is meaningful and understandable. Your state’s mathematics standards are intended as a statement of what students should learn, or what they should have accomplished, at particular stages of their schooling. The goal of every state’s math standards is to engage students in meaningful mathematical problem-solving experiences, build math knowledge and skills, increase students’ ability to communicate mathematically, and increase their desire to learn mathematics. Those are the goals for math games, too! Specific content knowledge will vary according to the game students play and the connection to school-day learning and the state standards. A major goal for students in the elementary grades is to develop an understanding of the properties of and the relationships among numbers. One of the very effective ways teachers can reinforce the development and practice of number concepts, logical reasoning, and mathematical communication is by using math games. They are great for targeted practice on whatever standard the children need to meet. You will meet significantly more of your state’s grade- level mathematics standards by having your children play a game than will have been met by having them complete a ditto or a workbook page. No matter which textbook your district uses, games can easily be incorporated into instruction. Some textbook companies are “seeing the light” and have begun to implement games as a part of each Even if your textbook does not incorporate games, identify a skills need almost all your students have, and give a game a try. I guarantee it will be more of a learning experience for the students and more informative to you of what your students know and can do than a workbook page. Teachers, Students, and Math Games An Indiana math project focuses on helping kindergarten through sixth-grade teachers learn new techniques for teaching math. Neill, along with partner Tara Sparks, a first-grade teacher at Eastern Greene Elementary School, demonstrated at a concluding session how they have “excited kids through games created with playing cards and dice.” “The kids in my class have been taking them out at recess to play with them,” Sparks said in the press release. “They don’t want to put them down.” As a veteran elementary teacher, I have used math games for many years to engage children in math they do not want to stop doing – even if it means skipping recess!! Many times they begged to take the game out to the playground, and they were always excited to take it home and teach it to their families. When was the last time that happened to you with math or math homework? Math games in the classroom have many benefits: • meets mathematics standards • easily linked to any mathematics textbook • offers multiple assessment opportunities • meets the needs of diverse learners (Universal Access) • supports concept development in math • encourages mathematical reasoning • engaging (maintains interest) • repeatable (reuse often & sustain involvement • open-ended (allows for multiple approaches & solutions) • easy to prepare • easy to vary for extended use & differentiated instruction • improves basic skills • enhances number and operation sense • encourages strategic thinking • promotes mathematical communication • Promotes Positive Attitudes Toward Math • encourages parent involvement Children throw themselves into playing games the way they never throw themselves into filling out workbook pages or dittos. And games can, if you select the right ones, help children learn almost everything they need to master in elementary math. Good, child-centered games are designed to take the boredom and frustration out of the repetitive practice necessary for children to master important math skills and concepts. Games teach or reinforce many of the skills that a formal curriculum teaches, plus a skill that formal learning sometimes, mistakenly, leaves out – the skill of having fun with math, of thinking hard and enjoying it. Summer Math for the Fun of It! Summer is coming. What are you going to do to keep your child’s math skills from losing ground? Research has shown that there is clearly a case for use it or lose it with math. Teachers know that students return to school in the fall with a 1 to 2 month loss in math skills. Not good, and definitely not necessary. Carrie Launius, a veteran teacher, has this to say in her article titled, “Keeping Kids Busy During Summer”, “Card games like solitaire are very good for kids to practice mental math and math thinking as well as Gin, Rummy, or Spades”. It is essential that, over the summer vacation, parents create active and memorable learning experiences for their children in math. “Children learn more effectively when information is presented through the use of active learning experiences instead of passive ones”, reports Marilyn Curtian-Phillps, M. Ed. Parents often get caught up in having their child do workbook pages from some expensive book that they order or buy from a teacher store. Just give them authentic, real world experiences where learning can take place naturally. Math games are much more appropriate and engaging than workbooks, dittos, or even flashcards. Children throw themselves into playing games the way they never throw themselves into filling out workbook pages or dittos. And games can help children learn almost everything they need to master in elementary math. Good, child-centered games are designed to take the boredom and frustration out of the repetitive practice necessary for children to master important math skills and concepts. Playing math games is even more beneficial than spending the same amount of time drilling basic facts using flash cards. Not only are games a lot more fun, but the potential for learning and reasoning about mathematics is much greater, as well. In a non-threatening game format, children will be more focused and retention will be greater. Math games for kids and families are the perfect way to reinforce, sharpen, and extend math skills over the summer. They are one of the most effective ways that parents can develop their child’s math skills without lecturing or applying pressure. When studying math, there’s an element of repetition that’s an important part of learning new concepts and developing automatic recall of math facts. Number facts (remember those times tables?) can be boring and tedious to learn and practice. A game can generate an enormous amount of practice – practice that does not have kids complaining about how much work they are having to do. What better way can there be than an interesting game as a way of mastering them? Here’s an example of a great game for children who need to sharpen their multiplication skills: Salute Multiplication What you need: 2 players deck of cards, face cards removed Shuffle deck and place face down in a pile. Player #1 turns over the top card and places it face up on the table for all to see. Player #2 draws a card and does not look at it. Player 2 holds the card above his or her eyes so that player #1 can see it, but he can’t. Player #1 multiplies the 2 cards mentally and says the product out loud. Player #2 listens and decides what his or her card must be and says that number out loud. Example: Player #1 turns over a 6 for all to see. Without looking at it, player #2 puts a 4 on his forehead. Player #1 mentally multiplies 6 x 4 and says, “24”. Player #2 must figure out 6 x ? = 24. Both players decide if the response is correct. If it is, player #1 gets 1 point. Players reverse roles and play continues until one player has 10 points.
{"url":"http://www.mathgamesandactivities.com/tag/benefits-of-math-games/","timestamp":"2014-04-18T23:15:19Z","content_type":null,"content_length":"61806","record_id":"<urn:uuid:0d4c2bdb-6315-4900-af47-9b372db1b898>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
El Segundo Algebra Tutor ...It is my goal to not only teach my students the material, but to give them the tools needed to succeed in all their classes. With the right tools and encouragement from myself, teachers and parents, students are able to achieve great things.I earned my MBA from UCLA Anderson School of Management... 30 Subjects: including algebra 2, calculus, algebra 1, physics ...I also studied abroad at Beijing University (China's first college) and East China Normal University. My undergraduate curriculum was extremely heavy in advanced mathematics, English, applied sciences and of course my fields of study (Econ / Mandarin). I am new to Wyzant, but have been tutorin... 30 Subjects: including algebra 2, English, Spanish, algebra 1 ...I would describe myself as patient and dedicated to my students - I'm not afraid to stay a little late to make sure the student is ready for that math test the next day! My goal for each tutoring session is for the student to feel familiar with and confident about the material being covered. It's such a joy for me to see improvement in the students I work with. 10 Subjects: including algebra 1, Spanish, elementary math, grammar ...I am not someone who backs out of commitments and I have pursued my hobbies for years. I have been playing baseball since I was 8 and it is still something I make time for every week. There is nothing I love more than being out in this beautiful southern California weather and being part of a team. 11 Subjects: including algebra 2, algebra 1, physics, geometry ...I love teaching students the "easier way" to do problems. Many of the SATs problems aren't necessarily hard - they just require you to remember information you learned in the 5th or 6th grade. That being said, many of them are hard, and that's what I'm here for! 18 Subjects: including algebra 1, reading, geometry, algebra 2
{"url":"http://www.purplemath.com/El_Segundo_Algebra_tutors.php","timestamp":"2014-04-16T22:20:52Z","content_type":null,"content_length":"24051","record_id":"<urn:uuid:83e2908b-0875-4c6f-b237-cd1e892849a2>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
Fourier Transform Help August 4th 2011, 11:29 PM Fourier Transform Help I just need some clarification. I have been asked how to find the inverse rectangular pulse signal x(t) but I am not sure how to go about finding it. I guess what I have trouble understanding is the "inverse rectangular pulse signal". I know how to solve for just for the "rectangular pulse signal" through the Fourier Transform, but how does it change for the inverse? Many thanks.(Wait) August 5th 2011, 01:08 AM Re: Fourier Transform Help There are at least three definitions of the Fourier Transform. Which one are you using? I would interpret "inverse rectangular pulse signal" as "find the inverse Fourier Transform of the rectangular pulse signal". The definition of the inverse Fourier Transform depends on your definition of the Fourier Transform. August 5th 2011, 04:08 AM Re: Fourier Transform Help There are at least three definitions of the Fourier Transform. Which one are you using? I would interpret "inverse rectangular pulse signal" as "find the inverse Fourier Transform of the rectangular pulse signal". The definition of the inverse Fourier Transform depends on your definition of the Fourier Transform. I have been given several definitions of the Fourier Transform, but the one I think I am suppose to use is: Attachment 21970 So must use the following to solve my problem? Attachment 21971 August 5th 2011, 05:33 AM Re: Fourier Transform Help I have been given several definitions of the Fourier Transform, but the one I think I am suppose to use is: Attachment 21970 So must use the following to solve my problem? Attachment 21971 That is a fine Fourier Transform/Inverse Fourier Transform pair to use, and is one of the standard definitions. So what is the "rectangular pulse signal"? And I mean, what exactly is it? Can you write down a formula for it? August 5th 2011, 05:36 AM Re: Fourier Transform Help Oh yes, sorry. The formula for the inverse rectangular pulse signal x(t) is defined by: x(t)={ 0 |t|<a ={ 1 |t|>=a August 5th 2011, 05:40 AM Re: Fourier Transform Help Ok, great. Next question: are you allowed to use tables to compute your inverse Fourier Transform, or are you supposed to compute from the definition? August 5th 2011, 05:43 AM Re: Fourier Transform Help I don't think I am suppose to use the tables, so it going to have to be from the definition. :) August 5th 2011, 05:46 AM Re: Fourier Transform Help Right. So, can you write down the integral you must compute? August 5th 2011, 05:54 AM Re: Fourier Transform Help Okay, this is the bit where I am a little unsure of what I am doing, but here it goes: Attachment 21976 But this doesn't seem right... August 5th 2011, 06:04 AM Re: Fourier Transform Help I don't see any problem with it. You're using the definition of the Inverse Fourier Transform. The rectangular function eliminates the tails of your integration region, because it's zero there. So, crank through. What do you get? August 5th 2011, 06:14 AM Re: Fourier Transform Help Skipping a couple steps of working out, I got: Attachment 21978 I am also required to sketch it. Is there anything I need to look out for when sketching it? August 5th 2011, 06:21 AM Re: Fourier Transform Help Hmm. That's not what I get. Your final answer should definitely have an a in it. Can you show your steps? August 5th 2011, 06:30 AM Re: Fourier Transform Help Sorry, I realised that I got my omegas and 'a's mixed up, as well as leaving out an 'i.t' in the previous working out, but it should be fixed now: Attachment 21979 August 5th 2011, 06:37 AM Re: Fourier Transform Help I agree with everything except the last line. There shouldn't be an 'a' multiplying the sine. So, here's the next question: now that you've got this expression, can you write it in a more compact format? Hint: examine the sync function. August 5th 2011, 06:45 AM Re: Fourier Transform Help Oh yes, how silly of me! That 'a' shouldn't be there. Thanks for picking that up. Simplifying would get: Attachment 21980 whereby sinc is the sync function. Is this right? And then how do I sketch it?
{"url":"http://mathhelpforum.com/advanced-applied-math/185639-fourier-transform-help-print.html","timestamp":"2014-04-18T10:07:02Z","content_type":null,"content_length":"17815","record_id":"<urn:uuid:c8ab6271-ef25-4ba3-9d98-65f8ec46bb6a>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Plausible values for Factor Score Estimates Jonathon Little posted on Friday, December 03, 2010 - 4:47 pm Given the following factor model: CATEGORICAL = ANSWER1 ANSWER2 ANSWER3 ANSWER5 ANSWER6 ANSWER7 ANSWER8; CLUSTER = id; F1 BY answer1* answer2 answer3; F2 BY answer4* answer5 answer6; F1@1 F2@1; F1 WITH F2; F1 WITH F3; F2 WITH F3; answer1-answer5 with answer6; answer1-answer3 with answer5; answer1-answer2 with answer3; answer1 with answer2; I generated 5 multiple impuations using Bayesian estimation with plausible factor scores saved. Using 5 imputed data sets I ran a multilevel CFA using WSLMV repeating the same model that was used for the imputation. I want to examine how these plausible factor scores correspond to the behaviour of the True scores from my factor model. Where I am confused is in how to go about reproducing the within-level factor correlations from the CFA model using my plausible factor scores. I would like to see how closely the plausible factor score correlations correspond to the correlations between my latents in my within-level CFA. Any adivce on how to proceed would be very welcome? Jonathon Little posted on Sunday, December 05, 2010 - 8:01 am The line of syntax reading F2 WITH F3; is an error - please ignore Bengt O. Muthen posted on Sunday, December 05, 2010 - 10:53 am It seems like there are at least 3 ways to find the correlation between the 2 factors. (1) The 2-level Bayes run that you show gives a Bayes factor correlation estimate. (2) The WLSMV run based on the imputed data gives a WLSMV factor correlation estimate. (3) And the plausible values that your Bayes run generates can be used to compute the factor correlation. As for (3) you get N*n factor scores for each factor, where N is the number of plausible draws and n is your samle size. You get the factor correlation from those N*n factor scores. Also, take a look at this paper on our website: Asparouhov, T. & Muthén, B. (2010). Plausible values for latent variables using Mplus. Technical Report. Jonathon Little posted on Sunday, December 05, 2010 - 6:21 pm Thanks Bengt, I think I didnt explain myself clearly. I was interested in examining how the plausible factor score estimates correlated with their true scores, that is, the within-level factors from my CFA. Is there any reason why I could not enter the plausible factor scores into my multilevel CFA as covariates (as below) in order to see how well they correlate with their respective true factor scores? F1 BY answer1* answer2 answer3; F2 BY answer4* answer5 answer6; F1@1 F2@1 PlausibleF1@1 PlausibleF2@1; F1 WITH F2; F1 WITH PlausibleF1; (Correlation of Plausible factor socre with true factor score) F2 WITH PlausibleF2;(Correlation of Plausible factor socre with true factor score) PlausibleF1 WITH PlausibleF2;(correlation of plausible factor scores) answer1-answer5 with answer6; answer1-answer3 with answer5; answer1-answer2 with answer3; answer1 with answer2; PlausibleF1 WITH answer1 to answer6; PlausibleF2 WITH answer1 to answer6; Any adivce on how to proceed would be very welcome? Bengt O. Muthen posted on Monday, December 06, 2010 - 9:26 am I am not sure that a high correlation from that approach is indicative of high-quality plausible values. I would instead look at how the plausible values for the two factors relate to each other and to other variables in comparison to estimating those quantities directly in the model. If you are doing a Monte Carlo study you could you see how true scores (generated factor values) compare to plausible values. Jonathon Little posted on Monday, December 06, 2010 - 11:28 am I agree that looking at how the plausible values for the two factors relate to other variables in comparison to estimating those quantities directly in the model is useful. Inspite of having a good fitting model I need to provide day-to-day users of the instrument (eg,clinicians) with confidence that a scoring method has correlational accuracy, univocality and that the scoring method (factor score estimates or plausible values) correlate with their true scores. This is because even for highly determinate factors (which mine are) I could still end up chosing a poor set of factor score estimates - these need to be evaluated somehow. Factor score estimates - Nunnally: “If the multiple correlation [the proportion of determinacy in the factor] is less than .70, one is in trouble. In that instance the error variance in estimating the factor would be approximately the same as the valid variance. At a very minimum, one should be quite suspicious of factor estimates obtained with a multiple correlation of less than .50, because in that case less than 25 percent of the variance of factor scores can be predicted from the variables. Then one could not trust the variables as actually representing the factor.” (1978, p. 426). Is it that the method that I suggested is technically incorrect to answer the question, or is it that you think it is not a useful test or not as useful as the other method you proposed? Bengt O. Muthen posted on Monday, December 06, 2010 - 6:04 pm Are you interested in the quality of the scores in terms of the quality of individuals' scores or in terms of the quality of summary measures and relationships with other variables? Section 4.2 of this paper relates to the latter: I recommend reading von Davier M., Gonzalez E. & Mislevy R. (2009) What are plausible values and why are they useful? IERI Monograph Series Issues and Methodologies in Large-Scale Assessments. IER Institute. Educational Testing Service. If you can't find it, I am happy to send it. Jonathon Little posted on Monday, December 06, 2010 - 6:50 pm Both actually but for the moment, the former, the quality of scores in terms of individual's scores. I tried using regression weighting methods using the factor score coefficient matrix generated from a within-level polychoric correlation matrix with ML estimation and several unit weighted and course factor score methods but these all seem to perform very poorly using the method I pasted as syntax. The factor score estimates all correlate with their true scores no higher and often a lot lower than r=.65. (that's r not r-square). I take it by your question, plausible values may not be suitable for individual use? I suppose this would make sense if we're using imputations as these are designed to estimate population parameters and standard errors and not individual's responses. I will continue to read on the subject of plausible values but I'm still stuck with the problem of having a good approximate fit but no method for users to score the instrument with confidence. few people ever seem to bother evaluating their factor scoring methods for instruments so I wanted to make an extra effort to learn about it. Bengt O. Muthen posted on Tuesday, December 07, 2010 - 6:05 pm I think plausible values have an advantage over regular estimated factor scores in that each individual gets a distribution of values so that the uncertainty is clearly presented and the shape of the distribution is clear (an estimated factor score for an individual at most gets a SE). This is useful to have when you compute the variance over people and when you compute the relationship to other variables (see the section 4.2 of our imputation paper that I referred to). I don't know if I have seen that the average plausible value for a certain individual is any better than the estimated factor score for that individual. Factor scores (and plausible values) have different strengths and weaknesses for different uses as was discussed already by Tucker in the 50's. For example, ranking individuals is one use, regression another. For a recent discussion, see Skrondal, A. and Laake, P. (2001). Regression among factor scores. Psychometrika 66, 563-575. I think the evaluation approach you suggest may have a variation on the Heisenberger (?) problem - in evaluating the plausible values you alter the factor meaning. The factors become determined also by the F WITH Plausible; statements, which is not what you want. Again, to really see how well the factor scores or plausible values work in the model and intended use situation, you want to do a Monte Carlo study so that you can compare the true, generated scores with your estimated ones. Jonathon Little posted on Wednesday, December 08, 2010 - 11:11 pm Thank you Bengt, I appreciate your thoughts Back to top
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=next&topic=12&page=6392","timestamp":"2014-04-17T18:37:16Z","content_type":null,"content_length":"34180","record_id":"<urn:uuid:14973957-d297-4666-a183-f6d902ab69e2>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
My phraseology may not be the best here, but you get the idea. The proof is easy indeed, I could do it in my head (which is my excuse for not having it as polished as possible here). It is obvious that for any given natural numbers m and n, either is true, since √2 is irrational and therefore cannot be equal to a rational number, and thus the result follows from trichotomy. Now consider the inequality where the ? denotes the unknown direction of the inequality. Since m and n are natural numbers, their sum is nonzero and positive, and thus we may multiply each side my m + n without reversing the Once again, n is a natural number and thus is nonzero and positive, so we may divide each side by n and maintain the direction of the inequality: Now subtract m/n and √2 from each side to obtain which gives (after dividing through by 1 - √2, flipping the inequality, rationalizing, then flipping again so that we now have the original direction (these steps are left to the reader as an which gives the direction of the inequality as the opposite of the "initial condition" m/n ¿ √2, where ¿ denotes the original (and opposite) inequality direction, so that if m/n < √2 then (m + 2n)/(m + n) > √2 or if m/n > √2 then (m + 2n)/(m + n) < √2. Thus, either is true for natural numbers m and n.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=54254","timestamp":"2014-04-20T08:36:49Z","content_type":null,"content_length":"30390","record_id":"<urn:uuid:d4cfc270-015a-40e5-ab03-6c443105aaaf>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00565-ip-10-147-4-33.ec2.internal.warc.gz"}