angle which is made out of one of the pyramids' bases and the base OABC?

Thread from January 23rd 2010 (posts #1–#6):

Q: The figure shows a pyramid OABCD in a coordinate system with origin O. Determine the angle made between one of the pyramid's side faces and the base OABC. I hope this is understandable; my English isn't really that good, due to the fact that I'm from Denmark. The points are A(4,0,0), B(4,4,0), C(0,4,0), D(2,2,10) and O(0,0,0). Can someone please help me?

A: I've attached a rough sketch of the pyramid. The angle you are looking for is coloured in blue. To determine the value of the angle, use the indicated right triangle.

Q: You are kidding me, right? Is it that simple?

A: Onskyldt - but I'm never kidding anyone - ääähem ... mostly. "Is it that simple?" Yes. The pyramid is symmetric and the vertices have integer coordinates - so what did you expect? To prove my result you can use vectors: the angle between two planes is as large as the angle between the normal vectors of the planes.

Normal vector of the base: $\overrightarrow{n_{base}}=(0,0,1)$

Normal vector of OCD: $\overrightarrow{n_{OCD}}=(0,1,0) \times (2,2,10) = (10,0,-2)$

$\cos(\alpha)=\dfrac{\left|(0,0,1) \cdot (10,0,-2)\right|}{\sqrt{1} \cdot \sqrt{104}} = \dfrac{2}{\sqrt{104}}$

Now calculate $\alpha \approx 78.7°$ and you'll get exactly the same value as before.
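A quick numerical check of the reply (a minimal sketch with NumPy; my addition, not part of the original thread — all vectors and values are the ones quoted above):

```python
import numpy as np

# Normal of the base OABC (the plane z = 0) and of the face OCD,
# spanned by the direction of OC and the vector OD, as in the reply.
n_base = np.array([0.0, 0.0, 1.0])
n_ocd = np.cross([0.0, 1.0, 0.0], [2.0, 2.0, 10.0])   # -> (10, 0, -2)

cos_a = abs(n_base @ n_ocd) / np.linalg.norm(n_ocd)    # |n_base| = 1
print(np.degrees(np.arccos(cos_a)))                    # ~78.69 degrees

# Cross-check with the right-triangle approach: the apex D sits 10 units
# above the base and 2 units from the edge OC, so tan(alpha) = 10 / 2.
print(np.degrees(np.arctan(10.0 / 2.0)))               # ~78.69 degrees
```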
{"url":"http://mathhelpforum.com/geometry/125056-angle-made-out-one-pyramids-bases-base-oabc.html","timestamp":"2014-04-24T10:15:14Z","content_type":null,"content_length":"51308","record_id":"<urn:uuid:24d3f5de-2c19-44c1-936e-c15cfd4456ce>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
Chevy Chase Prealgebra Tutor

...I thoroughly understand key terms, concepts, and the processes involved in genetics. I have an extensive educational background in science (B.A. in Biological Sciences, M.S. in Biological Sciences, and in the 2nd semester of another Master's program, Biotechnology with a specialty in Bioinformati...
64 Subjects: including prealgebra, reading, English, chemistry

...I understand all the concepts well and can explain them in a manner in which they make sense. I lived in Argentina for 14 years. I am completely bilingual and I have tutored students in the...
23 Subjects: including prealgebra, Spanish, calculus, statistics

...Busy work solved! Let me know when you recognize this as a special case of... And let me know when you figure out who little Carl is... SAT math is delicious...like tiramisu! The questions are thoughtful and very well-crafted.
10 Subjects: including prealgebra, geometry, algebra 2, algebra 1

...Tennis: I have taken six years of tennis lessons and played in junior tournaments when I was younger. I teach all the basic tennis strokes as well as how to put them together in an effective and strategic way. I love all four strokes and have put them to the test in my three years as a competitiv...
13 Subjects: including prealgebra, calculus, GRE, writing

...I enjoy working with people from all nations; I am very good at understanding students with heavy accents. I love helping students improve their skills in reading, writing, and speaking the English language. I have taken the ASVAB myself, and I achieved a very high score.
22 Subjects: including prealgebra, English, writing, reading
{"url":"http://www.purplemath.com/Chevy_Chase_Prealgebra_tutors.php","timestamp":"2014-04-16T13:41:02Z","content_type":null,"content_length":"24149","record_id":"<urn:uuid:443bbaec-ba6a-48b2-b3ac-b61ff63cc2b4>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
Brighton & Hove City Council Hub ::: Learning ::: Secondary ::: Networks ::: Mathematics ::: EMT - Excellent Maths Teaching ::: 5. Tak Tiles * Brighton & Hove City Council Hub > Learning > Secondary > Networks > Mathematics > EMT - Excellent Maths Teaching > 5. Tak Tiles Tak Tiles These are a really exciting approach to Algebra through Geometry written by the late Geoff Giles. Each secondary school will have a class set of 25 tak tiles by September. 1. Algebra through geometry Written in the mid nineties by Geoff Giles. Some great ideas in here. I would strongly advise that you try the exercises in full yourself beforeembarking on teaching using these materials. 2. Lesson plans Some year 7 and year 10 lesson plans that I found online. These might be a good starting point. There is a powerpoint to accompany these. 3. NCETM A short introduction to using Tak Tiles to introduce algbra togther with some more powerpoints. 4. Simultaneous Equations A departmental workshop from NCETM and some lesson materials from Zeb Friedman which use a similar approach toTak Tiles to explore simultaneous equations.
{"url":"http://www.school-portal.co.uk/GroupHomepage.asp?GroupID=1155407","timestamp":"2014-04-17T18:25:43Z","content_type":null,"content_length":"40328","record_id":"<urn:uuid:0d180754-96e0-415d-8cfa-77d44657bbe0>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
Branch-Circuit, Feeder and Service Calculations, Part XXIX

The National Electrical Code (NEC) contains an introduction, nine chapters and eight annexes. Article 90 is the introduction to the NEC. This article contains specifications that are essential to all chapters and sections in the Code. The National Electrical Code states its purpose in 90.1(A): "The purpose of the Code is the practical safeguarding of persons and property from hazards arising from the use of electricity." Section 90.1 continues by covering the Code's adequacy, its intention and its relation to other international standards. While most articles have a scope that describes what the article covers, Article 90 explains what the entire Code covers.

While 90.3 explains the arrangement of the Code, 90.4 specifies its enforcement. Section 90.5 explains mandatory rules, permissive rules and fine print notes. This section also explains the use of brackets in the NEC. Brackets containing section references to another NFPA document are for informational purposes only and are provided as a guide to indicate the source of the extracted text. These bracketed references immediately follow the extracted text. Formal interpretation procedures are discussed in 90.6. Section 90.7 covers examination of equipment for safety. Future expansion and convenience are mentioned in 90.8(A) and 90.8(B). These sections state that limiting the number of circuits in a single enclosure minimizes the effects of a short circuit or ground fault in one circuit. The last section in Article 90 is 90.9, which explains the units of measurement in the Code.

Article 220 contains requirements for calculating branch-circuit, feeder and service loads. Last month's Code in Focus covered electric cooking equipment in 220.55. This month, the discussion continues with calculating loads for electric ranges and other cooking appliances in dwelling units.

The second note under Table 220.55 provides instructions for finding the maximum demand for ranges of unequal rating over 8¾ kW through 27 kW. In accordance with Note 2, find an average value of rating by adding together the ratings of all ranges to obtain the total connected load, and then dividing by the number of ranges. After finding the average value, the maximum demand in Column C must be increased 5 percent for each additional kilowatt of rating, or major fraction thereof, by which the rating of individual ranges exceeds 12 kW.

For example, what is the service demand load for five 13-kW, five 15-kW and five 17-kW household electric ranges? Start by adding together the ratings of all ranges to obtain the total connected load [(5 × 13) + (5 × 15) + (5 × 17) = 65 + 75 + 85 = 225]. Next, find the average value by dividing the total connected load by the total number of ranges (225 ÷ 15 = 15 kW). The average value for these 15 ranges is 15 kW (see Figure 1). Because we have found the average value for the 15 ranges, it is as if this is a new question: what is the service demand load for 15 15-kW household electric ranges? Find the percentage by which Column C must be increased: a 15-kW range exceeds 12 kW by 3 kW (15 – 12 = 3). Since Column C must be increased 5 percent for each additional kilowatt of rating above 12, the maximum demand listed in Column C for 15 ranges must be increased by 15 percent (3 × 5% = 15%). Find the demand in Column C for 15 ranges, and then multiply by 15 percent (Column C for 15 ranges is 30 kW). The increased amount is 4.5 kW (30 × 15% = 4.5 kW).
This increased amount must be added to the Column C demand load for 15 ranges (30 + 4.5 = 34.5 kW). The service demand load for five 13-kW, five 15-kW and five 17-kW household electric ranges is 34.5 kW (see Figure 2).

When applying Note 2, the range rating must not include a fraction of a kilowatt. The fraction must either be dropped or rounded up to the next whole kilowatt rating. For example, what is the service demand load for five 14-kW, five 16-kW and five 17-kW household electric ranges? Find an average value of rating by adding together the ratings of all ranges to obtain the total connected load. The total connected load is 235 kW [(5 × 14) + (5 × 16) + (5 × 17) = 70 + 80 + 85 = 235]. Now divide the total connected load by the number of ranges to find the average value of rating (235 ÷ 15 = 15.67 kW). The average rating of all 15 ranges is 15.67 kW. Notes 1 and 2 specify that the range rating must be increased for each kilowatt of rating, or major fraction thereof, by which the rating of the individual ranges exceeds 12 kW. A major fraction is .5 and larger. Since .67 is a major fraction, round the average rating of 15.67 up to 16 kW (see Figure 3).

Now find the service demand load for 15 16-kW ranges. Because Column C is based on 12-kW ranges, subtract 12 from 16 (16 – 12 = 4). Since 16 kW exceeds 12 kW by 4, multiply 4 by 5 percent to find the amount Column C must be increased (4 × 5% = 20%). The maximum demand listed in Column C for 15 ranges must be increased by 20 percent. The increased amount is 6 kW (30 × 20% = 6 kW). This increased amount must be added to the Column C demand load for 15 ranges (30 + 6 = 36 kW). The service demand load for five 14-kW, five 16-kW and five 17-kW household electric ranges is 36 kW (see Figure 4).

Dropping the fraction or rounding the fraction up to the next whole kilowatt rating should be done only once: after finding the average range rating, just before finding the percent of increase. It is not necessary to round up or drop the fraction of each individual range. For example, what is the service demand load for three 13.6-kW, three 14.9-kW and four 16.6-kW household electric ranges? Although these ranges have fractional kilowatt ratings, do not round the rating up or drop the fraction at this time. First, find an average value of rating by adding together the ratings of all ranges to obtain the total connected load: (3 × 13.6) + (3 × 14.9) + (4 × 16.6) = 40.8 + 44.7 + 66.4 = 151.9. Next, find the average value by dividing the total connected load by the total number of ranges (151.9 ÷ 10 = 15.19 kW). Since .19 is not a major fraction, drop it (see Figure 5).

Now find the service demand load for 10 15-kW ranges. Subtract 12 from 15 (15 – 12 = 3). The maximum demand listed in Column C for 10 ranges must be increased by 15 percent (3 × 5% = 15%). Find the demand in Column C for 10 ranges, and then multiply by 15 percent (Column C for 10 ranges is 25 kW). The increased amount is 3.75 kW (25 × 15% = 3.75 kW). This increased amount must be added to the Column C demand load for 10 ranges (25 + 3.75 = 28.75 kW). The service demand load for three 13.6-kW, three 14.9-kW and four 16.6-kW household electric ranges is 28.75 kW (see Figure 6).
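The Note 2 procedure is mechanical enough to script. Below is a small, unofficial Python sketch of the steps in the three examples above; the helper name `note2_demand_kw` is my own framing, and the Column C demand values used (25 kW for 10 ranges, 30 kW for 15 ranges) are the ones quoted in the article — a real calculation should read them from Table 220.55 itself:

```python
import math

def note2_demand_kw(ratings_kw, column_c_kw):
    """Demand load per Table 220.55, Note 2 (ranges over 8.75 kW through 27 kW).

    ratings_kw  -- nameplate ratings of the individual ranges, in kW
    column_c_kw -- Column C maximum demand for this count of ranges, in kW
    """
    # Step 1: average value of rating = total connected load / number of ranges.
    average = sum(ratings_kw) / len(ratings_kw)
    # Step 2: drop the fraction, or round up a major fraction (>= .5),
    # e.g. 15.67 -> 16 but 15.19 -> 15.
    average = math.floor(average + 0.5)
    # Step 3: increase Column C by 5 percent per kW the average exceeds 12 kW.
    excess_kw = max(average - 12, 0)
    return column_c_kw * (1 + 0.05 * excess_kw)

print(note2_demand_kw([13]*5 + [15]*5 + [17]*5, 30))        # 34.5 kW
print(note2_demand_kw([14]*5 + [16]*5 + [17]*5, 30))        # 36.0 kW
print(note2_demand_kw([13.6]*3 + [14.9]*3 + [16.6]*4, 25))  # 28.75 kW
```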
Next month's Code in Focus will continue the discussion of feeder and service load calculations.

MILLER, owner of Lighthouse Educational Services, teaches classes and seminars on the electrical industry. He is the author of "Illustrated Guide to the National Electrical Code" and "The Electrician's Exam Prep Manual." He can be reached at 615.333.3336, charles@charlesRmiller.com or www.charlesRmiller.com.
{"url":"http://www.ecmag.com/print/section/codes-standards/branch-circuit-feeder-and-service-calculations-part-xxix?qt-issues_block=1","timestamp":"2014-04-16T19:58:40Z","content_type":null,"content_length":"14526","record_id":"<urn:uuid:59ab9442-480a-432b-ac85-272450b92526>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
Undergraduate Engineering Handbook: Aeronautics and Astronautics Program

Program Requirements 2011-12

The principal purpose of the undergraduate interdisciplinary major in Aeronautics and Astronautics is to prepare students who are strongly interested in aerospace for subsequent graduate study in the field. In particular, it is expected that students completing this undergraduate curriculum can then satisfy the requirements for the degree of Master of Science in Aeronautics and Astronautics at Stanford University in one additional academic year or, alternatively, complete the B.S. in General Engineering and the M.S. in Aeronautics and Astronautics as a co-terminal program in five years. Another objective of the program is to provide an opportunity for interested undergraduates to become acquainted with the challenges of the aerospace field, with aeronautical and astronautical principles, and with the faculty who teach and do research in aeronautics and astronautics. Students interested in aerospace are also encouraged to consider the undergraduate minor in Aeronautics and Astronautics, which is described in the "Minors and Honors" section of this Handbook.

The departmental requirements of this major include a core set of courses required of every Aeronautics and Astronautics major; a set of depth areas from which two areas (four courses) must be chosen; and an engineering elective. Students are expected to consult closely with an advisor about how best to satisfy these and all other requirements of the major, to submit a program planning sheet when declaring the major, and to have a final plan (program sheet) approved by the advisor and department at least one quarter prior to graduation.

Mathematics: 24 units (Fr, So, Jr)
Mathematics through ordinary differential equations is a prerequisite to depth courses. Some statistics is mandatory, such as STATS 110, STATS 116, CME 106, or CS 109. For a list of acceptable courses, see the Mathematics Requirement section of this handbook. Required: Ordinary Differential Equations, satisfied by MATH 53 or CME 102 (same as ENGR 155A).

Science: 18 units (Fr, So)
For a list of courses approved by the School, see the Science Requirement section of this handbook. Aero/Astro depth courses rely on a strong foundation in classical physics, particularly mechanics. Chemistry is needed for students without high school chemistry and is recommended for others. Required: PHYSICS 41 and 43, plus one more advanced physics course.

Technology in Society: One course
See Chapter 3, Figure 3-3 for a list of courses that fulfill the Technology in Society requirement.

Engineering Fundamentals: Three courses minimum
• ENGR 30. Engineering Thermodynamics (required), 3 units, A,W,Sum
• CS 106A. Programming Methodology (recommended), 5 units, A,W,S,Sum
• Fundamentals Elective (may not use CS 106B or X if 106A is taken)

Departmental Requirements: 39 units
• AA 100. Introduction to Aeronautics & Astronautics, 3 units
• ME 70. Introductory Fluids Engineering, 4 units
• ME 131A. Heat Transfer, 3-4 units
• ENGR 15. Dynamics, 3 units
• ME 161. Dynamic Systems, 3-4 units OR PHYSICS 110. Intermediate Mechanics, 4 units
• CEE 101A. Mechanics of Materials, 4 units OR ME 80. Mechanics of Materials, 4 units
Directed Research & Writing in Aero/Astro*, 3 units • Depth Area I: Two courses from a department Depth Area (see Depth Area lists below), 6 units • Depth Area II: Two courses from a second Depth Area, 6 units • Additional engineering elective, 3 units *Students should discuss their AA190 (WIM) topic with their advisor & the Student Services Manager during their junior year. -- Depth Areas: Four courses; two from each of two topic areas + one elective Students should select four courses from the list below, two from each of two areas. One additional engineering elective (at least 3 units) should also be selected; this may be an additional course from any of the depth areas below, another course in Aeronautics and Astronautics, or an appropriate elective from another Engineering department. In any case, the choice of depth areas and engineering elective should be determined in consultation with the Aeronautics and Astronautics major advisor. Dynamics and Controls ENGR 105. Feedback Control Design, 3 units ENGR 205. Intro to Control Design Techniques, 3 units AA 242A. Classical Dynamics, 3 units AA 271A. Dynamics and Control of Spacecraft and Aircraft, 3 units AA 279. Spacecraft Mechanics, 3 units Systems Design AA 236A,B. Spacecraft Design, Spacecraft Design Laboratory, 3-5, 3 units AA 241A,B. Introduction to Aircraft Design, Synthesis, and Analysis, 3, 3 units Fluids and CFD AA 200. Applied Aerodynamics, 3 units AA 210A. Fundamentals of Compressible Flow, 3 units AA 214A/CME 206. Introduction to Numerical Methods for Engineering, 3 units AA 283. Aircraft & Rocket Propulsion, 3 ME 131B. Fluid Mechanics: Compressible Flow and Turbomachinery, 4 units AA 240A. Analysis of Structures I, 3 units AA 240B. Analysis of Structures II, 3 units AA 256. Mechanics of Composites, 3 units Plus free electives to bring total units to the 180 required for graduation. For AA 4-year plans and program sheets, go to the Navigation bar. Select from any year you are enrolled at Stanford. 1. Print your Stanford unofficial transcript from Axess. 2. Download the AA Program Sheet from the Program Sheets page. Complete the Program Sheet indicating how you plan to fulfill the major requirements – or do this when you meet with your advisor. Your program proposal may change as you progress in the program: submit revisions in consultation with your advisor. Submit a final Program Sheet at least two quarters before you graduate. 3. Complete the form below and take it, along with your transcript and Program Sheet, to the Aero/Astro Student Services Manager (Durand Building, room 250) for an academic advisor assignment. 4. Make an appointment with your advisor to discuss your program. Have your advisor sign the Program Sheet and the declaration form. 5. Return the signed forms to the Aero/Astro Student Services Manager. 6. Declare the Aero/Astro major on Axess!
{"url":"http://www.stanford.edu/group/ughb/cgi-bin/handbook/index.php?title=Aeronautics_and_Astronautics_Program&oldid=1035","timestamp":"2014-04-20T01:31:54Z","content_type":null,"content_length":"22638","record_id":"<urn:uuid:efe30009-d03e-47b1-b1b1-bbb3b79108ee>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00442-ip-10-147-4-33.ec2.internal.warc.gz"}
Dimension of the linear system of $\psi$-class on $\bar M_{0;n}$

Q: Consider the (Deligne–Mumford compactification of the) moduli space of complex rational marked curves $\overline M_{0;n}$. For each $i\in \{1,\ldots,n\}$ we can construct a line bundle $L_i$ whose fiber is the cotangent space at the $i$-th marked point, and the divisor corresponding to $L_i$ is the $\psi$-class. Studying the tropical counterpart of this moduli space, I have managed to compute the dimension of the linear system of $\psi_i^{trop}$, namely $L(\psi_i^{trop}) = 2\binom{n-1}{2} - 4$. I was wondering, does this formula hold in the complex case as well? I.e., what is the dimension of the linear system of the $\psi$-class for the moduli space of complex rational marked curves? One can see that the result agrees in the case $n=4$: the $\psi$-class for $n=4$ is just a degree-1 effective divisor, and the Riemann–Roch formula gives $L(\psi_i) = 2$. Thanks in advance.

A: I think that this is done by Kapranov in "Veronese curves and Grothendieck–Knudsen moduli space $M_{0,n}$". The projective dimension of this linear system should always be $n-3$ (so the linear dimension is $n-2$), and the map is a birational morphism which is the inverse of a sequence of blow-ups.
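A quick sanity check (my addition): at $n=4$ the asker's tropical formula, the Riemann–Roch count on $\overline M_{0;4}\cong \mathbb P^1$, and the answer's linear dimension $n-2$ all give the same number:

$$2\binom{n-1}{2}-4\,\Big|_{n=4} = 2\cdot 3 - 4 = 2, \qquad \ell(\psi_i) = \deg \psi_i + 1 = 2, \qquad n-2 = 2.$$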
{"url":"http://mathoverflow.net/questions/133621/dimension-of-the-linear-system-of-psi-class-on-bar-m-0n","timestamp":"2014-04-19T12:04:01Z","content_type":null,"content_length":"50765","record_id":"<urn:uuid:90fce06d-d5f5-4374-8b2d-e1bb1bdf2b7c>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
The Electrician's Pocket-book: The English Edition of Hospitalier's "Formulaire pratique de l'électricien"

Popular passages:
• The force of attraction or repulsion between two point charges is directly proportional to the product of the charges and inversely proportional to the square of the distance between them.
• A current of unit intensity is such that when one centimetre length of its circuit is bent into an arc of one centimetre radius, it exerts a force of one dyne on a unit magnetic pole placed at its center.
• Electromotive Force. ... force which, acting on a mass of 1 gramme for 1 second, gives it a final velocity of 1 centimetre per second.
• Hence the radius of gyration may be defined as the distance from the axis of rotation at which the whole mass of the body must be supposed concentrated, in order that the energy of rotation may be the same as it is actually.
• ... needle. The coil is then turned until it overtakes the needle, which once more lies parallel to the coil. Two forces are now acting on the needle and balancing each other, viz., the directive force of the earth's magnetism, and the deflecting force of the current flowing through the coil. At this moment, the strength of the current is proportional to the sine of the angle through which the coil has been turned. The values of the sines may be obtained from a table of natural sines. Such a table...
• The law that the strength or intensity of an unvarying electrical current is directly proportional to the electromotive force, and inversely proportional to the resistance of the circuit. The law does not hold for alternating currents unless modified so as to include the effects of counter electromotive force.
• The amount of an ion liberated at an electrode in one second is equal to the strength of the current multiplied by the "electro-chemical equivalent..."
• ... the mass of one cubic centimetre of distilled water at the temperature of 4°...
• Summing up, the heat produced in a conductor is proportional to the resistance of the conductor, to the square of the current, and to the time.
• Fig. 1133 illustrates a case of circular motion which differs in many features from the first two considered. Here the lines of force are parallel to each other and at right angles to the axis of rotation; consequently, the angle between the direction of motion and the direction of the lines of force changes at every instant. From this it follows that the EMF also varies during successive instants. Although the direction of motion of the conductor changes, at any one point it may be considered to...
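In modern notation (a paraphrase of the passages above, not the book's own symbols), the laws quoted read:

$$F \propto \frac{q_1 q_2}{r^2}, \qquad I = \frac{E}{R}, \qquad Q \propto I^2 R\, t, \qquad m = Z\, I\, t,$$

i.e. Coulomb's law, Ohm's law for steady currents, Joule heating, and Faraday's law of electrolysis, where $Z$ denotes the electrochemical equivalent mentioned in the passage.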
{"url":"http://books.google.com/books?id=ih5IAAAAIAAJ&dq=related:UOM39015065172663","timestamp":"2014-04-17T17:02:39Z","content_type":null,"content_length":"151104","record_id":"<urn:uuid:4562f81a-16fe-4014-b5a3-ae41c1ac4268>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: January 2002 — Re: Front end problems!

• To: mathgroup at smc.vnet.net
• Subject: [mg32176] Re: Front end problems!
• From: "Mirek Gruszkiewicz" <gruszkiewicz at ornl.gov>
• Date: Fri, 4 Jan 2002 05:03:50 -0500 (EST)
• Organization: Oak Ridge National Lab, Oak Ridge, TN
• References: <a0jm3k$322$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com

These bugs have been in the Mathematica front end as long as I can remember. I have reported them to WRI with short, explicit, repeatable examples. Basically the answer was "yeah, we know, but Mathematica's handling of carriage returns and newlines and spaces and comments etc. is so complex and delicate that we really are not sure how to deal with it". I was always puzzled how people apparently manage to use Mathematica without having constant problems with unexpected syntax errors. I came to the conclusion that a huge majority use Mathematica interactively and only create small cells with relatively short and few expressions that don't extend over more than one line and don't need any white space or comments in between. The others just assume they 'corrupted' something, probably due to their inferior understanding of the superior 'programming paradigms'. :-)

The fact is, whenever more complex cells and expressions are used, as in many physicochemical science/engineering problems, this problem will sooner or later crop up. This is scary, because who knows what can happen to the results. Most often it is just a syntax error message where there is no syntax error, without any clues as to how to find and fix it. Sometimes some junk gets appended to the result. By now I can usually manage to rearrange the cells to get rid of the errors. It's basically a matter of where you put your carriage returns. Unfortunately, fixing the errors is done at the cost of legibility, because the format Mathematica can generally digest in StandardForm is a terrible condensed mess (best with no carriage returns at all). Blank lines and comments are unsafe. It's also best to avoid printing the resulting abomination because somebody could witness it and hold it against you. Besides, legible printing of larger Mathematica programs is another can of worms (forget pretty). Altogether it becomes an ugly struggle if you have substantial chunks that cannot be divided into smaller cells, such as large Modules. Because of this I would not recommend Mathematica for building large programs. By the way, in some cases, putting parentheses () around the whole cell content can serve as a fix.

The related bug is that initialization cells which execute OK in the notebook produce broken Autosave package (*.m) files. For example, one cell (where the power is entered with Ctrl+^) will produce a bad *.m file, while another will produce a good *.m file [the two example cells were not preserved in the archive]. This is a simple example; there are others without any comments involved.

Also, a long time ago I tried to suggest to WRI compiling a bug list, but they said 'NO'. They said it is impossible to create a bug list for Mathematica, because Mathematica is the one uniquely complex piece of software with which such a crude and mundane concept as a bug list is not compatible, would not work and just cannot be done. One can make bug lists for this or that trivial software, but not for Mathematica. Can you capture the wind and put it in your pocket? ... this type of thing.

"A.K." <koru at coe.neu.edu> wrote in message news:a0jm3k$322$1 at smc.vnet.net...
> Dear Mr.
Mason, > It was a great relief to read your experiences and experiments about the > front end. Somehow, to this day I was truly convinced that I was doing > something that shouldn't be done in a mathematica common sense. Therefore, > rewriting codes would be tormenting. Due to these errors I have lost days > thinking that there is an error in my calculations. At least from now on > when I get absurd outputs I'll be able to look for a front end error with > tad more confidence. I'm also glad to hear that my mathematica coding > rituals weren't useless. I agree with the white space problems. They would > account for a number of my troubles. Hence, when I paste some piece of > I try to handle them line by line with the minimum possible amount of > space. It requires some patience but beats retyping. Also another odd > problem I had to battle last night was with font colors. I'm a rather > programmer, so color coding helps keep track of things easier. However, > night mathematica would not recognize a bracket because it was blue. Well, > turned blue before I could find the error. So for me mathematica coding is > B&W occupation from now on. > Once again thank you very much for your kind and very helpful reply. > Despite all of its frustrating front end problems I still believe that it > an amazing product. I'm sure we agree on that too. > Best wishes, > Aybek Korugan > Ps: It might be a good idea to create some sort of a mathematica front end > suspected (fuzzy) bugs and bad coding experiences knowledge base, since > all problems are easily reportable to WRI. > "Alan Mason" <swt at austin.rr.com> wrote in message > news:a0h8bl$stt$1 at smc.vnet.net... > > > > "A.K." <koru at coe.neu.edu> wrote in message > news:a0enal$pge$1 at smc.vnet.net... > > > Hello all, > > > > > > I have been using mathematica for years now. I intensely use versions > > and > > > 4. While using mathematica I've encountered a mysterious > > problem -mysterious > > > to me at least- that's been recurring independent of the version. > > > Whenever I use the notebook, after a certain time and effort of > > programming > > > with correct intermediate results, I start getting peculiar outputs > > > following some more additional programming. At this point of course I > > start > > > deleting any additional material to be able to go back to the closest > > > functioning state. Alas, I end up finding this state corrupted, and > > > truly odd outputs. > > > > > > This problem usually occurs after pasting some part of another > previously > > > used program. A while back I was advised to open the notebook in > another > > > editor and delete or add a line or two. But this remedy doesn't work > > either. > > > Hence, I end up rewriting the code. > > > > > > My major question is that, is there any other individual suffering > > this > > > type of phenomena or are these only my omens? > > > > > > And also are there any patches, service packs or upgrades etc. that > > > missing maybe? Such tools would be useful in either of the two > > that > > > run on NT 4.0. > > > > > > Best Regards. > > > > > > Aybek Korugan > > > > > Hello, > > Alas, the problems you report are not unique to you. Sometimes, the > > is obvious -- you insert a comment into a Module, hit Shift-Enter, and > a > > syntax error because the Frontend has lost track of the semicolon > preceding > > the comment (looks like a typical off-by-one error). But things are not > > always this clear. 
Sometimes after long complicated sessions, I've > > suspected Frontend errors (with white space and comments) may be > corrupting > > the validity of my results, but it's hard to pin down the error because > it's > > usually invisible on the screen. And even though I'm very careful about > my > > Mathematica hygiene -- about clearing variables, rules, etc. -- it's > rarely > > possible to exclude user error. For instance, just giving CircleDot, > > the Attribute Flat somewhere in the code and then forgetting to clear it > can > > cause a pattern involving CircleDot to suddenly fail to match later. > > internal state of Mathematica gets very complicated and can be virtually > > impossible to understand; when this occurs, it's time to start a new > > session. > > > > As it happens, just a few days ago I was able to catch Mathematica > > red-handed, and I give the short notebook below. Here there can be no > > question of user error. Mathematica isn't handling white space > > There may be other errors as well in longer notebooks. For > > packages the situation is even worse than for notebooks; all too often, > > package generated from a master notebook that runs perfectly will > > syntax errors which persist even after all comments have been stripped > > (great for the documentation, needless to say). There are also bugs and > > maddening inconsistencies in the keyboard-to-screen-to-file > > that any finished software program should have down cold. That such > > should persist even at this late stage could be considered disgraceful > > can be tolerated only because of Mathematica's unique virtues; WRI > > needs to understand what's going on here and fix these problems once and > for > > all. > > > > In the following notebook, Out[2] is wrong because of a whitespace bug. > > Since the two rules in In[2] and In[3] look alike on the screen, this is > > pernicious. Apparently, Mathematica is attempting to record additional > > formatting information in the notebook, a laudable effort. But it needs > to > > be done correctly, in a way that permits cutting and pasting without > error. > > I believe that cutting and pasting, together with occasional mishandling > of > > comments, is the source of most if not all of these Frontend errors. > > Because of the Mathematica-centric approach that WRI has had to adopt > > its notebooks, the parsing and analysis are considerably more difficult > than > > with a standard Windows text editor, but the difficulties are presumably > not > > insuperable. > > In[1]:= > > \!\(test\ = \ \ D\_z\[SmallCircle]\((y\ D\_x)\)\) > > > > Out[1]= > > \!\(D\_z\[SmallCircle]\((y\ D\_x)\)\) > > > > In[2]:= > > \!\(\(\(\[IndentingNewLine]\)\(test\ //. \ \ \(D\_u\)\__\[SmallCircle]\ > > \((c_\ \ D\_v_)\)\ \[RuleDelayed] \ \ c\ sc[D\_u, \ > > D\_v]\ + \ \ \(\(CircleDot[D\_u, \ c]\) \(D\_v\)\(\ > \)\)\)\)\) > > > > Out[2]= > > \!\(D\_z\[SmallCircle]\((y\ D\_x)\)\) > > > > In[3]:= > > \!\(test\ //. \ > > D\_u_\[SmallCircle]\((c_\ D\_v_)\)\ \[RuleDelayed] \ > > c\ sc[D\_u, \ D\_v]\ + \ CircleDot[D\_u, \ c]\ D\_v\) > > > > Out[3]= > > \!\(y\ sc[D\_z, D\_x] + D\_z\[CircleDot]y\ D\_x\) > > > > Alan > > > > PS. Actually, the rules don't look *exactly* the same in the notebook -- > > the first is preceded by a newline, and there's an extra space before > > first D, for example. However, if I delete this newline, and all the > extra > > spaces, the result looks identical to In[3] but it still doesn't work! 
> It > > looks like some effort has been made to permit better control over the > > formatting of notebooks, but the details aren't quite right. In any > > it's normal for users to consider rules that differ only by white space > > be semantically identical. > > > > > > > >
{"url":"http://forums.wolfram.com/mathgroup/archive/2002/Jan/msg00012.html","timestamp":"2014-04-19T12:35:29Z","content_type":null,"content_length":"45896","record_id":"<urn:uuid:557ef9d8-ec4f-4555-95ac-1685eb466d76>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
Solve for x the following equation: logx (x+1) = 2, where x is the base of the logarithm.

We'll impose the constraints of existence of the logarithm: x belongs to (0, +inf.) - {1}. Now we'll solve the equation by taking the antilogarithm: x + 1 = x^2. We'll use the symmetric property and shift all terms to one side: x^2 - x - 1 = 0. We'll apply the quadratic formula: x1 = [1 + sqrt(1 + 4)]/2, so x1 = (1+sqrt5)/2, and x2 = (1-sqrt5)/2. Since the second value of x does not belong to the interval of admissible solutions, we'll keep only a single value of x, namely x = (1+sqrt5)/2.
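A one-line numerical check of the retained root (a sketch of mine, not part of the original answer):

```python
import math

x = (1 + math.sqrt(5)) / 2        # the admissible root, ~1.618
print(math.log(x + 1, x))         # ~2.0, so log_x(x + 1) = 2 holds
```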
{"url":"http://www.enotes.com/homework-help/solve-x-following-equation-logx-x-1-2-x-base-262709","timestamp":"2014-04-21T00:18:59Z","content_type":null,"content_length":"25078","record_id":"<urn:uuid:57d8315c-fdac-4a04-8818-6c1f86f56617>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the slope and y-intercept of the graph of -x + 2y = 6?

a. -1/2, 1
b. 1, 6
c. 1, 3
d. 1/2, 3
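The archived page contains no reply, so for completeness: solving for $y$,

$$-x + 2y = 6 \;\Longrightarrow\; y = \tfrac{1}{2}x + 3,$$

giving slope $\tfrac{1}{2}$ and $y$-intercept $3$, i.e. choice (d).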
{"url":"http://openstudy.com/updates/50b420ebe4b061f4f8ff279b","timestamp":"2014-04-19T12:54:17Z","content_type":null,"content_length":"41717","record_id":"<urn:uuid:3e28b545-bb00-4661-aebf-839b9e84c2ec>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
2006.38: D. Steven Mackey, Niloufer Mackey, Christian Mehl and Volker Mehrmann (2006). Structured Polynomial Eigenvalue Problems: Good Vibrations from Good Linearizations. SIAM J. Matrix Anal. Appl., 28 (4), pp. 1029-1051. ISSN 0895-4798. DOI: 10.1137/050628362

Abstract: Many applications give rise to nonlinear eigenvalue problems with an underlying structured matrix polynomial. In this paper several useful classes of structured polynomials (e.g., palindromic, even, odd) are identified and the relationships between them explored. A special class of linearizations that reflect the structure of these polynomials, and therefore preserve symmetries in their spectra, is introduced and investigated. We analyze the existence and uniqueness of such linearizations, and show how they may be systematically constructed.

Item Type: Article
Keywords: nonlinear eigenvalue problem, palindromic matrix polynomial, even matrix polynomial, odd matrix polynomial, Cayley transformation, structured linearization, preservation of eigenvalue symmetry
Subjects: MSC 2000 > 15 Linear and multilinear algebra; matrix theory; MSC 2000 > 65 Numerical analysis
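To see the kind of spectral symmetry the abstract refers to, here is a small NumPy sketch (my illustration, not taken from the paper): a T-palindromic quadratic $P(\lambda)=\lambda^2 A + \lambda B + A^T$ with $B=B^T$ has a spectrum closed under $\lambda \mapsto 1/\lambda$. The check uses an ordinary companion linearization rather than one of the paper's structure-preserving ones:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
B = B + B.T                          # symmetric middle coefficient

# First companion linearization of P(lam) = lam^2*A + lam*B + A.T:
# [[-B, -A.T], [I, 0]] z = lam * [[A, 0], [0, I]] z, with z = [lam*x; x].
Z, I = np.zeros((n, n)), np.eye(n)
L = np.block([[-B, -A.T], [I, Z]])
M = np.block([[A, Z], [Z, I]])
lam = np.linalg.eigvals(np.linalg.solve(M, L))

# The 2n eigenvalues should come in (lam, 1/lam) pairs.
print(np.allclose(np.sort_complex(lam), np.sort_complex(1 / lam)))  # True
```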
{"url":"http://eprints.ma.man.ac.uk/671/","timestamp":"2014-04-18T18:25:03Z","content_type":null,"content_length":"9810","record_id":"<urn:uuid:4e7d4f66-80c5-4e7b-ae96-043470d3bbc7>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
Stamford, CT Algebra 2 Tutor — Find a Stamford, CT Algebra 2 Tutor

...Also, each student has their own capabilities and their own speed of learning, so I am also skilled at recognizing a student's capability and I work accordingly. I always had fun tutoring and I use tutoring techniques that keep the student awake and lively. I have had experience...
14 Subjects: including algebra 2, chemistry, reading, geometry

...I also tutor Science, History, Astronomy, and Grammar. I excelled at high school math, getting As. I scored a 100 on the former Sequential Math Course III Regents, which was the equivalent of the current Algebra 2 and Trigonometry course.
29 Subjects: including algebra 2, chemistry, calculus, biology

My name is Amarachi, and I enjoy teaching Math, Science and Business. Tutoring provides me with a continuous opportunity to improve my learning and teaching skills because it constantly challenges me to adapt to new skill sets in order to best serve the needs of my students. I am well equipped to be a Math, Science and Business tutor.
21 Subjects: including algebra 2, chemistry, calculus, geometry

...In high school I took piano lessons at the San Francisco Conservatory of Music, and I continued intensive private study through college. In college, I taught piano lessons as a summer job. I was hired as the music director of the Playground Sessions software company for my extensive knowledge of piano pedagogy, technique, and repertoire.
30 Subjects: including algebra 2, reading, Spanish, writing

...I'm currently taking a break to pursue dance and theater in the Big Apple. I'm a lifelong learner, always seeking an opportunity to discover new aspects of the world and people. I love science but also excel in math and English.
26 Subjects: including algebra 2, English, reading, SAT math
{"url":"http://www.purplemath.com/Stamford_CT_Algebra_2_tutors.php","timestamp":"2014-04-16T07:53:13Z","content_type":null,"content_length":"24142","record_id":"<urn:uuid:f4be73a9-af7e-4e48-8213-cc405a14203e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
Wee Teck Gan How to reach me Mathematics Department National University of Singapore Block S17, 10 Lower Kent Ridge Road Singapore 119076 E-mail: matgwt(AT)nus(\cdot)edu.sg Phone: +65-6516-2739 (office) Fax: +65-67795452 • Fall 2003: Math109: Mathematical Reasoning • Winter 2004: Math20C: Calculus and Analytic Geometry • Fall 2004: Math103A: Applied Modern Algebra • Winter 2005: Math207B: Elliptic Curves and Modular Forms • Spring 2006: Math104A: Number Theory and Math200C: (Graduate) Algebra • Fall 2006: Math104A: Number Theory and Math251A: Lie Groups • Winter 2007: Math251B: Lie Groups • Spring 2007: Math205: L-functions and Modular Forms • Winter and Spring 2008: Math203: Algebraic Geometry • Fall 2008: Math140A: Foundations of Analysis, and Math204: Analytic Number Theory • Winter 2010: Math20F: Linear Algebra, and Math202B: Applied Algebra (aka Representation theory of finite groups) • 2010-2011: Semester 1: MA3265: Introduction to Number Theory; Semester 2: MA6292 : Topics in Mathematics II (Trace formula for GL(2)) • 2011-2012: Semester 1: MA2202S: Algebra I S; Semester 2: MA 5204: Graduate Algebra II • 2012-2013: Semester 1: MA 4203: Galois theory; Semester 2: MA 5204: Graduate algebra II Papers and Preprints • The Gross-Prasad conjecture and local theta correspondence (with A. Ichino), pdf • Twisted Bhargava cubes (with G. Savin). pdf • Recent progress on the Gross-Prasad conjecture (a survey talk given at the annual meeting of VIASM, July 2013), pdf • A Langlands program for covering groups? (a talk given at the Sixth International Congress of Chinese Mathematicians, July 2013), pdf • The regularized Siegel-Weil formula (the second term identity) and the Rallis inner product formula (with Y. Qiu and S. Takeda), to appear in Inventiones. pdf • The local Langlands conjecture for GSp(4) II: the case of inner forms (with W. Tantono), to appear in American J. of Math pdf • The local Langlands conjecture for GSp(4) III: stability and twisted endoscopy (with P. S. Chan), to appear in J. of Number Theory (memorial issue for Steve Rallis) pdf • Arithmeticity for periods of automorphic forms (with A. Raghuram), to appear in the proceedings of 2012 International Colloquium ``Automorphic Representations and L-functions" at Tata Inst. pdf • On a conjecture of Sakellaridis-Venkatesh on the unitary spectrum of spherical vareities (with R. Gomez), to appear in a volume in honor of N. Wallach pdf • Formal degrees and local theta correspondence (with A. Ichino), to appear in Inventiones pdf • The Shimura correspondence a la Waldspurger (Notes of a short course given at the Postech Theta Festival) pdf • Representation of metaplectic groups, Fifth International Congress of Chinese Mathematicians. Part 1, 2, 155-1“170, AMS/IP Stud. Adv. Math., 51, pt. 1, 2, Amer. Math. Soc., Providence, RI, 2012 • Doubling zeta integrals and local factors for metaplectic groups, Nagoya Math. Journal 208 (2012), 67-95 (a volume in memory of Hiroshi Saito) pdf • Representations of metaplectic groups II: Hecke algebra correspondences (with G. Savin), Represent. Theory 16 (2012), 513-539 pdf • Representations of metaplectic groups I: epsilon dichotomy and local Langlands correspondence (with G. Savin), Compositio Mathematica, volume 148, issue 06, pp. 1655-1694 pdf • Restriction of representations of classical groups: examples (with B. H. Gross and D. Prasad), Asterisque 346, 111-170. 
pdf • Symplectic local root numbers, central critical L-values and restriction problems in the representation theory of classical groups (with B. H. Gross and D. Prasad), Asterisque 346, 1-110. pdf • Bessel and Fourier-Jacobi Models of the Weil Representation, in preparation. • A regularized Siegel-Weil formula for exceptional groups, in Arithmetic geometry and automorphic forms, 155-185 (a volume in honor of Steve Kudla),2, Adv. Lect. Math. (ALM), 19, Int. Press, Somerville, MA, 2011 pdf • On endoscopy and the refined Gross-Prasad conjecture for (SO_5,,SO_4) (with A. Ichino), J. Inst. Math. Jussieu 10 (2011), no. 2, 235-324 pdf • The regularized Siegel-Weil formula: second term identity and non-vanishiing of theta lifts from orthogonal groups (with S. Takeda), J. Reine Angew. Math. 659 (2011), 175-244. pdf • Theta Correspondences for GSp(4) (with S. Takeda), Represent. Theory 15 (2011), 670–718. pdf • The local Langlands conjecture for GSp(4) (with S. Takeda), Ann. of Math. (2) 173 (2011), no. 3, 1841-1882 pdf • The local Langlands conjecture for Sp(4) (with S. Takeda), Int. Math. Res. Not. IMRN 2010, no. 15, 2987-3038. pdf • On Shalika periods and a theorem of Jacquet-Martin (with S. Takeda), American J. of Math. 132 (2010), 475-528, pdf • Trilinear forms and triple product epsilon factor, International Mathematics Research Notices 2008 2008: rnn058-15, pdf • Restrictions of Saito-Kurokawa representations (with N. Gurevich), With an appendix by Gordan Savin. Contemp. Math., 488, Automorphic forms and L-functions I. Global aspects, 95–124, Amer. Math. Soc., Providence, RI, 2009. pdf • CAP representations of G_2 and the Spin L-function of PGSp_6 (with N. Gurevich), Israel J. Math. 170 (2009), 1–52. pdf • A Siegel-Weil formula for automorphic characters: cubic variation of a theme of Snitz, J. Reine Angew. Math. 625 (2008), 155-185. pdf • The Spin L-function of quasi-split D_4 (with J. Hundley), in International Math Research Papers Vol. 2006 (Article ID 68213), 1-74.. pdf • The Saito-Kurokawa space of $PGSp_4$ and its transfer to inner forms, in Eisenstein series and applications, 87–123, Progr. Math., 258, Birkhäuser Boston, Boston, MA, 2008. pdf • Non-tempered Arthur packets of $G_2$: liftings from $\tilde{SL}_2$ (with N. Gurevich), American Journal of Math 128, No. 5 (2006), 1105-1185. pdf • The Rallis-Schiffmann lifting and Arthur packets of $G_2$ (with N. Gurevich, an announcement of the results of the above long paper), Quarterly Journal of Pure and Applied Math Vol 1, No 1 (2005), 109-126. pdf • Multiplicity formula for cubic unipotent Arthur packets, Duke Math Journal 130, no. 2 (2005), 297-320. pdf • The mass of unimodular lattices (with M. Belolipetsky), J. of Number Theory 114 (2005),221-237. pdf • Non-tempered Arthur packets of $G_2$ (with N. Gurevich), in proceedings of Rallis' 60th birthday conference: "Automorphic Representations, L-functions and Applications:Progress and Prospects", 129-155. pdf • Uniqueness of Joseph ideal (with G. Savin), Math. Research Letters 11 (2004), No. 5-6, 589-598. pdf • On minimal representations: definitions and properties (with Gordan Savin), Representation Theory Vol 9 (2005), 46-93. pdf • Real and global lifts from $PGL_3$ to $G_2$ (with G. Savin), IMRN 2003, Vol. 50 (2003). pdf • Endoscopic lifts from $PGL_3$ to $G_2$ (with Gordan Savin), Compositio Math 140, No. 3 (2004), 793-808. pdf • Cubic unipotent Arthur parameters and multiplicities of square-integrable automorphic forms (with N. Gurevich and D.H. Jiang), Invent. Math. 
149 (2002), 225-265. pdf • Schemas en groupes et immeubles des groupes exceptionels sur un corp locale. Deuxieme partie: les groupes $F_4$ et $E_6$ (with J.-K. Yu), Bulletin Math. Soc. France 133 (2005), no. 2, 159--197. • Schemas en groupes et immeubles des groupes exceptionels sur un corp locale. Premiere partie: le groupe $G_2$ (with J.-K. Yu), Bull. Math. Soc. France 131 (2003), 307-358. ps • Equidistribution of integer points on a family of homogeeneous varieties: a problem of Linnik (with H. Oh), Compositio Math. 323 (2003), 323-352. pdf • Fourier coefficients of modular forms on $G_2$ (with B. H. Gross and G. Savin), Duke Math. Journal 115 (2002), 105-169. ps • On an exact mass formula of Shimura, (with J. Hanke and J.-K. Yu), Duke Math. J. 107 (2001), 101-131. pdf • A Siegel-Weil formula for exceptional groups, Crelle 528 (2000), 149-181. pdf • Group schemes and local densities (with J.-K. Yu), Duke Math. J. 105 (2000), 497-524. pdf • Integral embeddings of cubic norm structures (with B. H. Gross), J. Algebra 233 (2000), no. 1, 363--397. pdf • Commutative subrings of certain non-associative rings (with B. H. Gross), Math. Ann. 314 (1999), no. 2, 265--283. pdf • An automorphic theta module for quaternionic exceptional groups. Canad. J. Math. 52 (2000), no. 4, 737--756. pdf • The dual pair $G\sb 2\times {\rm PU}\sb 3(D)$ ($p$-adic case) (with G. Savin), Canad. J. Math. 51 (1999), no. 1, 130--146. pdf • Exceptional Howe correspondences over finite fields. Compositio Math. 118 (1999), no. 3, 323--344. pdf • Haar measure and the Artin conductor (with B. H. Gross), Trans. Amer. Math. Soc. 351 (1999), no. 4, 1691--1704. pdf • A note on Kottwitz's invariant $e(G)$. J. Algebra 208 (1998), no. 1, 372--377. pdf • Modular forms of level $p$ on the exceptional tube domain (with H. Y. Loke), J. Ramanujan Math. Soc. 12 (1997), no. 2, 161--202. pdf Talks and Course Notes
{"url":"http://www.math.nus.edu.sg/~matgwt/","timestamp":"2014-04-17T06:40:55Z","content_type":null,"content_length":"14352","record_id":"<urn:uuid:aa319cda-9571-4d54-99eb-daf89ebe001a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics Reading List for High School Students

Via Slashdot, I came across the following question:

Troy writes: "I'm a high school math teacher who is trying to assemble an extra-credit reading list. I want to give my students (ages 16-18) the opportunity/motivation to learn about stimulating mathematical ideas that fall outside of the curriculum I'm bound to teach. I already do this somewhat with special lessons given throughout the year, but I would like my students to explore a particular concept in depth. I am looking for books that are well-written, engaging, and accessible to someone who doesn't have a lot of college-level mathematical training. I already have a handful of books on my list, but I want my students to be able to choose from a variety of topics. Many thanks for all suggestions!"

There are some good suggestions in the comments, and some not so good ones. Surely our wise and mathematically sophisticated readers will be able to help. Add what you can there, and in the comments here if you like.

I'd suggest "Introduction to Mathematical Reasoning" by Peter Eccles. It's a textbook that helps bridge the gap between high school and college-level mathematics. It's a great book that teaches students how to write up arguments in a logically rigorous manner; another book in the same vein would be "Tools of the Trade" by Paul Sally Jr. Also, I'd suggest anything by H. S. M. Coxeter. Another book, also focusing on geometry/topology, would be "Intuitive Topology" by Prasolov. It's a book used for students at Moscow Math School 57, and the first couple of sections are pretty readable–just playing around with knots and links. I think it's a nice book, but it might be a little "unstructured" for students. Also, there are two classics I'd recommend: The Moscow Puzzles (Kordemsky) and Mathematical Circles: Russian Experience (Fomin). Both books are excellent. They cover a wide array of mathematical topics disguised as brain teasers, puzzles and riddles.

1. After Algebra II, the high school math track moves toward calculus. Fine. You need calculus for any kind of engineering or science. But during the sixteenth century, mathematics moved in the direction of more advanced algebra. People figured out how to solve cubic and then quartic equations. When no one could crack general fifth-degree equations, Abel and Galois investigated the roots of such equations and determined the impossibility of solution by radicals. I don't know of any general book, but I bet a good student could trace this development by Googling.
2. Diophantine equations.
3. There are some good books of hard-to-prove geometry theorems.
4. Fermat's Last Theorem – among other things this book makes students aware that there are unsolved problems.

Oh yeah. The Code Book, by Simon Singh. Maybe it's not all strictly math, but it is a good read about codes and ciphers.

The list may vary depending on the goal. The age group (16-18) doesn't really mean much without knowing the students' goals and at what level they are being taught. Are they the kind who would be satisfied by merely passing remedial-type math courses? Or are they future engineers and scientists? Few high school students get intrigued by pure maths, though they actually get intrigued by something like physics when the right buttons get pushed. For many it's easier to "get it" when there are intuitive contexts, just like dinosaurs, astronomy, and such are a great tool to lure the scientifically challenged into learning some science.
George Polya’s “How to Solve It” remains one of my all time favorites. Accessible to students with a rudimentary knowledge of geometry, it nevertheless retains its power for undergraduate and graduates students alike. Also, Imre Lakatos’s “Proofs and Refutations” was a “must have” according to one of my CS professors.. • http://localseasoning.blogspot.com/ I highly recommend The Symmetry of Things, by John H. Conway, Heidi Burgiel, and Chaim Goodman-Strauss. The first section does lots of real math (classifies compact surfaces with boundary, and therefore orbifolds with nonnegative curvature), but is completely accessible to a math-inclined reader who doesn’t know anything. • http://rigtriv.wordpress.com Flatland, How to Solve it. The lady drinking tea (about stats) by David Salsburg Passionate Minds and E=MC2 (both by David Boadanis) Feynman’s books on physics S.L. Loney: Plane Trigonometry Part I S.L. Loney: Co-ordinate Geometry Hall & Knight: Higher Algebra These were classics that I found very helpful as a high-school student. Classics that are over a century old and still in publication. There was one more book for calculus that engineering students in the US use in their freshman years, but I forget which one – had a russian author. • http://astrohacker.com/ I know that sounds ridiculous, but I learned a lot about set theory on Wikipedia that helped immensely in the courses I later took. There are many great books, but Wikipedia is an excellent way to browse a ton of mathematics freely and easily. In my post at 8:33 PM, I mistakenly referred to the book Fermat’s Enigma, by Simon Singh, as “Fermat’s Last Theorem.” I have to agree with “Flatland” – it’s an interesting blend of philosophy and mathematics. Regardless of the subtext of the book, it’s a mind opening story. “Fundamentals of Mathematics” by Moses Richardson (MacMillan, Various editions 1939-1966, now out of print. There is also a 1973 edition co-authored with – I think – his son.) I have the 1966 edition. It is a survey, yet one not only of astounding breadth, but also of great depth. Richardson maintains throughout the book the spirit of mathematical rigor. He begins with logical systems, then moves through the customary progression from counting numbers through to complex numbers, arithmetic, algebra (including group theory), functions,calculus, probability, even non-euclidian geometry and transfinites. All this in about 550 very well-written pages! It is a book that for forty years I have been able to pick up, ever confident that I would come upon an interesting passage or chapter. I always like learning the history of the mathematical concept. I find that this helps with understanding. A book that does this well is “Zero: The Biography of a Dangerous Idea” by Charles Seife. Numbers, series, and integrals are among the many mathematical topics covered in the history of zero. Sticking to just one, I’ll recommend “Forever Undecided: A Puzzle Guide to Godel”, by Ray Smullyan. It’s fun, challenging, and introduces some serious but sexy mathematics. Also, all of Joshua’s suggestions sound good. Respectfully, I am going to recommend against some of the others: (i) Polya; better to try solving some problems (ii) the Singh books ; basically pop science (iii) Gardner; generally good, but the excellent Moscow Puzzles is very much better. Most useful mathematics book: Advanced Mathematics for Engineers and Scientists. 
Yes, it's a Schaum's Outline; it's easily the most useful mathematics textbook I ever had and I wish I had a copy earlier in life.

Most inspiring sciences book: A Short History of Nearly Everything by Bill Bryson. This book puts the sciences into perspective. It tells a story of the knowledge of the earth and the way that knowledge was attained.

Coincidences, Chaos, and All That Math Jazz. Fantastic book, very accessible and actually fun to read. Won't actually teach anything; it's more a way to grasp concepts like infinity, dimensions beyond the third, fractals, etc. It's perfect for high school.

• http://arcmathblog.blogspot.com/
I have a manuscript posted on my math blog that some of my students are reading to get better acquainted with basic calculus concepts. It's written for people who have a passing acquaintance with algebra and a dash of trig. Folks are welcome to take a look: A Stroll through Calculus: A guide for the merely curious.

• http://rightshift.info
"Differential and Integral Calculus" - Richard Courant. Most of my undergraduate peers (in my country) understand calculus as a series of derivative and integral formulas for standard functions. They can't appreciate what a limit or divergence means. A good book on calculus must build stuff from the ground up. That apart, "One, Two, Three . . . Infinity" by George Gamow was a fun general read. And as someone already mentioned, Feynman's Lectures are a must.

• http://whenindoubtdo.blogspot.com/
They could have a lot of fun reading Neal Stephenson's Cryptonomicon.

Why not linear algebra? It's not exactly traditional high school material, but it doesn't really have any prerequisites (beyond high school algebra and complex numbers), and it's full of easy-to-visualize examples. Elementary group theory would also be appropriate, I would think. I recommend "Linear Algebra Done Right" by Sheldon Axler, although it's clearly intended for math rather than science students.

• http://diracseashore.wordpress.com/
I think that the book that I wish I'd read as a teenager (assuming it had been written then) is "Conceptual Mathematics".

While in school, someone had gifted me Courant & Robbins' 'What is Mathematics?' That book was a revelation. But, if I were to go back in time, I'd give myself Weyl's 'Symmetry'.

• http://www.phys.uconn.edu/~yerubandi
What about the following books?
J. Weeks, The Shape of Space
T. Needham, Visual Complex Analysis

Flatland is excellent, but there is also a set of four books called "from one to infinity" that is a collection of all the major papers of mathematicians throughout history. A 15-year-old girl who hated math said it was the best book she had ever read. Ever!

• http://tristram.squarespace.com
From Here to Infinity by Ian Stewart. The Knot Book by Colin C. Adams.

How about "Number Theory in Science and Communication: With Applications to Cryptography, Physics, Digital Information, Computing and Self-Similarity" by Manfred Schroeder? Minimal background needed and, as the title suggests, it has an applied orientation.

I think that some basic differential geometry/vector calculus/complex analysis will be suitable. For tensor calculus I think the best book is Borisenko and Tarapov, an old Russian book, one of the best in the subject (a good book is also Introduction to Geometry by Coxeter).
Probability theory is also a very good addition to the curriculum; for instance the book by Kolmogorov: not the classic Foundations of …, but a small, really interesting, high-school-level book, which goes all the way from dice to the central limit theorem. It is really interesting to learn the subject from the master's book.

• http://www.pieter-kok.staff.shef.ac.uk/
I'm a high school physics teacher. There has been an explosion of extremely enticing math books for the general reader (I really think Hawking's "Brief History" was the start of this.) Many are published by Princeton, some by Johns Hopkins. The three big names are Ian Stewart, Paul J. Nahin and Eli Maor. Another reader above suggested abstract algebra; Stewart's "Why Beauty is Truth" goes a long way toward addressing group theory for high school students. Barry Mazur and Simon Singh have also published good books for high school students and the general public.

For logic, it's very hard to beat Raymond Smullyan's books. My favorite is his first, "What is the Name of This Book?", which gets to Gödel's theorem via jokes and riddles.

For classics, there are these: "Calculus Made Easy" by Silvanus P. Thompson, or the revised version with Martin Gardner, and Michael Spivak's largely unknown "The Hitchhiker's Guide to Calculus". Courant and Robbins, "What is Mathematics?" is another classic, as is Hilbert and Cohn-Vossen's "Geometry and the Imagination". These are a little dry, as is Weyl's "Symmetry".

I think the suggestion of the Feynman Lectures is not really a good one for most high school students. On the other hand, Feynman's "Character of Physical Law", though not strictly mathematical, is an excellent choice. Better yet in my opinion is "Feynman's Lost Lecture", which is actually mathematics (that the inverse square law leads to elliptical orbits, done with very little more than Euclidean geometry.)

Finally, some of the quirky books of Lillian and Hugh Lieber have recently been reprinted. These are "The Education of T. C. Mits", "Infinity" and "The Einstein Theory of Relativity". (Disclaimer: I LaTeXed and helped to edit the last, though I have no financial interest in the book's success.)

Roger Penrose - "The Road to Reality": a breathtakingly beautiful survey of mathematics and physics
Sheldon Axler - "Linear Algebra Done Right": sharp, engaging introduction to mathematical reasoning and proofs, through linear algebra
Harold Abelson & Gerald Jay Sussman - "Structure and Interpretation of Computer Programs": ingenious introduction to, well, I'm not really sure… "programming" would be a vast understatement. Its arc spans from Ackermann's function, to symbolic computation of polynomials, to an interpreter for the Scheme programming language (which the book is written in, by the way)
Richard Feynman - the lectures, volume 1: no introduction needed

• http://blueollie.wordpress.com/
I'd suggest NO EXTRA CREDIT at all. What happens is that students get used to this option and then show up at college and expect a "way out" of learning the assigned material. But as far as reading, Abbott's "Flatland" and (forget the author) "How to Lie with Statistics" are both readable and good. The Shape of Space by Weeks will probably be too much for them, but has lots of cool pictures.
Neal Stephenson's Anathem would be particularly relevant to students.

For an introduction to advanced mathematical ideas that should interest, rather than frighten, high schoolers, I think Burger & Starbird's Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas is a good choice. It's heavy on the concepts but light on the complicated math. I've had Starbird as a professor before, and he's extremely talented at teaching non-math majors about weighty mathematical concepts. I've seen him teach topology to run-of-the-mill liberal arts majors with success.

I have yet to find a suitable math book for high school students. The problem is that in math teaching, little is done to present the material so that it looks very exciting. What you need to do is to present some easy-to-understand spectacular stuff that will motivate the students to spend a lot of time studying math. E.g., you could teach modular arithmetic and then immediately show the power of this method. You can give an elementary proof of Fermat's little theorem (a^(p-1) = 1 mod p), the Chinese Remainder Theorem, Euler's generalization of Fermat's little theorem, etc. Calculus: why not explain Taylor expansion intuitively (just fit a polynomial by requiring that the derivatives match). Then a spectacular result would be to use it to give Euler's intuitive nonrigorous argument that zeta(2) = pi^2/6. Also, use that to prove that the probability that two large integers have no common factor is 6/pi^2. If students are exposed to math in this way, then they will like it a lot more and be prepared to go through the tedious rigorous proofs. (A short computational sketch of both of these facts appears at the end of this thread.)

• http://www.soulphysics.org
Richard Courant's What is Mathematics? This is the mathematician's classic. It's a serious introduction to the field. It's also readable in high school, and has the amazing propensity to make you want to do graduate work in math before you even get to college. (That's what happened to me, anyway.)

I teach mathematics at a highly selective liberal arts college in the Midwest, and I have assigned as supplementary readings a number of excellent books written for a general audience that do a wonderful job of conveying the joy of mathematical discovery. I've seen two of them in the preceding comments, but I'll repeat them here to add another vote for their selection.
Chaos, by James Gleick
The Shape of Space, by Jeffrey Weeks
Sync, by Steven Strogatz
Gödel, Escher, Bach, by Douglas Hofstadter

I haven't read a ton of math books, but I did like A History of Pi by Petr Beckmann. It describes the development (or estimation) of the value of pi through the centuries.

Four Colors Suffice: How the Map Problem Was Solved, by Robin Wilson. This is one I haven't seen among the many great suggestions. Four Colors Suffice is great because it connects an old problem to its recent solution, addresses a problem that many people wouldn't even think of as math (map coloring), and actually does a great job of explaining the actual proof, in addition to giving the story of the people who did it. Early in the book readers learn how to prove that any map can be colored with six colors. This is pretty easy. Later they learn how to prove that any map can be colored with five, which is more difficult and includes all of the essential elements of the four color proof.
The four color proof was eventually done with the assistance of a computer, which is an interesting twist to the story. It talks about many related problems as well as giving the stories of the many interesting people involved. Also, it's not very long. It has all of the equations that are needed, but they are very simple formulas because it is a mapping problem. There are far more pictures than equations. I teach high school too; I know these will be a primary consideration for some kids. However, even with a Ph.D. in physics, I found this book fascinating. When I was that age, I read A History of Pi in one sitting after picking it off a friend's shelf.

You might want to check out The Mathematical Experience by Hersh & Davis. I recall reading this around high school age. It consists of short, self-contained essays, so you could pick and choose. And I second the notion that there should be no such thing as extra credit. Ever.

• http://scienceblogs.com/sunclipse/
I read a fair number of math-y books when I was that age, but in thinking back, it's easier to come up with glitz and glamour than it is to recall books which actually helped develop the mathematical skills I use as a science person. Knuth's Surreal Numbers was fascinating, and I probably got some indirect benefits from encountering proof techniques (induction and such), but how often do physicists use surreal numbers? Moreover, books on more recent developments (like Singh and Aczel's books on Fermat's Last Theorem, or Keith Devlin's various attempts to popularize the Riemann hypothesis) have too many gaps, too many places where the abstruse sophistication of the mathematical arguments is glossed over with a bit of narrative. It's fun, yes, and it keeps the enthusiasm stoked, but you can't actually solve problems in group theory by applying the life story of Évariste Galois. And, if you can't actually use the mathematics, it's not really part of your life, is it? Having issued all these caveats, then, here are some books I've enjoyed, ranked in increasing order of "I could do stuff after having read this":
Surreal Numbers, by Donald Knuth
The Book of Numbers, by John Conway and Richard Guy
QED: The Strange Theory of Light and Matter, by Richard Feynman
Chaos, by James Gleick
The Cartoon Guide to Statistics, by Larry Gonick and Woollcott Smith
The Manga Guide to Statistics, by Shin Takahashi
Also, I'm midway through Douglas Hofstadter's I Am a Strange Loop and Marcus du Sautoy's Symmetry: A Journey Into the Patterns of Nature (originally published as Finding Moonshine in the UK), and I bet I would have liked both of them when I was a ninth-grader.

I very much enjoyed John Derbyshire's book on the Riemann Hypothesis. Blake Stacey, above, seems to downplay an apparently similar book by Keith Devlin, which I don't know, but Derbyshire held my interest throughout, and I suspect the book might help a bright high schooler appreciate the power of mathematics. Another one would be Morris Kline's Mathematical Thought from Ancient to Modern Times. Sure, it's a history book, but Kline teaches a lot of mathematics along the way. I continue to find the amount of mathematics known to the Babylonians, Greeks, Egyptians et al. quite staggering, and I hope that your high-school students would feel the same.

• http://www.savory.de/blog.htm
Beyond Numeracy. I read from a great text in a college course on the history of math; it would be great for high school because it is split up into short stories, a sketch for each element explained.
Berlinghoff and Gouvêa, Math Through The Ages. Published by the Mathematical Association of America.

Another vote for Gödel, Escher, Bach.

For a more personal touch: "Men of Mathematics" by E. T. Bell. Although Bell doesn't always get the facts right, the book is a very entertaining look at the lives and work of famous mathematicians of the past.

I was the sort who read lots of the books mentioned above at that age (at least the ones that were published then…). The most outstanding were:
Weeks, The Shape of Space
Hilbert and Cohn-Vossen, Geometry and the Imagination
Hofstadter, Gödel, Escher, Bach
You can give them Penrose, either The Road to Reality or, maybe better, The Emperor's New Mind. If they're like me, they will only understand a fraction, but the writing will set them on the right path.

I don't think anyone has mentioned Stewart and Tall, The Foundations of Mathematics. Stewart's popularisations have been mentioned, but this is his textbook specially designed to make the transition between school mathematics and the rigour of university study. I learnt how to appreciate 'real' mathematics by self-study of that book and would highly recommend it.

I am a high school math teacher as well and there are some interesting books on the list. Here are a few not mentioned: Math Devil (an odd little book), Conned Again, Watson (Sherlock Holmes and Dr. Watson take on cases with mathematical solutions) and Journey Through Genius (a very well written, with lots of real math, history book).

THE UNIVERSE IN A TEACUP by K.C. Cole

Another vote for "Gödel, Escher, Bach" - this can be, no hyperbole, a life-changing book for the mathematically inclined. I'd also suggest "When Least is Best" by Paul Nahin, which is a fantastic book about mathematical optimization. It's a very accessible but also very rigorous introduction to some of the most interesting and important applied mathematics out there.

One I wish I had read in high school is "How to Prove It" by Velleman (sp?). Very useful for someone who intends to take maths in university but has not taken a class on how to write proofs.

Frankly, most of these professionals have lost sight of the reality of HS: when you have hot blood flowing through your veins, it's hard to concentrate on anything, much less the king of abstraction, mathematics. Nonetheless, I'd wager 95% of your students are NOT going to major in mathematics. The bright ones will mostly go into engineering in college or engineering technology. Hence they need practical math skills, & not a cursory acquaintance with stuff they will rarely use. All the above esoteric math refs might be OK for gifted students, but the average Jane/Joe Schmoe will ultimately use math as a tool to facilitate earning a living, not an end in itself. Do not short-change them. Equip them with practical algebra, trig, and elementary calculus so they leave with a diploma that empowers them, not retards or intimidates them. I have taught college physics in 4 states, at 4-yr universities, community colleges, & tech schools. Without question, math is the primary hangup. Conquer that, and all walls fall.

• http://www.phys.uconn.edu/~yerubandi
I'd second the nomination of Linear Algebra Done Right, 2nd edition, by Sheldon Axler. It explains proof ideas for the mathematical novice alongside the actual material.
And it's probably the next course the student will study in college anyway, except he'll probably be forced to learn it the "wrong way": through matrices, row reduction, systems of equations, and so on.

I agree with Simon Singh's Fermat book and James Gleick's Chaos. Both make math seem exciting and you will probably take at least a little bit of math with you from reading them. Courant's book is very nice, but I doubt that anyone but the already quite interested will want to read it. If you are a math geek it will be perfect, though.

Calculus by Spivak: get it done right.

• http://scienceblogs.com/sunclipse/
John T. Scott, I was thinking of the chapter "Hard Problems About Complex Numbers" in Keith Devlin's Mathematics: The New Golden Age. It's a good book, and if you want an introduction to Mersenne primes or Cantorian higher orders of infinity, it's probably great for your purposes. (I believe there's since been a revised edition which updates the chapter on Fermat's Last Theorem and such.) My only issue is that I'd have to rank it only middling on the "I could actually do math and science after reading this" scale. (Yes, there are all sorts of factors influencing how much practical competence one gains from a book or a class, including one's self-discipline, but holding all else constant, there's still a gradation of books in this regard, I believe.) It might be heresy to mention a TV show in a thread about books, but I have to plug the Caltech production Project Mathematics!, which is an all-around nifty treatment of geometry and trigonometry.

• http://www.users.bigpond.com/pmurray
"I'd wager 95% of your students are NOT going to major in mathematics. The bright ones will mostly go into engineering in college or engineering technology. Hence they need practical math skills, & not a cursory acquaintance with stuff they will rarely use."
Well in that case, boolean algebra and math that relates to computing is a must. I don't know what gets taught in computing classes these days: probably "how to use Microsoft Word to prepare a job application". If you are interested in exposing the students to the joys of abstract math, then LISP and Prolog might be the go. There are free interpreters out there. Or: why not teach boolean algebra by having them assemble logic circuits on breadboards? The real rudiments of computing: AND, OR and NOT gates, and making the LEDs flash. The chips are reasonably cheap, I think. You can go on to groups and modular arithmetic and whatnot from there.

One, Two, Three . . . Infinity, by Gamow. Old, but as good as ever. It fascinated me in high school and decades later I startled a nephew by giving him a copy.

The Archimedes Codex by Reviel Netz, William Noel

Who Is Fourier?: A Mathematical Adventure, by Transnational College of LEX. This book was written by members of a Japanese educational commune who devote themselves to innovative learning styles. They learn languages by immersing themselves in language: they say they expose themselves to 11 languages simultaneously, and it works! They were interested in Fourier series, as they related it to understanding sound, which was related to their interest in language, and they set out to understand it creatively. This book has cartoonish pictures, but don't get put off. It sets out to derive the concepts it needs from the ground up. I love math books that lead me along a chain of logic that makes everything fall into place. This book starts by explaining graphs and trig functions, and goes through Fourier transforms.
I love Courant's calculus, because it explains everything (although Courant keeps making comments that his work is simplified and not truly rigorous!), and I love this book for the same quality, although it is at the opposite end of the spectrum from Courant, which is about as formal as one will be exposed to these days. This book is unique. Check it out; I cannot do it justice.

anon at 11:36 said: "While in school, someone had gifted me Courant & Robbins 'What is Mathematics?' That book was a revelation." Same happened to me. That book made me a mathematician. I also strongly second Feynman's Physical Law. It takes away the guilt from wanting to be a mathematician.

To Jimbo, who claimed hormones make study difficult: I disagree. Mathematics is the only thing gripping enough to take your thoughts away from sex. Or so it seemed to me as a teenager.

Jimbo has it EXACTLY right. It's hard to fathom how truly BAD most of these suggestions are (no, I don't have any better ones other than seconding Mark's very valuable and pragmatic suggestion). The fraction of HS students interested in pure mathematics is probably one or two orders of magnitude LOWER than those interested in a career in physics or astronomy. I'm very happy when I get students out of HS who are not 1) totally innumerate 2) totally turned off by math and science (almost always by bad secondary teaching). Oh, want to change these numbers? Go volunteer at a high school or your local community college (and not just to teach the "smart" kids!). (My own teaching career mirrors Jimbo's pretty well, by the way.)

"Concise Introduction to Pure Mathematics", Martin Liebeck - a great book that I read a couple of months ago, and I'm in high school too.

• http://home.comcast.net/~djmpark/
John Stillwell, Numbers and Geometry; Mathematics and Its History

CoolStar, comrade in arms… Merci beaucoup, Monsieur! Paul Murray… ask them to count in base 2 to 64. Methinks you will get a sobering lesson in math reality, and soon forget about logic circuits… Better to focus only on soldering! The state of American HS math is worse than the American economy… And that's sayin' a lot.

Estraven: Thinking math might keep your mind off hot hormonal rushes is like hoping abstinence pledges will also!

I have to agree with coolstar & Jimbo. I can see that most people here don't deal with real high school students, outside of perhaps those occasional "feel good" science outreach programs. Looking at the book suggestions here, I can clearly see overenthusiastic math teachers who cannot communicate at all with students. So sad. If these students are already into calculus at all, I'd suggest "Calculus Made Easy" by Silvanus Thompson, or (even better) Martin Gardner's annotated reprint of the same book. As for the concept of infinity, Rudy Rucker's "Infinity and the Mind" is a good one - very entertaining! Lastly, if any of your students enjoyed Edwin Abbott's "Flatland", I'd also suggest "Flatterland" by Ian Stewart, and "Spaceland - A Novel of the Fourth Dimension" by Rudy Rucker.

Maybe "What is Mathematics?" by Courant, Robbins and Stewart. And "Does God Play Dice?" by Stewart.

"Another vote for "Gödel, Escher, Bach" - this can be, no hyperbole, a life-changing book for the mathematically inclined."
Absolutely. I only have A-level maths, but it's still the best non-fiction book I've read by some margin (sorry, Phil).

• http://bhr-ett.blogspot.com
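A comment above suggests motivating students with quick computational payoffs: Fermat's little theorem and the 6/pi^2 coprimality probability. Here is a minimal Python sketch of both; the modulus, trial count and integer range are arbitrary illustrative choices.

import math
import random

# Fermat's little theorem: a^(p-1) == 1 (mod p) for every a not
# divisible by the prime p. Check it exhaustively for one prime.
p = 101
print(all(pow(a, p - 1, p) == 1 for a in range(1, p)))   # True

# The probability that two large random integers share no common
# factor tends to 6/pi^2 ~ 0.6079. Estimate it by sampling.
trials = 200_000
coprime = sum(
    math.gcd(random.randrange(1, 10**9), random.randrange(1, 10**9)) == 1
    for _ in range(trials)
)
print(coprime / trials, 6 / math.pi**2)   # both ~0.608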
{"url":"http://blogs.discovermagazine.com/cosmicvariance/2009/02/08/mathematics-reading-list-for-high-school-students/","timestamp":"2014-04-17T13:35:44Z","content_type":null,"content_length":"169209","record_id":"<urn:uuid:123691f0-c992-49b4-8fe8-c576e17046c3>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00299-ip-10-147-4-33.ec2.internal.warc.gz"}
Volume of a sphere

August 25th 2009, 07:50 PM #1
Aug 2009
Volume of a sphere
i have a question that asks for the volume of the sphere using "Volume of Solids of Revolution". i know that the volume formula of a sphere is 4/3*pi*r^3 but how do i get to that, because when i integrate the area, i come up with 1/3*pi*r^3. i just need to know how to go from the integral of pi*r^2 to 4/3*pi*r^3. thanks in advance

August 25th 2009, 08:03 PM #2
Can I see the full integral please?

August 25th 2009, 08:08 PM #3
Aug 2009
"Can I see the full integral please?"
don't worry, i have figured it out. need to find the integral of r^2=x^2+y^2, not pi*r^2. thanks anyway
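For completeness, here is the disc-method computation the thread is gesturing at: rotate the semicircle $y = \sqrt{r^2 - x^2}$ about the $x$-axis, so each cross-section is a disc of area $\pi y^2 = \pi(r^2 - x^2)$, and

$V = \int_{-r}^{r} \pi (r^2 - x^2)\, dx = \pi \left[ r^2 x - \frac{x^3}{3} \right]_{-r}^{r} = \frac{4}{3}\pi r^3.$

Integrating $\pi x^2$ from $0$ to $r$ instead gives $\frac{1}{3}\pi r^3$, which is the volume of a cone, and is presumably where the original poster's $\frac{1}{3}\pi r^3$ came from.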
{"url":"http://mathhelpforum.com/calculus/99242-volume-sphere.html","timestamp":"2014-04-20T02:44:25Z","content_type":null,"content_length":"33245","record_id":"<urn:uuid:8f543d08-2329-4aab-959a-48780b8f74ea>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
Smallest number of items from given probability

March 30th 2011, 05:24 AM
Smallest number of items from given probability
Hey All, Another problem I'm confused with. A bag contains several blue discs and several yellow discs. Find the smallest number of discs which should be taken out of the bag, if the probability that at least 3 of those removed are of the same color is 1. Only thing I can see is that the least number must be > 3. How do I proceed if the number of items is unknown? I have a feeling I am missing something simple; a hint would be great. Thanks again for all your help!

March 30th 2011, 06:40 AM
It's hard to give a useful hint that isn't the answer itself. Try thinking along these lines: Suppose I took out 2 discs; is it possible that no 3 of them are the same color? Yes (obviously, since there aren't 3 discs!), e.g. 2 could be blue, none yellow. Suppose I took out 3 discs; is it possible that no 3 of them are the same color? Yes, e.g. 2 could be blue, one yellow (or the other way around). Keep going until you find that it is impossible (it won't take long!)

March 30th 2011, 08:37 AM
5! Either way you will have 3 of the same color. Thanks, seems so simple now that you showed the way. :)
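The pigeonhole argument behind the answer, in general form: with $c$ colours, the worst case is drawing $k-1$ discs of every colour, so $c(k-1)+1$ draws are needed to force $k$ discs of one colour with probability 1. Here $c=2$ and $k=3$, giving $2(3-1)+1 = 5$.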
{"url":"http://mathhelpforum.com/statistics/176312-smallest-number-items-given-probability-print.html","timestamp":"2014-04-18T00:57:09Z","content_type":null,"content_length":"4678","record_id":"<urn:uuid:34f22640-090c-4919-9629-98fd596d8ce0>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
Portability non-portable (rank-2 types, type families, scoped type variables) Stability experimental Maintainer Edward Kmett <ekmett@gmail.com> Safe Haskell Safe-Infered Based on the Functional Pearl: Implicit Configurations paper by Oleg Kiselyov and Chung-chieh Shan. The approach from the paper was modified to work with Data.Proxy and to cheat by using knowledge of GHC's internal representations by Edward Kmett and Elliott Hird. Usage reduces to using two combinators, reify and reflect. ghci> reify 6 (\p -> reflect p + reflect p) :: Int The argument passed along by reify is just a data Proxy t = Proxy, so all of the information needed to reconstruct your value has been moved to the type level. This enables it to be used when constructing instances (see examples/Monoid.hs). Reifying any term at the type level
{"url":"http://hackage.haskell.org/package/reflection-1.0/docs/Data-Reflection.html","timestamp":"2014-04-19T10:23:12Z","content_type":null,"content_length":"4937","record_id":"<urn:uuid:c313e6c9-e2e8-4c3e-b259-0feb6ada79dc>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
Show Inequality X^2 > a

January 14th 2007, 04:42 AM #1
Jan 2007
Show Inequality X^2 > a
If a > 0, show that the solution set of the inequality x^2 > a consists of all numbers x for which -(sqrt{a}) < a (sqrt{a}). Okay... what exactly is the question asking in simple math terms? I don't follow the logic behind this sort of reasoning.

January 14th 2007, 06:43 AM #2
Grand Panjandrum
Nov 2005
First there is a typo in your post: the phrase "consists of all numbers x for which -(sqrt{a}) < a (sqrt{a})" is probably mistyped as it is always true; there should be an x in there somewhere. If you sketch the graph of x^2-a (with a>0) you will see that this is >0 everywhere except between the roots of x^2-a, and when this is >0 we have x^2>a. So x^2>a when x<-sqrt(a), or when x>sqrt(a).

January 14th 2007, 07:18 AM #3
Global Moderator
Nov 2005
New York City
You have, for $a>0$,
$x^2 > a$
Express as,
$x^2 - a > 0$
$x^2 - (\sqrt{a})^2 > 0$
$(x - \sqrt{a})(x + \sqrt{a}) > 0$
To be positive we require both factors to be positive or both negative.
1) Both positive. That means,
$x - \sqrt{a} > 0 \mbox{ and } x + \sqrt{a} > 0$
Another way of writing this is,
$x > \sqrt{a}$
Because if the top inequality is true then certainly the bottom one, for it is contained in the top one.
2) Both negative. That means,
$x - \sqrt{a} < 0 \mbox{ and } x + \sqrt{a} < 0$
Another way of writing this is,
$x < -\sqrt{a}$
Because if the bottom inequality is true then certainly the top one, for it is contained in the bottom one.
$x>\sqrt{a} \mbox{ or } x<-\sqrt{a}$

January 14th 2007, 09:13 AM #4
Jan 2007
I want to thank both for your replies. You are right in making your statement. There is a typing error. I will now send the correct questions.
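A quick numerical check with $a = 4$: the claim is that $x^2 > 4$ exactly when $x > 2$ or $x < -2$. Indeed $x = 3$ gives $9 > 4$ and $x = -3$ gives $9 > 4$, while $x = 1$ gives $1 < 4$.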
{"url":"http://mathhelpforum.com/algebra/9994-show-inequality-x-2-a.html","timestamp":"2014-04-19T17:10:03Z","content_type":null,"content_length":"43280","record_id":"<urn:uuid:414a4c4b-1a7c-4cec-a03a-19549f8f29c8>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
Power Spectral Density, Welch Technique

Here is (5) from the paper roughly translated into Mathematica:

2 DeltaT/Nw Sum[
  Abs[1/Sqrt[Nd] Sum[w[[n]] x[[l, n]] E^((-I 2 Pi k (n - 1))/Nd), {n, 1, Nd}]]^2,
  {l, 1, Nw}]

(* x[[l, n]] is the n-th sample of the l-th data segment: the inner sum
   runs over the Nd samples of one segment, the outer sum averages the
   squared magnitudes over the Nw segments. *)

To use this you need to assign appropriate constant values to DeltaT, Nw, Nd, choose and initialize a windowing function in list w, and choose and initialize some plausible time domain signal list x. Then you should probably try using your data in both Matlab and Mathematica so you can compare the results, and we can start the process of trying to figure out why the results are completely different, what mistakes have been made, how to fix those, etc, etc, etc.

I would start with really really simple data. A single simple sine wave that exactly fills a single segment with complete cycles is always a nice start. Then the sum of two different sine waves. Then... you get the idea. Gradually ratchet it up with multiple windows and overlapping windows and more complicated time domain data, exterminating every small error before making the next step.

If Matlab anywhere in their documentation describes exactly what they do then we might avoid some of the confusion by trying to port the Matlab method to Mathematica, rather than me just grabbing the first random page that Google showed me and asking you if this was it.
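For anyone following along outside Mathematica, here is a minimal Python/NumPy sketch of the same idea. It is a simplified baseline, not a port of the paper or of Matlab's pwelch: it uses non-overlapping segments and a Hann window (both arbitrary choices here), with the usual spectral-density normalisation, so treat it as the simplest point of comparison rather than a drop-in replacement.

import numpy as np

def welch_psd(x, n_per_seg, dt):
    """Minimal Welch-style PSD: split x into non-overlapping segments,
    window each, average the segment periodograms. Returns one-sided
    frequencies and PSD (units of x^2 per Hz)."""
    w = np.hanning(n_per_seg)
    norm = dt / np.sum(w ** 2)            # periodogram normalisation
    n_seg = len(x) // n_per_seg
    psd = np.zeros(n_per_seg // 2 + 1)
    for l in range(n_seg):
        seg = x[l * n_per_seg:(l + 1) * n_per_seg]
        spec = np.fft.rfft(w * seg)       # DFT of one windowed segment
        psd += norm * np.abs(spec) ** 2
    psd /= n_seg                          # average over segments
    psd[1:-1] *= 2                        # fold negative freqs (one-sided)
    freqs = np.fft.rfftfreq(n_per_seg, d=dt)
    return freqs, psd

# The sanity check suggested in the post: a single sine that exactly
# fills each segment with complete cycles.
dt = 1e-3
t = np.arange(0, 2.0, dt)
x = np.sin(2 * np.pi * 50 * t)            # 50 Hz sine
f, p = welch_psd(x, 1000, dt)
print(f[np.argmax(p)])                    # expect 50.0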
{"url":"http://www.physicsforums.com/showthread.php?t=626458","timestamp":"2014-04-18T10:43:11Z","content_type":null,"content_length":"34267","record_id":"<urn:uuid:23f1bb2b-ad66-4da5-96ac-a8cdea85acd3>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
The Plus sports page: Power trip

Being the manager of a Premier League football club may seem like one of the most glamorous jobs in the world: with the fame comes fortune and the opportunity to travel (well, to Hull, Wigan and Portsmouth anyway). However, as far as job security goes, football managers live on the edge. Their terms can be terminated almost on a whim by their club's owner, and they live and die by their team's results. It would seem that there is no way to predict how long their tenures will be. However, a collection of researchers from the UK, Singapore and the US have found that there may be a strong mathematical trend underlying how long football managers stay in their jobs.

Toke S. Aidt, Bernard Leong, William C. Saslaw and Daniel Sgroi found that the distribution of tenure lengths for managers of sporting teams in many countries obeys a power law. Power laws are fascinating because they arise in a surprisingly large number of naturally occurring phenomena, such as the size of cities, stock market returns, cook book ingredients and even how many times certain words are used in long books. A power law has the form $f(x) = C x^{-\alpha}$, where $C$ and $\alpha$ are positive constants.

The graph of a power law with negative exponent.

To derive this formula, the authors plotted tenure lengths of real managers against their time of dismissal and then set out to find the curve of the form $N(t) = C t^{-\alpha}$ that best fits the data. For tenures greater than one year, they found that in the English Premier League, football leagues across Europe, and American football and baseball competitions, the data are fitted by a straight line on logarithmic axes, $\log N = \log C - \alpha \log t$, which is exactly the signature of a power law. Moreover, the fit is statistically significant, that is, it's not just due to chance. The following graph is for English Premier League managers between 1874 and 2005.

The logarithm of the length of managers' career plotted against the logarithm of the number of managers dismissed at that time of their career. The data can be approximated by a straight line and the fit is statistically significant. The data come from the English Premier League between 1874 and 2005.

But what does all this mean?

Add some more sand and you'll get avalanches.

As we mentioned earlier, power laws are compelling as they can emerge from simple mathematical rules: the power law is often a macroscopic outcome of microscopic interactions between the players in the system (in this case football managers, the team, club owners and fans, etc). In fact, power laws are often seen as the signature of complexity. In the 1980s scientists found that there are dynamical systems based on simple rules which, through self-organisation, bring themselves into extremely sensitive states, where even the smallest change can cause wide-ranging and unpredictable chain reactions. An often quoted example of this phenomenon involves a pile of sand. When you sprinkle sand on a table, a pile will build up and after a while reach a maximal slope: any additional grain of sand will cause avalanches whose number and size are impossible to predict. Such a sensitive state is called a critical state and this behaviour is called self-organised criticality. It is an interesting phenomenon, because it may explain the "spontaneous" emergence of complexity in nature, which is not a result of someone forcing the system. When a system has reached a critical state through self-organisation, it can often be described by power laws. In our sand example, the size distribution of the avalanches follows a power law. Power laws reflect complexity because they are similar on all scales.
Suppose that the number of avalanches of size $s$ is given by $N(s) = C s^{-\alpha}$ for some constants $C$ and $\alpha$. Then the number of avalanches of size $2s$ is $N(2s) = C(2s)^{-\alpha} = 2^{-\alpha} N(s)$, which, apart from the constants involved, is essentially the same as that for smaller avalanches: the same type of behaviour occurs on all scales.

Given that the power law highlights the fact that there is something interesting going on, the researchers set out to find out what it was. What are the simple rules of football management that govern this system, and is there self-organised criticality?

The model

The authors constructed a model which includes a manager's reputation: this is either enhanced or diminished, depending on the result of each match. The core of the model is a round-robin tournament with 20 teams playing each other once at home and once away, just like in the Premier League. The probabilities of win, lose and draw were modelled as 37%, 26% and 37% respectively; these probabilities are those observed in the English football league between the years 1881 and 1991 and are assumed to be independent of the managers involved.

The model starts with 20 randomly selected managers, each with a given reputation and tenure. (With a nod to realism, we will henceforth assume that all managers are male.) The initial reputation of each manager is described by a positive whole number, which is chosen at random from the numbers between the firing threshold and the poaching threshold (more on these in a moment). Each manager also starts with a random tenure length between 1 and 40 years. The managers gain reputation (+2 points in the model) every time their teams win, and lose reputation (-2 points) when their teams lose. There are no points for draws. Each game has equal importance and so each result is equally important for a manager's reputation.

The length of a manager's tenure depends on how his reputation evolves. Termination of tenure can occur for four reasons:
• The manager loses his job when his reputation falls below the firing threshold, that is, he is sacked;
• The manager is poached by another club when his reputation reaches the poaching threshold, that is, he gets a better deal;
• The manager retires if he gets too old (another parameter that can be varied);
• The manager's team is relegated to a lower league because it has the lowest reputation at the end of the season, and the team is demoted out of the league.

When a manager leaves the system, that is, when he is fired, poached, relegated or retired, his place in the league is taken by another manager with tenure length of zero and a random starting reputation.

With these rules in place, the researchers ran many simulations, varying the random parameters in each run. Such a process is known as a Monte-Carlo simulation. They recorded the distribution of tenure lengths corresponding to one hundred years of competition. They found that for a very broad range of starting parameters, the model produced a tenure length distribution statistically indistinguishable from a power-law distribution. Similar results were obtained for different probability distributions of win, loss or draw. However, the researchers also found that power laws only emerge when a win enhances reputation by the same amount as it is decreased by a loss, and when each match has equal importance. (A sketch of how these rules might be simulated appears at the end of this article.) The latter makes sense if you think that the aim of a Premier League team is to maximise its profit: you need to fill the stadiums and make as much advertising revenue as you can at each game.
And as the Premier League is a first-past-the-post competition, each win has equal worth on the league table, with position on the table more than anything guaranteeing further advertising and merchandising returns.

Coming back to self-organised criticality, the researchers admit that their model does not prove the existence of this phenomenon in the world of sport. In fact, the model is not quite as self-organising as it could be, since certain parameters need to be artificially fixed at the outset. They do believe, however, that certain other factors point in the direction of self-organised criticality. The Premier League, they postulate, follows the Red Queen principle: it is an arms race where constant development is needed simply to compete. This explains why once a league has reached a self-organised critical state, it might stay there for a prolonged period of time. It is simply too difficult for a team to shake up the system, given that they are already in a process of continual change in order to stay with the pack. The term Red Queen comes from Lewis Carroll's Through the Looking-Glass, in which the Red Queen says: "It takes all the running you can do, to keep in the same place".

What the results surprisingly show is that ability and talent, although obviously playing some role, do not play a major role in a manager's success. His survival is far more determined by the sacking and poaching thresholds and simple randomness in his team's results. 2007 Chelsea manager Avram Grant was a good example of this: as he started his tenure with a low reputation, despite his team's good results, probabilities took their toll and he was sacked at the end of the season. In any case, it's hard to feel sorry for prematurely sacked Premier League managers when their average salaries are over £2 million.

Marc West

Further reading

Marc West is a freelance science writer and former Assistant Editor of Plus who currently works in operations analysis in Sydney. As a wannabe Australian cricket player, the stars aligned when Marc somehow scored 114 against Mount Colah in a Sydney shires cricket game. He loves to write about science and sport and has been published in a variety of magazines and newspapers. You can read more of his writing on his personal blog.
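For readers who want to experiment, here is a minimal Python sketch of the model as described in the article. The threshold values and initial ranges are illustrative guesses rather than the authors' exact parameters, and the retirement rule and random initial tenures are omitted for brevity.

import random

FIRE, POACH = 0, 100            # firing / poaching thresholds (assumed)
P_WIN, P_LOSE = 0.37, 0.26      # remaining 0.37 is a draw

def play_season(reps):
    """Round-robin: every team plays every other once at home, once away."""
    n = len(reps)
    for home in range(n):
        for away in range(n):
            if home == away:
                continue
            r = random.random()
            if r < P_WIN:                 # home manager's team wins
                reps[home] += 2
                reps[away] -= 2
            elif r < P_WIN + P_LOSE:      # home manager's team loses
                reps[home] -= 2
                reps[away] += 2
            # a draw changes nothing

random.seed(1)
reps = [random.randint(FIRE + 1, POACH - 1) for _ in range(20)]
tenure = [0] * 20
completed = []                            # recorded tenure lengths
for year in range(100):                   # one hundred years of competition
    play_season(reps)
    relegated = min(range(20), key=lambda k: reps[k])
    for k in range(20):
        tenure[k] += 1
        if reps[k] < FIRE or reps[k] >= POACH or k == relegated:
            completed.append(tenure[k])   # sacked, poached or relegated
            reps[k] = random.randint(FIRE + 1, POACH - 1)
            tenure[k] = 0
print(len(completed), max(completed))     # how many departures; longest tenure

Plotting the histogram of the completed tenures on log-log axes is the natural next step if you want to look for the straight line the article describes.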
{"url":"http://plus.maths.org/content/os/latestnews/sep-dec08/managers/index","timestamp":"2014-04-20T00:47:57Z","content_type":null,"content_length":"43036","record_id":"<urn:uuid:2c51b7be-56d6-4afc-a7b1-c39379074179>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
digitalmars.D - Re: 3D Math Data structures/SIMD

Janice Caron Wrote:
If you wanted to go even more general, you could go beyond std.matrix and head into the murky waters of std.tensor. Tensors are a generalisation of the progression: scalar, vector, matrix, ....

be careful with that! a matrix is an algebraic structure very different from vectors and tensors. you could say that a tensor is a generalization of a vector (a vector is a rank-1 tensor) or that a tensor is a generalization of a scalar (a scalar is a rank-0 tensor), however a matrix is a different thing. think of a matrix as a two-dimensional array with several algebraic rules and operations. matrices are a way to represent and operate on a bunch of numbers. matrices serve to represent scalars, vectors and rank-2 tensors. mathematically speaking, matrices are higher level than arrays but lower than tensors. Tensors are "geometric" entities that are independent of the coordinate system. Tensors are much more conceptualized than plain algebraic matrices, which are a very particular tool to represent the former. if you were to expand a high-rank tensor product, representing the corresponding slices with matrices and their algebra should help you do that. IMHO when implementing mathematical concepts into the language, the A&D phase should be that given by the mathematics; its primitive conceptual design should be retained.

Think of a scalar as a zero-dimensional array; a vector as a one-dimensional array, and a matrix as a two-dimensional array. A scalar is a tensor with rank zero; a vector is a tensor with rank one; a matrix is a tensor with rank two. This completely generalises for tensors of arbitrary (non-negative integer) rank. (There is a complication though, in that you have to distinguish between contravariant and covariant indices.) If tensor mathematics were implemented, vectors and matrices could be trivially implemented in terms of tensors. See http://mathworld.wolfram.com/Tensor.html (That might be going a bit further than people are ever going to need though! :-) )

if it happens to be well implemented, i don't think so. cheers!

Dec 22 2007
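A small NumPy illustration of the rank progression being discussed. Note that the analogy is loose, exactly as the posters warn: an array's number of axes mirrors tensor rank, but plain arrays carry no basis information and no covariant/contravariant distinction.

import numpy as np

scalar = np.array(3.0)             # rank 0: shape ()
vector = np.array([1.0, 2.0])      # rank 1: shape (2,)
matrix = np.eye(2)                 # rank 2: shape (2, 2)
rank3  = np.zeros((2, 2, 2))       # rank 3: shape (2, 2, 2)
print(scalar.ndim, vector.ndim, matrix.ndim, rank3.ndim)   # 0 1 2 3

# A matrix is one way to *represent* a rank-2 tensor in a chosen basis.
# Contraction of a rank-2 object with a rank-1 object, in components:
print(np.einsum('ij,j->i', matrix, vector))    # same as matrix @ vector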
{"url":"http://www.digitalmars.com/d/archives/digitalmars/D/Re_3D_Math_Data_structures_SIMD_64093.html","timestamp":"2014-04-19T12:11:30Z","content_type":null,"content_length":"11093","record_id":"<urn:uuid:de72e36e-60ca-4a82-aea5-39ca5a8fd0a6>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00441-ip-10-147-4-33.ec2.internal.warc.gz"}
There have been numerous occasions where I have to pick one of two records out of a given result set, but how I determine which one has some unique rules. I have two examples of this, and I show how I used Row_Number() to solve this.

The first set of data has these sets of rules:
• For any given ISBN13, give me the 'Acquisition Editor' if they exist and if they do not exist, provide the 'Editor', known as [Role].
• If there are multiple persons for the chosen [Role], pick the one with the lowest [SortOrder].
• If [SortOrder] is null, pick the first name alphabetically.

Here is the solution:

select ISBN13
from (select ISBN13,
             ROW_NUMBER() OVER (partition by ISBN13
                                order by [Role],
                                         isnull(cast(SortOrder as varchar(40)), DisplayName)) as Row
      from dbo.ProductContact
      where C006_Role in ('Acquisition Editor', 'Editor')) d
where Row = 1

The second set of data had the following rules:
• For a given ISBN13, provide the sum of the 'First Printing', which can be under either PrintNumber '0000' or '0001'.
• PrintNumber '0000' is always chosen over '0001' if it exists.

Here is the solution:

select isbn13,
       First_Run
from (select isbn13,
             Sum(Print_Run) as First_Run,
             ROW_NUMBER() OVER (partition by isbn13
                                order by Printing_Number) as Row
      from dbo.PPB
      where Printing_Number in ('0000', '0001')
      group by isbn13, Printing_Number) d
where Row = 1
{"url":"http://www.josefrichberg.com/","timestamp":"2014-04-20T14:03:07Z","content_type":null,"content_length":"49609","record_id":"<urn:uuid:cb7b4d36-c2ec-46ee-9f71-6e2c72374272>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
Random number generator isn't working???

Do you understand how the % operator works?

Hmm. All I know is that it gives the remainder of an operation. On other sites I checked their methods of doing this and it involved doing the same thing with rand() then adding a number that was to be the minimum...

And do you understand operator precedence? rand() returns a random number in the range 0-RAND_MAX. rand() % 115 returns a number in the range 0-114. Note that 115 < RAND_MAX.
85 + rand() % 115;
This just adds 85 to the above number, so what you get is a number in the range 85-199.

Sigh. That's what I get for copying down the code blindly off the book. I see the issue now. However, the number keeps going up anyway - I suppose that is part of the time function? Is there any way to generate random numbers one after the other without them being too lined up? (Like for example I keep getting increases such as 56, 65, 85, 114, 8, 15 as though it were on a

Do you get these numbers by running the program many times? It probably has to do with the seed following the clock. If you run the program twice in the same second you will even get the same exact number. Generating many random numbers within the same run of the program will probably give numbers that appear to be more random.

I see, yea I was running the same program over and over again. Thanks much for the help!
{"url":"http://www.cplusplus.com/forum/beginner/83140/","timestamp":"2014-04-18T18:15:32Z","content_type":null,"content_length":"13577","record_id":"<urn:uuid:bd4f056e-a360-4a33-9bbb-4fa879115ee1>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
A mass m is released from height h on a block of mass m which rests on a smooth floor; after elastic collision with the surface the mass will rise to a height ?

Not sure, but the answer should be either h or 0.

So, the ball was not dropped vertically

Nope... and sorry i forgot to draw the pic!

(whiteboard sketch of the trajectory) The motion of the ball would be somewhat like that. Do u mean to calculate that h' in terms of h.

But the answer should be 2h/3 .... No prob... Show the way u did this..)

Velocity at bottom is sqrt 2gh

So.. according to u, h = h1

i mean vg' is not equal to 0...,

Is the ball rolling on the wedge? Even if it has pure translational KE, max height will be less than h because on top of the parabola, KE is not zero, due to horizontal component of motion.

Ball is on translation..)

No need to consider moment of inertia

@Vincent-Lyon.Fr @experimentX @siddhantsharan @ghazi @ajprincess @CliffSedge @akash123 @Aperogalics

Since it is taken as a particle, I will assume that after collision with the floor, it will jump at an angle of 45 degrees. To the maximum height on the other side, H: using \(v^2_y - u^2_y = 2aH\), where \(v^2_y = 0,\ u^2_y = 2gh \sin^2 45^\circ,\ a = -g\). Now sub it all in and we get \(H=\frac{h}{2}\). Did I miss something?

The answer should be 2h/3

@CliffSedge

If there is no friction... then I suppose the height should be halved...

i also got the ans as h/2. wonder where i'm going wrong.... is the ans really 2h/3 ?

It's H/2. Recheck your solutions.
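For the record, the projectile step under the thread's 45-degree assumption: the ball leaves the floor with speed \(v = \sqrt{2gh}\), so its vertical velocity component is \(v\sin 45^\circ\) and the extra rise is \(H = \frac{(v\sin 45^\circ)^2}{2g} = \frac{2gh\cdot\frac{1}{2}}{2g} = \frac{h}{2}\), which matches the final post. The book's 2h/3 would have to come from a different modelling of the collision (for instance, momentum exchange with a wedge that is free to slide), which this thread never settles.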
{"url":"http://openstudy.com/updates/50a9aa84e4b064039cbd14e9","timestamp":"2014-04-18T00:32:08Z","content_type":null,"content_length":"130053","record_id":"<urn:uuid:eec25183-7cd7-45b4-bc90-e0ed53545099>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
Barrier potential with N steps... (and she's climbing the potential stairway to la...)

Hey people,
So I have a problem where I have to find the transmissivity and reflectivity coefficients of a potential:
V = -(n^2 - 1)E in region n (where E is positive, constant and is the energy of the particle)
Note: V is negative for n = 2, ..., N (and zero for n = 1).
For each "n" the region is of fixed length "a", except the first and last ones, which go off to infinities.
I was thinking of approximating the staircase of the potential as a harmonic oscillator given N is large, but I don't really know how that problem should be solved; it's definitely not easier :)
i.e. V = 0 for x < 0
= -1/2 m(w^2)(x^2) for 0 < x < Na
= -(N^2 - 1)E for x > Na
Any help is appreciated!!
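One standard way to get the transmission and reflection coefficients exactly for a staircase of constant-potential regions, with no oscillator approximation, is the transfer-matrix method: match psi and psi' at each step and multiply the resulting 2x2 matrices. Below is a minimal Python sketch, with an arbitrary unit choice (hbar = 2m = 1) and illustrative values for E, a and N. Since E - V_n = n^2 E here, every wavevector is real (k_n = n*sqrt(E) in these units), so there is no tunnelling, only partial reflection at each step.

import numpy as np

E, a, N = 1.0, 1.0, 6
k = [np.sqrt(E - (-(n**2 - 1) * E)) for n in range(1, N + 1)]  # k_n = n*sqrt(E)

def step(k1, k2, b):
    """Match psi, psi' at x = b between regions with wavevectors k1, k2;
    maps coefficients (A1, B1) -> (A2, B2) of A e^{ikx} + B e^{-ikx}."""
    p, m = (1 + k1 / k2) / 2, (1 - k1 / k2) / 2
    return np.array([
        [p * np.exp(1j * (k1 - k2) * b), m * np.exp(-1j * (k1 + k2) * b)],
        [m * np.exp(1j * (k1 + k2) * b), p * np.exp(-1j * (k1 - k2) * b)],
    ])

S = np.eye(2, dtype=complex)
for n in range(N - 1):                  # boundaries at x = 0, a, ..., (N-2)a
    S = step(k[n], k[n + 1], n * a) @ S

r = -S[1, 0] / S[1, 1]                  # no left-moving wave in final region
t = S[0, 0] + S[0, 1] * r
T = (k[-1] / k[0]) * abs(t) ** 2        # flux-normalised transmission
print(T, abs(r) ** 2, T + abs(r) ** 2)  # last number should be ~1.0

The printed T + |r|^2 should come out as 1, which is a convenient correctness check; it follows from conservation of probability flux when all the wavevectors are real.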
{"url":"http://www.physicsforums.com/showpost.php?p=1930117&postcount=1","timestamp":"2014-04-20T05:44:03Z","content_type":null,"content_length":"9160","record_id":"<urn:uuid:0b1bcd30-c9ea-4ac1-a2ab-cdf57ca5dee6>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00069-ip-10-147-4-33.ec2.internal.warc.gz"}
Surface Area and Volume of a Sphere

Date: 05/16/2000 at 22:31:39
From: Randy S.
Subject: Surface Area and Volume of a Sphere

I am doing research for a project on why the coefficient in the formula for the surface area of a sphere is 4, and why 4/3 is the coefficient in the formula for the volume of a sphere. I have looked through your archives and have found that you can prove both by using an infinite number of pyramids inside a sphere, but is that the only geometric way? All the other methods used in the archives are too complicated for me (some use Calculus.) I also have some trouble understanding Archimedes' hatbox because I don't understand why they multiply cosine and latitude lines. I also don't understand how one person proved it by using integrals because I don't even know what an integral is. So could you explain it to me geometrically, or else try to find some other geometric ways to explain why the coefficients are 4 and 4/3?

Date: 05/16/2000 at 23:15:55
From: Doctor Peterson
Subject: Re: Surface Area and Volume of a Sphere

Hi, Randy. Don't worry about the integrals; that's calculus, and you'll need to learn a bit before you can follow the whole argument. The geometrical methods really use some of the ideas of calculus, but not the methods of calculus, so they can be followed more easily, though the work is more tedious.

It's not clear to me whether you saw this answer, which I think is the same as the hatbox, but doesn't mention cosines and latitudes, so it may be easier to follow: Volume of a Sphere. There I start by finding the surface area using that method, and then use the pyramids to get the volume from that.

There's another method that gets the volume directly using Cavalieri's theorem, which says that if every cross-section of two solids has the same area, then they have the same volume. We show that the sphere has the same volume as the cylinder circumscribed about the sphere, with a cone removed from each end. See Volume of a Hemisphere Using Cavalieri's Theorem

- Doctor Peterson, The Math Forum
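For reference, the one-line computation behind the Cavalieri argument: slicing the hemisphere of radius $r$ at height $h$ above its base gives a disc of radius $\sqrt{r^2-h^2}$ and area $\pi(r^2-h^2)$; slicing the comparison solid (a cylinder of radius $r$ and height $r$ with an inverted cone removed) at the same height gives an annulus of area $\pi r^2 - \pi h^2$. These agree for every $h$, so the volumes agree: the hemisphere has volume $\pi r^2 \cdot r - \frac{1}{3}\pi r^2 \cdot r = \frac{2}{3}\pi r^3$, and the full sphere has volume $\frac{4}{3}\pi r^3$.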
{"url":"http://mathforum.org/library/drmath/view/55230.html","timestamp":"2014-04-19T00:32:57Z","content_type":null,"content_length":"7445","record_id":"<urn:uuid:dddeb29c-b2c4-4781-ba42-9733aa9e4d0a>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Techniques in Astronomy

Mathematics is and always has been of central importance to astronomy. As soon as observations became quantified, the possibility for calculation and prediction based on observations was open to astronomers. Mathematical developments were both applied to and motivated by astronomical calculations, and many of the most famous astronomers were also mathematicians and vice versa.

Although techniques have become increasingly complex, the majority of mathematical astronomical techniques are concerned with positioning and calculation of relative distances of heavenly bodies. The basis of this is spherical trigonometry, which allows calculations on the celestial sphere based on observations taken from an observer on earth. The projection of the celestial sphere onto a flat surface allowed the construction of instruments such as the astrolabe and the mapping of the heavens. Techniques for increasingly accurate calculation were crucial to the development of astronomy as an exact science. It must be borne in mind, however, that not everyone studying or using astronomy was aware of or capable of applying the latest mathematical techniques. For example, there is evidence of a monk in northern France in the twelfth century positioning stars relative to architectural landmarks in his monastery, such as the windows along the dormitory wall.

The first developments of mathematical astronomy came during the Mesopotamian and Babylonian civilisations, especially during the Seleucid Kingdom (ca. 320BC to ca. 620AD). Techniques were developed for prediction of eclipses and positions of the heavenly bodies, in terms of degrees of latitude and longitude and measured relative to the sun's apparent motion. Tables were calculated and written for reference, based on arithmetic methods. These tables were available to the Greeks, who adopted, in areas of maths, many elements of the approach taken by the Babylonians (the sexagesimal system of calculation remained in use in astronomy right up to the Early Modern period). Many of the Egyptian methods developed for surveying could also be applied to mathematical problems in astronomy.

Among the techniques developed and improved by the Greeks were geometric solutions of triangulation problems, including their application to three dimensions. Systems based on combinations of uniform circular motions were proposed to explain and predict the motion of the heavenly bodies, Eudoxus being among the first to suggest a model based on the rotations of concentric spheres. This kind of model wasn't very accurate at predicting positions, but generated a new type of curve, the hippopede, which provided a new area of research for geometry. Other popular types of model were based on epicycles (the planet moves on a small circle whose centre travels along a larger circle centred at or near the earth) or eccentrics (the planet moves on a circle whose centre is displaced from the earth). The development of ever more complex models of the celestial sphere required more complex calculations, and more sophisticated geometry to back them up. Textbooks on the sphere were written, consolidating mathematical techniques for astronomy; these were called spherics. Mathematicians and astronomers including Hipparchus developed techniques for the measurement of angles, and tables for calculations with these angles.
Archimedes and Aristarchus studied the numerical ratios in triangles, and sophisticated theories and treatises on the application of these new ideas to astronomy were published. These texts were the precursors of spherical trigonometry, which became vital to astronomy. Ptolemy's Almagest summarised and advanced these techniques, and Hipparchus and Menelaus of Alexandria produced tables of what would today be called values of the sine.

The learning of the Greeks was transmitted to the Arabic world, whose scholars in turn added Indian and Chinese mathematical and astronomical texts to the corpus of works available. The Arab scholars improved and combined the methods they read about, predictive astronomy being central to many aspects of Islam. Important advances in mathematical techniques included al-Khwarizmi's predictions of the times of visibility of the new moon, and calculations of the qibla, the direction of Mecca in which to pray, from astronomical observations. The Arabs worked with the sexagesimal system inherited from the Babylonians via the Greeks, but often converted numbers to the decimal system for complex calculations since it was easier; they called base-60 numbers the arithmetic of the astronomers. They also incorporated elements of the Indian system, including adding zero to the number system. Thabit and Ibrahim developed geometric methods for sundials, including the solution of conic sections and the application of this to the construction of sundials. In the late 10th century Abu al-Wafa and Abu Nasr Mansur proved theorems of plane and spherical trigonometry and derived the laws of sines and tangents. Highly accurate tables and techniques for the calculation of trigonometric problems were produced. Abu Nasr's pupil, al-Biruni (973-1050), applied these techniques with great success to geographic and astronomical problems.

The Greeks had developed the astrolabe, but the Arabs applied their new techniques to its improvement and to the development of universal astrolabes that did not require separate plates for each degree of latitude. They perfected the techniques, described by the Greeks, for projecting the celestial sphere onto a flat plane and for marking the scales and lines that enable calculations of positions on the celestial sphere to be carried out on a flat surface.

From the end of the 10th century West European scholars became increasingly interested in the writings of the Greeks and Arabs, and translations were made of important texts. Astronomy was part of the quadrivium (arithmetic, geometry, astronomy and music) of mathematical subjects taught to students in church educational institutions. With the founding of the universities came an increased study of Greek and Arab texts, including the mathematics of astronomy. The techniques of spherical trigonometry and other important applied geometry were studied, commented on, and used to calculate astronomical tables for West European latitudes. For the next few centuries the majority of work on mathematical astronomy concentrated on consolidation and improvement of existing techniques. By the seventeenth century mathematics was becoming more institutionalised, with increasingly efficient means of communication between mathematicians and their colleagues. This enabled advances in mathematics to become widely known and applied more quickly.
The publication in 1614 of John Napier's work on logarithms was quickly taken up as a way of simplifying mathematical calculations in astronomy, and new logarithmic, trigonometric and astronomical tables followed. These included Kepler's Rudolphine Tables, which made great use of the new techniques and were based on elliptical orbits about the sun. The accuracy of tables and techniques increased quickly, as did the accuracy with which the heavenly bodies could be observed.

The development of calculus in the seventeenth century allowed changing quantities, such as the speed at which a body is moving, to be calculated with greater accuracy and ease. Developments in the representation of geometric quantities by algebraic expressions facilitated the further refinement of astronomical models. Increased understanding of the forces at work in the universe enabled calculations and predictions to take account of why things behaved as they did, and to build this into the mathematical models used for calculation.

The development of mathematical techniques for astronomy did not stop at the end of the seventeenth century, although much of the groundwork had been laid. In the following centuries more sophisticated mathematical methods, building on the fundamentals of trigonometry and calculus, were developed and applied to astronomy. The principles of spherical trigonometry underpin the calculations of modern astronomy, although the calculations are now carried out by computers rather than slide rules.
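To make the closing point concrete, here is a minimal sketch (my own illustration, not part of the original essay) of the kind of spherical-trigonometry calculation that underpins positional astronomy: the angular separation of two objects on the celestial sphere, computed with the spherical law of cosines.

import math

def angular_separation(ra1, dec1, ra2, dec2):
    # Spherical law of cosines; right ascension and declination in radians.
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    # Clamp to guard against floating-point values just outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, cos_sep)))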
{"url":"http://www.hps.cam.ac.uk/starry/mathematics.html","timestamp":"2014-04-20T10:48:23Z","content_type":null,"content_length":"10907","record_id":"<urn:uuid:f6fe7e16-038f-4b01-b8a6-c2e4e72b2633>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Post a reply

Hi Al-Allo!

Given a positive integer M>1, let PM denote the set of all prime factors of M. As in your example, if M=12 then PM={2,2,3} and if M=20 then PM={2,2,5}. As a further example consider M=105, so PM={3,5,7}. The first two (for 12 and 20) are multisets because they have more than one copy of a prime factor (two 2's in each). {3,5,7} is a typeset because it has only one copy of each prime.

The union of two sets, whether multisets or typesets, is obtained by choosing each type of prime seen and scanning all the sets for the maximum number occurring in any ONE of the sets. Then, using "u" for union, we get:

I: If M=12, N=20 then PMuPN = {2,2,3}u{2,2,5} = {2,2,3,5}.
II: If M=12, N=18, L=30 then PMuPNuPL = {2,2,3}u{2,3,3}u{2,3,5} = {2,2,3,3,5}.
III: If M=56, N=9, L=250 then PMuPNuPL = {2,2,2,7}u{3,3}u{2,5,5,5} = {2,2,2,3,3,5,5,5,7}.

The lcm of any two, three or more numbers is the PRODUCT of all the elements of this UNION. For example:

I: lcm(12,20) = X{2,2,3,5} = 2x2x3x5 = 60
II: lcm(12,18,30) = X{2,2,3,3,5} = 2x2x3x3x5 = 180
III: lcm(56,9,250) = X{2,2,2,3,3,5,5,5,7} = 2x2x2x3x3x5x5x5x7 = 63000

These products produce the smallest positive integer that each of the given numbers divides into with NO REMAINDER, because one can choose from the union a set looking like each of the sets obtained from the numbers M, N, etc.

Example: In the third example above, the union is {2,2,2,3,3,5,5,5,7}; M uses {2,2,2,7}, N uses {3,3}, and L uses {2,5,5,5}. If we replace the commas in these sets with "x" then we get lcm/M = (2x2x2x3x3x5x5x5x7)/(2x2x2x7), which is 3x3x5x5x5 = 9x125 = 1125, an integer. Doing likewise for N and L we get lcm/N = 7000 and lcm/L = 252. If we leave out just one factor from the union, then at least one of M, N and L will NOT divide into the product of the union with zero remainder.

NOTE: In a similar fashion the highest common factor hcf (or greatest common divisor gcd) of two or more numbers is the PRODUCT of the INTERSECTION of the prime factor sets: hcf(M,N,L) = X(PMnPNnPL) (and lcm(M,N,L) = X(PMuPNuPL) as above), where "n" represents intersection, which is based on choosing minimums (instead of maximums) when scanning each of the sets.

Be careful though. If at least one of the sets has NONE of a prime that OCCURS IN ANOTHER set, then the minimum is ZERO for that type of prime. For the three given examples:

I: PMnPN = {2,2}
II: PMnPNnPL = {2,3} (no 5's, since 5 is missing from one of the sets)
III: PMnPNnPL = { } (empty, since each type of prime is missing from at least one of the sets)

(NOTE: X{ } = 1 by definition, since X{a,b,c} means 1xaxbxc. Always start with a 1 factor.)

So hcf(12,20) = X{2,2} = 4
hcf(12,18,30) = X{2,3} = 6
hcf(56,9,250) = X{ } = 1.

The hcf is the largest integer which divides into each of the given numbers with no remainder.

I hope this will help you, though it may be more information than you wanted.

P.S. The edit was just to get some spelling corrected and to make the wording a little clearer.
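A small sketch of the same procedure in Python (my own illustration, using the post's example III): Python's Counter behaves as the prime multiset, with | taking the maximum count of each prime (the union, giving the lcm) and & taking the minimum count (the intersection, giving the hcf).

from collections import Counter
from functools import reduce

def prime_multiset(m):
    # Factor m into a Counter, e.g. 56 -> Counter({2: 3, 7: 1}).
    factors = Counter()
    d = 2
    while d * d <= m:
        while m % d == 0:
            factors[d] += 1
            m //= d
        d += 1
    if m > 1:
        factors[m] += 1
    return factors

def product(multiset):
    return reduce(lambda acc, p: acc * p, multiset.elements(), 1)

ms = [prime_multiset(n) for n in (56, 9, 250)]
print(product(reduce(lambda x, y: x | y, ms)))  # union -> lcm = 63000
print(product(reduce(lambda x, y: x & y, ms)))  # intersection -> hcf = 1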
{"url":"http://www.mathisfunforum.com/post.php?tid=18774&qid=247827","timestamp":"2014-04-20T08:31:10Z","content_type":null,"content_length":"27944","record_id":"<urn:uuid:4cf40e3b-3b71-4b52-aa53-230705ce7c00>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project

Ford Circles

The Farey sequence F_n for a positive integer n is the set of irreducible rational numbers a/b with 0 ≤ a/b ≤ 1 and 1 ≤ b ≤ n. For example, F_3 is {0/1, 1/3, 1/2, 2/3, 1/1}. The Ford circle C(a/b) has radius 1/(2b^2) and center (a/b, 1/(2b^2)). It is tangent to the x axis at a/b and to the circles corresponding to the two neighbors of a/b in F_n.
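A minimal sketch of these definitions in code (my own, not part of the demonstration): generate F_n and the Ford circle data for each fraction.

from math import gcd

def farey(n):
    fracs = {(a, b) for b in range(1, n + 1)
             for a in range(0, b + 1) if gcd(a, b) == 1}
    return sorted(fracs, key=lambda f: f[0] / f[1])

def ford_circle(a, b):
    # Center (a/b, 1/(2b^2)) and radius 1/(2b^2).
    r = 1 / (2 * b * b)
    return (a / b, r), r

for a, b in farey(3):
    (cx, cy), r = ford_circle(a, b)
    print(f"{a}/{b}: center=({cx:.4f}, {cy:.4f}), radius={r:.4f}")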
{"url":"http://demonstrations.wolfram.com/FordCircles/","timestamp":"2014-04-18T00:16:26Z","content_type":null,"content_length":"43593","record_id":"<urn:uuid:4e331616-4a16-4613-b53e-412d2d988f1f>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
High-Scoring Molecules

What makes for a good folding? In proteins the usual measure is the Gibbs free energy, a thermodynamic quantity that depends on both energy and entropy. If you could tug on the ends of a protein chain and straighten it out, the result would be a state of high energy and low entropy. The energy is high because amino acids that "want" to be close together are held at a distance; the entropy is low because the straight chain is a highly ordered configuration. When you let go, the chain springs back into a shape with lower energy and higher entropy, changes that translate into a lower value of the Gibbs free energy. The "native" state of a protein—the folding it adopts under natural conditions—is usually assumed to be the state with the lowest possible free energy.

Prototeins can get along with a simpler folding criterion. Standard practice is to rank foldings simply by counting H-H contacts. It's more like keeping score than measuring energy. If the H's are viewed as analogues of hydrophobic amino acids, the scoring system reflects the tendency of hydrophobic groups to seek shelter from water. But the prototein model is so abstract that it doesn't really matter what kind of force is at play between the H's. Just say that H's are sticky, and it takes energy to pull them apart.

One strategy for finding good folds, then, is to look for configurations that maximize the number of H-H contacts. A program to carry out the search runs through all the foldings of all the sequences of a given length, keeping only those foldings with the maximum number of contacts.

How many contacts are possible in a folded prototein? A little doodling on graph paper shows that the highest possible ratio of contacts to H's is 7:6. Sequences that attain this limit are exceedingly rare. (I leave it as a puzzle for the reader to find the shortest such sequence, which I believe has 26 beads.) But proteins are not required to solve such mathematical puzzles. To find the stablest configurations of a given sequence, all you need do is find the foldings that have more H-H contacts than any other foldings of the same sequence, whether or not the number of contacts is the theoretical maximum.

There is a shortcut for identifying these stable foldings. It begins with the sequence made up entirely of H's, which is rather like double-sided sticky tape that collapses on itself in a crumpled ball. If any sequence at all has a folding with a given number of H-H contacts, then that configuration must also be among the stablest foldings of the all-H sequence. In the all-H folding, however, some of the H's may not form contacts, and so they can be changed to P's without altering the score of the folding. By making all such substitutions, you recover the sequence with the minimum number of H's that can give rise to a given folding.

Sequences with rigid, heavily cross-linked folds are fairly rare. Among chains with 21 beads the maximum number of H-H contacts is 12, and a chain must have at least 14 H's to reach this limit. There are only 80 sequences of 14 H's and 7 P's that produce 12 contacts, out of the universe of more than two million 21-bead sequences. Figure 1 shows some of the 80 maximally cross-linked 21-bead prototeins, along with a few other foldings chosen at random. The two populations of molecules are very different. The randomly chosen configurations tend to be loose and floppy, and their average number of H-H contacts works out to less than 1.
The highest-scoring folds, in contrast, are all very compact, with the chain either wound around itself in a spiral shape or folded into zigzags. A lifelike feature of the compact foldings is a tendency for the H's to congregate in the interior of the molecule, leaving the P's exposed on the surface. The model has no explicit rule favoring the formation of such a hydrophobic core; it happens automatically when you select foldings with numerous H-H contacts.

In this connection, Dill points out that for short prototein chains a two-dimensional lattice model may be more realistic than a three-dimensional one. The reason is that the perimeter-to-area ratio of a short chain in two dimensions approximates the surface-to-volume ratio of a longer chain in three dimensions.

Not all features of the high-scoring prototein foldings inspire confidence in the model's realism. For example, a disproportionate number of the best sequences have H's at both ends, and these molecules tend to fold up with their ends tucked into the hydrophobic core. The reason is easy to see: An H at the end of a chain can participate in three contacts, whereas interior H's can have no more than two. But the sticky-end effect is an artifact of the model; there is no comparable phenomenon in real proteins.

Another peculiarity can be traced to the choice of a square lattice. Two H's on a square lattice can form a contact only if they are separated within the prototein sequence by an even number of intervening beads. As a result, every prototein can be divided into odd and even subsequences that do not interact. No such parity effect is seen in proteins. This failure of realism is unfortunate; on the other hand, the segregation of odd and even sublattices allows some very handy optimizations in a simulation program.
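As a toy illustration of the scoring rule (my own sketch, not code from the article): given an HP sequence and a self-avoiding walk on the square lattice, count the H-H contacts between beads that are lattice neighbours but not chain neighbours.

def hh_contacts(sequence, coords):
    """sequence: string of 'H'/'P'; coords: list of (x, y) lattice points."""
    pos = {c: i for i, c in enumerate(coords)}
    assert len(pos) == len(coords), "walk must be self-avoiding"
    score = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != 'H':
            continue
        # Checking only the right and upper neighbours counts each pair once.
        for nb in ((x + 1, y), (x, y + 1)):
            j = pos.get(nb)
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                score += 1
    return score

# A 2x2 square fold of 'HHHH' has exactly one non-adjacent H-H contact.
print(hh_contacts('HHHH', [(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1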
{"url":"http://www.americanscientist.org/issues/pub/prototeins/4","timestamp":"2014-04-20T05:47:22Z","content_type":null,"content_length":"128846","record_id":"<urn:uuid:12a35163-54ec-4cd2-9019-eac38a326f45>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00240-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics Activities for Elementary School Teachers, 10th Edition | 9780321575685 | eCampus.com

This is the 10th edition, with a publication date of 1/6/2009.

This revised edition features activities that can be used to develop, reinforce, and/or apply mathematical concepts. Activities can also be adapted for use with elementary students at a later time. They are closely keyed to the text and are ordered by the topics and skills within each chapter. References to these activities are in the margin of the main text.

Table of Contents

An Introduction to Problem Solving
When You Don't Know What to Do
What's the Last Card?
An Ancient Game
Magic Number Tricks
Missing Persons Report
Who Dunit?
What's the Number?
Eliminate the Impossible
Candies, Couples, and a Quiz
Ten People in a Canoe
Pictorial Patterns
Number Patterns
Fascinating Fibonacci
What's the Rule?
Polygons and Diagonals
Chapter Summary

Numeration Systems and Sets
Regrouping Numbers
Find the Missing Numbers
What's in the Loops?
Loop de Loops
Chapter Summary

Whole Numbers and Their Operations
It All Adds Up
What's the Difference?
A Visit to Fouria
Multiplication Arrays
Find the Missing Factor
How Many Cookies?
Multi-digit Multiplication
Division Array
Multi-digit Division
Three Strikes and You're Out
Target Number
Chapter Summary

Algebraic Thinking
Patterns and Expressions
The Function Machine
What's My Function?
Graphs and Stories
Chapter Summary

Integers and Number Theory
Charged Particles
Coin Counters
Addition Patterns
Subtracting Integers
A Clown on a Tightrope
Subtraction Patterns
Multiplication and Division Patterns
Integer (x) and (/)
Contig
Great Divide Game
A Square Experiment
A Sieve of Another Sort
The Factor Game
How Many Factors?
Interesting Numbers
Tiling with Squares
Pool Factors
Chapter Summary

Rational Numbers as Fractions
What is a Fraction?
Equivalent Fractions
Fraction War
How Big Is It?
What Comes First?
Square Fractions
Adding and Subtracting Fractions
Multiplying Fractions
Dividing Fractions
The King's Problem
Paper Powers
Chapter Summary

Decimals and Real Numbers
What's My Name?
Who's First?
Race for the Flat
Empty the Board
Decimal Arrays
Decimal Multiplication
Dice and Decimals
Target Number Revisited
Repeating Decimals
Chapter Summary

Proportional Reasoning, Percents, and Applications
Professors Short and Tall
People Proportions
A Call to the Border
What is Percent?
What Does It Mean?
Chapter Summary

What Are the Chances?
Theoretical Probability
The Spinner Game
Numbers That Predict
It's In The Bag
What's The Distribution?
Simulate It
The Mystery Cube
How Many Arrangements
Pascal's Probabilities
Chapter Summary

Statistics: The Interpretation of Data
Graphing m&m's
Grouped Data
What's the Average?
The Weather Report
Finger-Snapping Time
Gaps and Clusters
Are Women Catching Up?
Populations and Samples
To Change or Not to Change
What's in the Bag?
Chapter Summary

Introductory Geometry
What's the Angle?
Inside or Outside?
Name that Polygon
Triangle Properties-Angles
Angles on Pattern Blocks
Sum of Interior/Exterior Angles
Stars and Angles
Spatial Visualization
A View from the Top
Map Coloring
Chapter Summary

Constructions, Congruence, and Similarity
Triangle Properties-Sides
To Be or Not to Be Congruent?
Paper Folding Construction
Pattern Block Similarity
Similar Triangles
Side Splitter Theorem
Outdoor Geometry
Exploring Linear Functions
Chapter Summary

Concepts of Measurement
Units of Measure
Regular Polygons in a Row
What's My Area?
Areas of Polygons
Pick's Theorem
Graphing Rectangles
From Rectangles to Parallelograms
From Parallelograms to Triangles
From Parallelograms to Trapezoids
Pythagorean Puzzles
Right or Not?
Surface Area
Volume of a Rectangular Solid
Pyramids and Cones
Compare Volume to Surface Area
Chapter Summary

Motion Geometry and Tessellations
Glide Reflections
Size Transformations
Cut It Out
Draw It
Chapter Summary

Table of Contents provided by Publisher. All Rights Reserved.
{"url":"http://www.ecampus.com/mathematics-activities-elementary-school/bk/9780321575685","timestamp":"2014-04-21T15:47:09Z","content_type":null,"content_length":"53674","record_id":"<urn:uuid:8d6dea4b-d2e8-431d-823f-72f6c117da47>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
From datasets to algorithms in R

Many statistical algorithms are taught and implemented in terms of linear algebra. Statistical packages often borrow heavily from optimized linear algebra libraries such as LINPACK, LAPACK, or BLAS. When implementing these algorithms in systems such as Octave or MATLAB, it is up to you to translate the data from the use-case terms (factors, categories, numerical variables) into matrices. In R, much of the heavy lifting is done for you through the formula interface. Formulas resemble y ~ x1 + x2 + …, and are defined in relation to a data.frame. There are a few features that make this very powerful:

• You can specify transformations automatically. For example, you can do y ~ log(x1) + x2 + … just as easily.
• You can specify interactions and nesting.
• You can automatically create a numerical matrix for a formula using model.matrix(formula).
• Formulas can be updated through the update() function.

Recently, I wanted to create predictions via the Bayesian model averaging method (bma library on CRAN), but did not see where the authors implemented it. However, it was very easy to create a function that does this:

predict.bic.glm <- function(bma.fit, new.data, inv.link=plogis) {
  # predict.bic.glm
  # Purpose: predict new values from a bma fit with values in a new dataframe
  # Arguments:
  #   bma.fit  - an object fit by bic.glm using the formula method
  #   new.data - a data frame, which must have variables with the same names as the independent
  #              variables as specified in the formula of bma.fit
  #              (it does not need the dependent variable, and ignores it if present)
  #   inv.link - a vectorized function representing the inverse of the link function
  # Returns:
  #   a vector of length nrow(new.data) with the conditional probabilities of the dependent
  #   variable being 1 or TRUE
  # TODO: make inv.link not be specified, but rather extracted from glm.family of bma.fit$call
  form <- formula(bma.fit$call$f)[-2] # extract formula from the call of the bic.glm fit, drop dep var
  des.matrix <- model.matrix(form, new.data)
  pred <- des.matrix %*% matrix(bma.fit$postmean, nc=1)
  pred <- inv.link(pred)
  pred
}

The first task of the function is to find the formula that was used in the call to bic.glm(); the [-2] subscripting removes the dependent variable. Then the model.matrix() function creates a matrix of predictors from the original formula (minus the dependent variable) and the new data. The power here is that if I had interactions or transformations in the original call to bic.glm(), they are replicated automatically on the new data, without my having to parse them by hand. With a new design matrix and a vector of coefficients (in this case, the expectation of the coefficients over the posterior distributions of the models), it is easy to calculate the conditional probabilities.

In short, the formula interface makes it easy to translate from the use-case terms (factors, variables, dependent variables, etc.) into the linear algebra terms in which algorithms are implemented. I've only scratched the surface here, but it is worth investing some time into formulas and their manipulation if you intend to implement any algorithms in R.
{"url":"http://realizationsinbiostatistics.blogspot.com/2011/12/from-datasets-to-algorithms-in-r.html","timestamp":"2014-04-16T11:15:01Z","content_type":null,"content_length":"154486","record_id":"<urn:uuid:0ac9da9b-2ae0-483a-aaae-d6fabdbb99ec>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry Tutors

Oakland, CA 94602

Math tutor with extensive experience -- future math teacher

...I have worked freelance, for DVC in Pleasant Hill (at their math lab), and for UC Santa Cruz (as a learning assistant). I have tutored pre-algebra, algebra, statistics, trigonometry, calculus, linear algebra, discrete math, proofs, and logic (among other...

Offering 10+ subjects including geometry
{"url":"http://www.wyzant.com/Moraga_Geometry_tutors.aspx","timestamp":"2014-04-18T16:17:36Z","content_type":null,"content_length":"58990","record_id":"<urn:uuid:f2a27b6e-25cc-4d59-9666-f4077faec0a8>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Investigating Ordering Planets: Math Connections and Number Sense in Science

In this Solar System investigation, students will determine ways to order the planets. Teacher-directed inquiry will suggest that they first order the planets according to their distance from the sun. Students will then work on their own methods of determining "order."

Learning Goals

Learning goals that should be achieved are: data analysis, model development, and questioning. Students will hopefully begin to develop an understanding of the importance of data order and number sense, and of how these concepts work outside of their math curriculum and spill into science. Students should also gain an appreciation for planetary distances and scale.

Context for Use

For this set of activities you will need approximately 50 minutes. Class size should range from 15-30 students, and the activity can be completed in the classroom.

You will need (per student):
Planet Data Chart
Meter stick
Adding machine tape (1 roll per 2-4 students)

Prior knowledge for this activity should include:
Ordering Numbers/Number Sense
Basic Measuring Skills
Understanding "Scale"
Understanding the difference between a planet and a star
Understanding of gravity and orbit would also be helpful

Subject: Geoscience, Physics:Astronomy:Solar System, Mathematics
Resource Type: Activities:Classroom Activity
Grade Level: Intermediate (3-5)

Description and Teaching Materials

Planet Data Chart (Microsoft Word 76kB Jul26 07)

These activities are taken directly from Harcourt School Publishers, "Science," Third Grade Level Teacher Edition, p. 353-355/Lesson 3. ISBN: 0-15-343567 (www.harcourtschool.com)

This lesson is designed to develop a deeper understanding of planetary distances based on ordering of numbers. Students are given a copy of a Planet Data Chart which contains an alphabetical listing of the planets. Once the list is distributed, the teacher can guide inquiry by asking: "Can you tell me how these planets are 'ordered' on the page?" Allow students time to examine the data on the chart (approx. 3-5 minutes).

Students should each be given a piece of graph or lined paper, a pencil, and a ruler (to use as a guide for drawing straight lines on their paper). Using just the data in the first column on the Planet Data Chart, ask students to list the planets in order by their distance from the sun, from closest to farthest. Students will then draw conclusions and answer the following questions:

1. Which planet is closest to the sun?
2. Which planet is farthest from the sun?
4. How many planets are between Earth and the sun?
5. Which planets are Earth's nearest neighbors? How can you tell?

Students should now move into groups and work on the following inquiry skill: Scientists sometimes use numbers to put things in order. List other ways you could order the planets using the Planet Data Chart. Collaborate and list on the board or overhead the student-generated ideas.

To help students better understand the immense scale of these numbers, and of the solar system, keep the students in their small (2-4) discussion groups and provide each group with adding machine tape and a meter stick. List the following data on the board:

Earth: 1
Saturn: 10
Uranus: 20
Neptune: 30

Explain that the data show the approximate distance from the sun to each planet in AUs. You will need to explain that an AU is the distance from the sun to Earth. The abbreviation "AU" stands for Astronomical Unit. Have students label one end of a 50 cm length of adding machine tape "sun."
Using the scale 1 cm = 1 AU, have students draw a scale model of the distances of the planets.

Teaching Notes and Tips

Scale can be a difficult concept for students. Try a variety of methods, such as Styrofoam balls of multiple sizes, to demonstrate scale (these can be quite costly and are easily damaged). The adding machine tape works well, especially in confined classroom space.

Teacher observation of individual and group work:
Students will provide a written list of planets in their order from closest to farthest from the sun.
Students can demonstrate/share their understanding of number sense/order by providing oral clues to the teacher to put on the board/overhead.
Student groups will turn in a completed adding machine tape showing the scale of distance between planets, beginning from the sun.

3.III.C.2 (The Universe)
3.I.A.1 (Scientific World View)
3.I.B.1 (Scientific Inquiry)

References and Resources
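For teachers who want to precompute the tape marks, here is a small helper (my own addition; the AU values and the 1 cm = 1 AU scale are the ones listed above):

distances_au = {"Earth": 1, "Saturn": 10, "Uranus": 20, "Neptune": 30}
CM_PER_AU = 1  # the activity's scale: 1 cm of tape per astronomical unit

for planet, au in distances_au.items():
    print(f"{planet}: mark the tape {au * CM_PER_AU} cm from the 'sun' end")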
{"url":"http://serc.carleton.edu/sp/mnstep/activities/planets.html","timestamp":"2014-04-20T01:32:43Z","content_type":null,"content_length":"24417","record_id":"<urn:uuid:3dca2854-e612-4ff6-8e7d-45154243ead7>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
4. The Parabola

Why study the parabola?

The parabola has many applications in situations where:

• Radiation needs to be concentrated at one point (e.g. radio telescopes, pay TV dishes, solar radiation collectors), or
• Radiation needs to be transmitted from a single point into a wide parallel beam (e.g. headlight reflectors).

Here is an animation showing how parallel radio waves are collected by a parabolic antenna.

Definition of a Parabola

The parabola is defined as the locus of a point which moves so that it is always the same distance from a fixed point (called the focus) and a given line (called the directrix). [The word locus means the set of points satisfying a given condition. See some background in Distance from a Point to a Line.]

In the following graph,

• The focus of the parabola is at `(0, p)`.
• The directrix is the line `y = -p`.
• The focal distance is `|p|` (the distance from the origin to the focus, and from the origin to the directrix. We take the absolute value because distance is positive.)
• The point (x, y) represents any point on the curve.
• The distance d from any point (x, y) to the focus `(0, p)` is the same as the distance from (x, y) to the directrix.

The Formula for a Parabola - Vertical Axis

Adding to our diagram from above, we see that the distance `d = y + p`. Now, using the Distance Formula on the general points `(0, p)` and `(x, y)`, and equating it to our value `d = y + p`, we have

√((x − 0)^2 + (y − p)^2) = y + p

Squaring both sides gives:

(x − 0)^2 + (y − p)^2 = (y + p)^2

Simplifying gives us the formula for a parabola:

x^2 = 4py

In more familiar form, with "y = " on the left, we can write this as:

y = x^2/(4p)

where p is the focal distance of the parabola.

Now let's see what "the locus of points equidistant from a point to a line" means. Each of the colour-coded line segments is the same length in this spider-like graph:

Don't miss Interactive Parabola Graphs, where you can explore concepts like focus, directrix and vertex.

Example - Parabola with Vertical Axis

Sketch the parabola. Find the focal length and indicate the focus and the directrix on your graph.

Arch Bridges − Almost Parabolic

The Gladesville Bridge in Sydney, Australia was the longest single-span concrete arched bridge in the world when it was constructed in 1964. The shape of the arch is almost parabolic, as you can see in this image with a superimposed graph of y = −x^2 (the negative means the legs of the parabola face downwards). [Actually, such bridges are normally in the shape of a catenary, but that is beyond the scope of this chapter. See Is the Gateway Arch a Parabola?]

Parabolas with Horizontal Axis

We can also have the situation where the axis of the parabola is horizontal. In this case, we have the relation:

y^2 = 4px

[In a relation, there are two or more values of y for each value of x. On the other hand, a function only has one value of y for each value of x.]

Example - Parabola with Horizontal Axis

Sketch the curve and find the equation of the parabola with focus (-2,0) and directrix x = 2.

Shifting the Vertex of a Parabola from the Origin

This is a similar concept to the case when we shifted the centre of a circle from the origin. To shift the vertex of a parabola from (0, 0) to (h, k), each x in the equation becomes (x − h) and each y becomes (y − k). So if the axis of a parabola is vertical, and the vertex is at (h, k), we have

(x − h)^2 = 4p(y − k)

If the axis of a parabola is horizontal, and the vertex is at (h, k), the equation becomes

(y − k)^2 = 4p(x − h)

1. Sketch `x^2 = 14y`
2. Find the equation of the parabola having vertex (0,0), axis along the x-axis and passing through (2,-1).

3. We found above that the equation of the parabola with vertex (h, k) and axis parallel to the y-axis is `(x − h)^2 = 4p(y − k)`. Sketch the parabola for which `(h, k)` is `(-1,2)` and `p = -3`.

Helpful article and graph interactives

See also: How to draw y^2 = x − 2?, which has an extensive explanation of how to manipulate parabola graphs, depending on the formula given. Also, don't miss Interactive Parabola Graphs, where you can explore parabolas by moving them around and changing parameters.

Applications of Parabolas

Application 1 - Antennas

A parabolic antenna has a cross-section of width 12 m and depth of 2 m. Where should the receiver be placed for best reception?

Application 2 - Projectiles

A golf ball is dropped and a regular strobe light illustrates its motion as follows... We observe that it is a parabola. (Well, very close.) What is the equation of the parabola that the golf ball is tracing out?

Conic section: Parabola

All of the graphs in this chapter are examples of conic sections. This means we can obtain each shape by slicing a cone at different angles. How can we obtain a parabola from slicing a cone? We start with a double cone (2 right circular cones placed apex to apex). If we slice a cone parallel to the slant edge of the cone, the resulting shape is a parabola, as shown.
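A quick numerical check of Application 1 (my own sketch, not the lesson's worked solution): model the dish cross-section as x^2 = 4py with the vertex at the origin, so the rim passes through (6, 2) (half of the 12 m width, at 2 m depth) and the focal distance follows from 36 = 8p.

half_width = 12 / 2   # m, half of the 12 m dish width
depth = 2             # m
p = half_width ** 2 / (4 * depth)   # from x^2 = 4py at the rim point (6, 2)
print(p)  # 4.5: place the receiver 4.5 m above the vertex, on the axis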
{"url":"http://www.intmath.com/plane-analytic-geometry/4-parabola.php","timestamp":"2014-04-18T10:35:48Z","content_type":null,"content_length":"33182","record_id":"<urn:uuid:e19fd93b-6642-4b00-bee6-f226610766e6>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
Object moving at speed of light as reference frame

Is there any other object except the photon which moves at the speed of light?

Yes, other massless bosons.

Why can't an object moving at the speed of light be taken as a reference frame?

Your question has been answered quite well here. Also, you might consider the problem in the context of space-time diagrams (google it, or find discussions of space-time diagrams in other posts). The sketches below show a sequence in which an observer (blue frames of reference) moves at ever greater relativistic velocities with respect to a rest frame (black perpendicular coordinates). One aspect of the photon (or any massless boson) that makes it so special is that its worldline always bisects the angle between the time axis and the spatial axis for any observer, no matter what the observer's speed (thus, the speed of light is the same for all observers).

Notice in the sequence that the moving observer's X4 and X1 axes rotate toward each other, getting closer and closer to each other as the speed of light is approached. In the limit, the X4 axis and the X1 axis overlay each other. So, if the observer were actually moving at the speed of light, both his time axis and his spatial axis would be colinear with the photon worldline. How would you define that as a coordinate system?
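A quick numerical illustration of the closing point (my own sketch, using units where c = 1): in the rest frame, the moving observer's time axis is the line x = βt and the spatial axis is the line t = βx, so the two slopes 1/β and β both approach 1, the photon worldline, as β → 1.

for beta in (0.5, 0.9, 0.99, 0.999):
    print(f"beta={beta}: time-axis slope dt/dx = {1/beta:.3f}, "
          f"space-axis slope dt/dx = {beta:.3f}")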
{"url":"http://www.physicsforums.com/showthread.php?s=fa205bd0ed2dce7c796c6f3aa71c72c7&p=3950674","timestamp":"2014-04-21T02:10:18Z","content_type":null,"content_length":"65292","record_id":"<urn:uuid:a773887d-d2c7-4101-8e61-0c8e2b34a801>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
Pi: a biography of the world's most mysterious number

We all learned that the ratio of the circumference of a circle to its diameter is called π and that the value of this algebraic symbol is roughly 3.14. What we weren't told, though, is that behind this seemingly mundane fact is a world of mystery, which has fascinated mathematicians from ancient times to the present. Mathematicians call it a "transcendental number" because its value cannot be calculated by any combination of addition, subtraction, multiplication, division, and square root extraction. This elusive nature has led intrepid investigators over the years to attempt ever-closer approximations. In 2002, a Japanese professor using a supercomputer calculated the value to 1.24 trillion decimal places! Nonetheless, in this huge string of decimals there is no periodic repetition.

In this delightful layperson's introduction to one of math's most interesting phenomena, Drs. Posamentier and Lehmann review π's history from prebiblical times to the 21st century, the many amusing and mind-boggling ways of estimating π over the centuries, quirky examples of obsessing about π (including an attempt to legislate its exact value), and useful applications of π in everyday life, including statistics. This enlightening and stimulating approach to mathematics will entertain lay readers while improving their mathematical literacy.

Review: Pi: A Biography of the World's Most Mysterious Number
User Review - Nick - Goodreads
Never made it all the way through this, but hope to finish it someday... I love this stuff.

Review: Pi: A Biography of the World's Most Mysterious Number
User Review - Ed - Goodreads
I'm no master of maths, but numbers and mathematics make for fascinating material when given the proper treatment. This book succeeded in illuminating that mysterious number that is found in every circle.

Acknowledgments 7
What Is π? 13
The History of π 41
9 other sections not shown
{"url":"http://books.google.co.uk/books?id=QFPvAAAAMAAJ&q=Pi%3A+a+biography+of+the+world's+most+mysterious+number&dq=Pi%3A+a+biography+of+the+world's+most+mysterious+number&hl=en&ei=oN7PTIKkItTR4gaM6-SiBg&sa=X&oi=book_result&ct=result&resnum=1&ved=0CC0Q6AEwAABlockquote","timestamp":"2014-04-18T05:38:04Z","content_type":null,"content_length":"153282","record_id":"<urn:uuid:7f07a710-fd75-45e4-8858-866d61379299>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
Heat Transfer problem

Hey guys. I've been cracking my head over this question all day to no avail. I mean, I have no idea if my answer is correct or wrong. And I desperately have to get it done by tomorrow, and it's already midnight here, so I would really appreciate any sort of help I can get.

Here are the doubts that I'd like to clear.

1. Should it be counter flow or parallel flow? (I chose counter flow and I can't justify why.)
2. Should the water be flowing in the inner or outer tube, and why? (I chose inner and I can't justify that either.)
3. Is the term for the inner tube "annulus"?
4. Is the final answer obtained 6 sections?

Thanks in advance!!

This is a problem that requires more work than you did. Maybe you can learn a little by doing it in all the different configurations and seeing what you get. I think your choice of counter flow is going to be best, but it isn't clear whether it would be better to have the water on the inside or outside. Strictly from an insulation point of view, of course, it would be better to have the water on the outside.
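One way to make the counter-flow vs. parallel-flow comparison concrete is the log-mean temperature difference; here is a sketch (all temperatures are made-up placeholders, not the assignment's data). Since the duty is Q = U·A·ΔT_lm, the larger counter-flow ΔT_lm means less area for the same duty, hence fewer sections.

import math

def lmtd(dT1, dT2):
    # log-mean temperature difference between the two ends of the exchanger
    if math.isclose(dT1, dT2):
        return dT1
    return (dT1 - dT2) / math.log(dT1 / dT2)

Th_in, Th_out = 90.0, 60.0  # hot stream, deg C (placeholder values)
Tc_in, Tc_out = 20.0, 50.0  # cold stream, deg C (placeholder values)

counter = lmtd(Th_in - Tc_out, Th_out - Tc_in)   # end differences 40 and 40 -> 40.0
parallel = lmtd(Th_in - Tc_in, Th_out - Tc_out)  # end differences 70 and 10 -> about 30.8
print(counter, parallel)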
{"url":"http://www.physicsforums.com/showthread.php?s=b8d19007b5733d2ddc83ea67d195021d&p=4531838","timestamp":"2014-04-25T00:14:25Z","content_type":null,"content_length":"24939","record_id":"<urn:uuid:79649d7c-b179-49f6-9f4c-6b039e2fb40d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
category-extras-0.53.6: Various modules and constructs inspired by category theory

Control.Comonad.HigherOrder

Portability: non-portable (rank-2 polymorphism)
Stability: experimental
Maintainer: Edward Kmett <ekmett@gmail.com>

Extending Neil Ghani and Patricia Johann's HFunctor to higher order comonads.

class HFunctor f where
  ffmap :: Functor g => (a -> b) -> f g a -> f g b
  hfmap :: (g :~> h) -> f g :~> f h

class HFunctor w => HCopointed w where
  hextract :: Functor f => w f a -> f a

class HCopointed w => HComonad w where
  hextend :: (Functor f, Functor g) => (w f :~> g) -> w f :~> w g

hduplicate :: (HComonad w, Functor (w g), Functor g) => w g :~> w (w g)

Produced by Haddock version 2.1.0
{"url":"http://comonad.com/haskell/category-extras/dist/doc/html/category-extras/Control-Comonad-HigherOrder.html","timestamp":"2014-04-17T18:24:14Z","content_type":null,"content_length":"8423","record_id":"<urn:uuid:b5037a9e-0f1f-4a35-9362-c57647dceb04>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
DSP; Difference equation from a transfer function

March 2nd 2009, 09:06 AM #1
Mar 2009

This is a question relating to DSP. I wasn't sure what section to put it in (move it if you so wish).

I have the transfer function:

H(z) = (0.5 − 0.3z^(-1)) / (1 + 0.25z^(-1))

I need to obtain a difference equation (i.e., in the form y(n) = ...).

My problem is I can't find, in textbooks or on the web, a simple explanation for converting said transfer function to a difference equation. (Please note this is a difference equation, not a differential equation.) I found the following link to be the most useful thing so far (if you explore the site a bit), but I still don't understand how the process is done.
[ http://ccrma.stanford.edu/~jos/fp3/Z_Transform_Difference_Equations.html ]

Thanks for any help anyone can provide,

$H(z)=\frac{0.5-0.3z^{-1}}{1+0.25 z^{-1}}$

$(1+0.25 z^{-1})y(n)=(0.5-0.3z^{-1})x(n)$

That is, $z^{-1}$ may be considered to be the unit delay operator, so this expands to

$y(n)+0.25y(n-1)=0.5x(n)-0.3x(n-1)$,

or $y(n)=0.5x(n)-0.3x(n-1)-0.25y(n-1)$.

Thanks for that, makes things a bit clearer. I'll keep staring at it, it'll click soon.

DSP: difference equation to Z-domain transfer function to bode plot

I've implemented a filter in a DSP and now wish to get its bode plot. I need some help determining first the z-domain transfer function (I've done this once but would like a second opinion), and then I would like to get a bode plot from that transfer function. How do I do this?
Thanks
Dave

The difference equation is:

y(n) = [ intError(n-1) + { Igain }{ x(n) } ] + [ { Pgain }{ x(n) } ]
intError(n) = [ intError(n-1) + { Igain }{ x(n) } ]

Igain is a constant
Pgain is a constant

March 2nd 2009, 12:15 PM #2
Grand Panjandrum
Nov 2005
March 3rd 2009, 12:02 PM #3
Mar 2009
March 4th 2009, 06:53 PM #4
Mar 2009
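A quick check of the result (the test signal is my own): implement y(n) = 0.5x(n) − 0.3x(n−1) − 0.25y(n−1) directly and compare it with scipy's lfilter, whose coefficient vectors b = [0.5, −0.3] and a = [1, 0.25] encode the same transfer function.

import numpy as np
from scipy import signal

def difference_eq(x):
    # y(n) = 0.5*x(n) - 0.3*x(n-1) - 0.25*y(n-1), zero initial conditions
    y = np.zeros(len(x))
    for n in range(len(x)):
        xm1 = x[n - 1] if n > 0 else 0.0
        ym1 = y[n - 1] if n > 0 else 0.0
        y[n] = 0.5 * x[n] - 0.3 * xm1 - 0.25 * ym1
    return y

x = np.random.randn(16)
print(np.allclose(difference_eq(x), signal.lfilter([0.5, -0.3], [1, 0.25], x)))  # True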
{"url":"http://mathhelpforum.com/differential-equations/76532-dsp-difference-equation-transfer-function.html","timestamp":"2014-04-20T01:45:25Z","content_type":null,"content_length":"43862","record_id":"<urn:uuid:140001c4-64f9-4f28-b56c-72605fd13959>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US6185595 - Discrete cosine transformation operation circuit

The present invention relates to a technique for processing image data and a technique for decompressing the compressed image data and, more particularly, to a technique which is useful when applied to a discrete cosine transformation operation circuit in the JPEG (Joint Photographic Experts Group) system or the MPEG (Motion Picture Experts Group) system.

In the prior art, the discrete cosine transformation (DCT) operation circuit is equipped with a plurality of multipliers for multiplying input data and DCT transformation coefficients. In the case of a DCT matrix operation composed of 8 rows and 8 columns (8×8), as generally used in the JPEG system or the MPEG system, for example, there are provided eight multipliers. However, each multiplier is equipped with a number of gates, raising the drawback that the gate scale of the entire operation circuit is enlarged. In addition, the operating frequency of the multipliers is equal to the frequency of the outputting timing of the DCT operation result (which is equal to the frequency of the inputting timing of the input data), and the product of their ratio (i.e., "1") and the number of multipliers is as large as "8", raising another drawback that the power consumption is increased. Specifically, the multipliers are driven with a high operating frequency, so that they consume a high power when driven. The number of multipliers is as large as eight in the prior art, so that the power consumption is further increased.

As a measure for improvement, there is disclosed, in Japanese Patent Laid-Open No. 4-280368, a DCT operation circuit which is constructed by using two multipliers operated with a frequency four times as high as that of the inputting timing of the data inputted to the DCT operation circuit. In this DCT operation circuit, for example, an input data x[11] is multiplied by DCT transformation coefficients d[11], d[21], d[31] and d[41] by one multiplier, and by DCT transformation coefficients d[51], d[61], d[71] and d[81] by the other multiplier, thereby obtaining four multiplication results x[11]d[11], x[11]d[21], x[11]d[31] and x[11]d[41], and four multiplication results x[11]d[51], x[11]d[61], x[11]d[71] and x[11]d[81]. The eight multiplication results thus obtained are stored in eight registers.

Another input data x[21] is also multiplied by DCT transformation coefficients d[12], d[22], d[32] and d[42] by one multiplier and by DCT transformation coefficients d[52], d[62], d[72] and d[82] by the other multiplier, thereby obtaining four multiplication results x[21]d[12], x[21]d[22], x[21]d[32] and x[21]d[42], and four multiplication results x[21]d[52], x[21]d[62], x[21]d[72] and x[21]d[82]. The eight multiplication results thus obtained are stored in eight registers. Moreover, the eight multiplication results x[21]d[12], x[21]d[22], x[21]d[32], x[21]d[42], x[21]d[52], x[21]d[62], x[21]d[72] and x[21]d[82] and the preceding eight multiplication results x[11]d[11], x[11]d[21], x[11]d[31], x[11]d[41], x[11]d[51], x[11]d[61], x[11]d[71] and x[11]d[81] read out from the foregoing eight registers are added by adders, and the addition results are stored again in the aforementioned eight registers. By repeating the operation composed of such multiplication and cumulative addition eight times, the elements y[11] to y[81] of the matrix are determined. By repeating the operation eight times, moreover, all the elements of the matrix are determined.
Thus, the one-dimensional 8×8 DCT matrix operation is ended.

In the DCT operation circuit disclosed in Japanese Patent Laid-Open No. 4-280368, however, two multipliers are used, and therefore improvement in the circuit scale is still needed. In other words, the number of multipliers is desirably reduced to one so that the circuit scale may be minimized. In the DCT operation circuit of the aforementioned Laid-Open, moreover, the multipliers are operated with a frequency four times as high as that of the inputting timing of the data inputted to the DCT operation circuit. As a result, the product of the ratio (hereinafter referred to as the "normalized frequency") of the operating frequency of the multipliers to the frequency of the inputting timing of the data and the number of multipliers is "8", and no improvement has been made in the power consumption. In order to reduce the power consumption, the product of the normalized frequency and the number of multipliers is desired to be minimized as much as possible.

The invention has been made in view of these circumstances and has a main object to provide a discrete cosine transformation operation circuit whose power consumption is reduced by setting the product of the number of multipliers of a one-dimensional discrete cosine transformation operation circuit and the normalized frequency at 4, and whose circuit scale is reduced by setting the number of multipliers at 1.

The foregoing and other objects and novel features of the invention will become apparent from the following description to be made with reference to the accompanying drawings. The summary of representatives of the aspects of the invention to be disclosed herein will be described in the following.

In the discrete cosine transformation operation circuit of the invention, more specifically, there is provided one multiplier which is operated with a frequency four times as high as that of the inputting timing of the data to be inputted to the discrete cosine transformation operation circuit, to sequentially multiply the elements of the DCT transformation coefficients and the elements of the input data respectively. The multiplication results are added by the cumulative adder to determine a pair of cumulative addition results which correspond to the sums and differences of the paired elements of the data to be outputted from the discrete cosine transformation operation circuit. The operations for determining the paired elements of the output data, by adding and subtracting the cumulative addition results by the adder and the subtracter, are performed a number of times equal to one-half of the number of elements of the column of the matrix of the input data. All the elements of the matrix of the output data are determined by performing those operations a number of times equal to the number of elements of the row of the matrix of the input data.
The multiplication results are added as they are by the first cumulative adder, and the signs are alternately inverted to perform addition by the second cumulative adder specific times the number of which is one-half of the number of elements of the row of the matrix of the input data, and thereby to determine the elements of the column of the matrix of the output data. These operations are performed specific times the number of which is equal to the number of elements of the column of the matrix of the input data to determine all the elements of the matrix of the output data. Moreover, there are provided two multipliers to be operated with a frequency two times as high as that of the inputting timing of the data to be inputted to the discrete cosine transformation operation circuit, the DCT transformation coefficients are divided into two sets and stored in a ROM so that the respective multiplications of the elements of the sets of the DCT transformation coefficients and the elements of the input data may be simultaneously performed by the two multipliers. One or both of the discrete cosine transformation operation circuits are used to construct a two-dimensional discrete cosine transformation operation circuit comprising: a pair of one-dimensional discrete cosine transformation operation circuits; and an inverted RAM for performing the matrix operations, in which the elements of a row and the elements of a column of a matrix composed of operation results x[00], x[01], x[02], . . . received from the one-dimensional DCT operation circuit on the input side, are exchanged, to output the operation results x[00], x[10], x[20], . . . to the one-dimensional DCT operation circuit on the output side. According to the above-specified means, the product of the number of multipliers and the normalized frequency is 4 in the one-dimensional discrete cosine transformation operation circuit, so that the power consumption can be reduced. Thanks to the single multiplier, moreover, the scale of the discrete cosine transformation operation circuit is reduced. FIG. 1 is a block diagram showing a DCT operation circuit of a first embodiment schematically; FIG. 2 is a time chart showing a part of the operation timings of the DCT operation circuit; FIG. 3 is a block diagram showing a DCT operation circuit of a second embodiment schematically; FIG. 4 is a time chart showing a part of the operation timings of the DCT operation circuit; FIG. 5 is a block diagram showing a DCT operation circuit of a third embodiment schematically; FIG. 6 is a time chart showing a part of the operation timings of the DCT operation circuit; FIG. 7 is a block diagram showing a DCT operation circuit of a fourth embodiment schematically; FIG. 8 is a time chart showing a part of the operation timings of the DCT operation circuit; FIG. 9 is a block diagram showing a DCT operation circuit of a fifth embodiment schematically; and FIG. 10 is a time chart showing a part of the operation timings of the DCT operation circuit. FIG. 1 is a block diagram showing a discrete cosine transformation (DCT) operation circuit of a first embodiment schematically. In this DCT operation circuit 1, input data are inputted through a shift register 10, a hold register 11 and a multiplexer 12 to a multiplier 13, and DCT transformation coefficients, read out of a coefficient storage ROM 14, are inputted to the multiplier 13, so that those input data and the DCT transformation coefficients are multiplied. 
The results of multiplication are added by a cumulative adder 15 and are outputted through a demultiplexer 16 to an adder 17 and a subtracter 18, so that the output data, determined by the addition and subtraction, are outputted through registers 19A, 19B and 20 and a multiplexer 21. An address counter 23 is connected with the multiplexer 12 and the coefficient storage ROM 14, the DCT transformation coefficient corresponding to an address designated by incrementing the address counter 23 is outputted from the coefficient storage ROM 14, and the data corresponding to the address are outputted from the hold register 11 by the multiplexer 12. The input/output timings of the data in the individual registers 10, 11, 19A, 19B and 20, the multiplexers 12 and 21 and the cumulative adder 15 and the increment timing of the address counter 23 are controlled according to timing signals generated from a timing control unit 22. Here, FIG. 1 is a table prepared by using a reference clock CLK (frequency: φ0) inputted to the timing control unit 22 and the frequencies expressed using φ (φ is the frequency of the input timings of data to the shift register 10) of the timing signals outputted from the timing control unit 22 to the shift register 10, the hold register 11, the multiplexers 12 and 21, the cumulative adder 15, the registers 19A, 19B and 20 and the address counter 23. This DCT operation circuit 1 produces two DCT operation results, when one column of data of an input matrix are inputted to the multiplier 13, by exploiting the regularity of the DCT transformation The regularity of the DCT transformation coefficients will be described at first. For example, one-dimensional inverse DCT operations of 8×8 are expressed by a product of a matrix of the DCT transformation coefficients and an input coefficient matrix, as by Formula (1). $( x0 x1 x2 x3 x4 x5 x6 x7 ) = ( d a b c d e f g d c f - g - d - a - b - e d e - f - a - d g b c d g - b - e d c - f - a d - g - b e d - c - f a d - e - f a - d - g b - c d - c f g - d a - b e d - a b - c d - e f - g ) ( X0 X1 X2 X3 X4 X5 X6 X7 ) ( 1 )$ In this Formula, the 8×1 matrix on the lefthand side is the one-dimensional inverse DCT operation result, and the 8×8 matrix and the 8×1 matrix on the righthand side are the DCT transformation coefficients and the input data, respectively. Here, the coefficients a, b, c, d, e, f and g in the DCT transformation matrix are expressed as follows. a=cos (π/16)/{square root over (2)} b=cos (2π/16)/{square root over (2)} c=cos (3π/16)/{square root over (2)} d=cos (4π/16)/{square root over (2)} e=cos (5π/16)/{square root over (2)} f=cos (6π/16)/{square root over (2)} g=cos (7π/16)/{square root over (2)} Formula (1) can be transformed into Formula (2). The DCT operation circuit 1 produces the one-dimensional inverse DCT operation results by using the regularity of Formula (2). Here in Formula (1) and Formula (2), only the first column of the 8×8 matrix is expressed for the input data and the output data, but the second to eighth columns are similar to the first column. $( x0 + x7 x1 + x6 x2 + x5 x3 + x4 x0 - x7 x1 - x6 x3 - x4 x2 - x5 ) = 2 ( b f d d f - b d - d - f b d - d - b - f d d 0 0 a c e g c - g - a - e g - e c - a e - a g c ) ( X2 X6 X0 X4 X1 X3 X5 X7 ) ( 2 )$ The DCT operation circuit 1 will be described in detail for the case of Formula (2), for example, with reference to the timing chart shown in FIG. 2. 
Here, X0′, X1′, X2′, X3′, X4′, X5′ and X6′ on the input timing row of the shift register 10 denote the data newly inputted to the shift register 10 while the data X0, X1, X2, X3, X4, X5, X6 and X7, inputted in the immediately preceding cycle, are subjected to the DCT transformation (the same holds for FIGS. 4, 6 and 8). The individual data (elements) X0, X1, X2, X3, X4, X5, X6 and X7 of the input data are sequentially inputted to the shift register 10. When these eight data are held in the shift register 10, they are transmitted from the shift register 10 to the hold register 11. Until the eight data of the next column are held in the shift register 10, the hold register 11 holds the eight data transmitted from the shift register 10. As a result, even if the data are consecutively inputted to the shift register 10, the data of one column can be held in the hold register 11. The data fetching and shifting timings of the shift register 10 and the data fetching timings of the hold register 11 are controlled by the timing signals generated and outputted by the timing control unit 22. In the shift register 10, for example, the data are inputted and shifted at timings of a period T (frequency: φ). The hold register 11 receives the data from the shift register 10 at timings of a period 8T (frequency: φ/8).

The eight data held in the hold register 11 are sequentially selected by the multiplexer 12 according to the addresses designated by the address counter 23, so that they are transmitted to the multiplier 13. In synchronism with the inputs of the eight data, the multiplier 13 reads the DCT transformation coefficients corresponding to the addresses designated by the address counter 23 from the coefficient storage ROM 14, and performs the multiplication of the DCT transformation coefficients b, f, d, d, a, c, e and g and the data X2, X6, X0, X4, X1, X3, X5 and X7 sequentially transmitted from the multiplexer 12. The multiplication results bX2, fX6, dX0, dX4, aX1, cX3, eX5 and gX7 are sequentially transmitted to an adder 15A of the cumulative adder 15. Of the DCT transformation coefficients thus read out, the four coefficients of the first half are the elements of the first to fourth columns of the first row of Formula (2), and the four coefficients of the second half are the elements of the fifth to eighth columns of the fifth row of Formula (2).

The input timings of the data inputted from the multiplexer 12 to the multiplier 13 are controlled by the timing signals generated by and outputted from the timing control unit 22, and are timings of one quarter (T/4) of the aforementioned input period T of the data inputted to the shift register 10 (frequency: 4φ). Moreover, the multiplication results are sequentially transmitted at timings of one quarter (T/4) of the period T (frequency: 4φ) from the multiplier 13 to the cumulative adder 15. As a result, the product of the number of multipliers and the normalized frequency in this first embodiment is [1×4=4].

The cumulative adder 15 comprises the adder 15A and a register 15B and cumulatively adds the four multiplication results bX2, fX6, dX0 and dX4 of the first half inputted from the multiplier 13. Specifically, the first multiplication result bX2 is temporarily held in the register 15B.
In synchronism with the next multiplication result fX6 inputted from the multiplier 13, the multiplication result bX2 is transmitted from the register 15B to the adder 15A, in which the addition [bX2+fX6] is effected, and this addition result is temporarily held again in the register 15B. Likewise, the multiplication result dX0 is added to the sum [bX2+fX6] temporarily held in the register 15B, the addition result [bX2+fX6+dX0] is temporarily held in the register 15B, and the multiplication result dX4 is further added to obtain [bX2+fX6+dX0+dX4]. This cumulative addition result obtained by adding the four multiplication results is transmitted to the demultiplexer 16, so that it is distributed and inputted to the adder 17 and the subtracter 18. The four multiplication results aX1, cX3, eX5 and gX7 of the second half inputted from the multiplier 13 are likewise cumulatively added by the cumulative adder 15. The cumulative addition result [aX1+cX3+eX5+gX7] thus obtained is transmitted to the demultiplexer 16, so that it is distributed and inputted to the adder 17 and the subtracter 18. The output timings from the cumulative adder 15 to the demultiplexer 16 are controlled by the timing signals generated by and outputted from the timing control unit 22, and are identical to the input period T (frequency: φ) of the input data.

The adder 17 and the subtracter 18 respectively add and subtract the two inputted cumulative addition results [bX2+fX6+dX0+dX4] and [aX1+cX3+eX5+gX7]. More specifically, the results of the cumulative adder 15 are inputted in pairs to the adder 17 and the subtracter 18, so that the adder 17 determines their sum whereas the subtracter 18 determines their difference, namely the value inputted first (oddly numbered) minus the value inputted next (evenly numbered). Here, as is apparent from Formula (2), the double of the first cumulative addition result [bX2+fX6+dX0+dX4] and the double of the second cumulative addition result [aX1+cX3+eX5+gX7] are equal to the sum [x0+x7] and the difference [x0−x7] of the elements of the output data (the inverse DCT operation results), respectively. As a result, the element x0 is obtained from the adder 17, and the element x7 is obtained from the subtracter 18. These two operation results x0 and x7 are simultaneously stored in the registers 19A and 19B at a period (2T) twice the input period T of the input data.

Such operations are also performed for the combinations of the elements of the first to fourth columns of the second row of the DCT transformation matrix of Formula (2) and the elements of the fifth to eighth columns of the sixth row, the combinations of the elements of the first to fourth columns of the third row and the elements of the fifth to eighth columns of the seventh row, and the combinations of the elements of the first to fourth columns of the fourth row and the elements of the fifth to eighth columns of the eighth row, so that the elements x0, x7, x1, x6, x2, x5, x3 and x4 of the first column of the output data are obtained. As a result, the register 19A holds the elements x0, x1, x2 and x3 of the output data outputted from the adder 17, and the register 19B holds the elements x7, x6, x5 and x4 of the output data outputted from the subtracter 18.
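The arithmetic just described can be summarized, purely as illustrative software (behavioural only; no registers, multiplexers or clock phases are modelled, and the function name is invented), by a routine that computes one output column in exactly the order used by this first embodiment:

    def idct_column_embodiment1(X, a, b, c, d, e, f, g):
        # Rows of Formula (2) for the pairs (x0,x7), (x1,x6), (x2,x5), (x3,x4):
        # (even-block row, odd-block row)
        rows = [([ b,  f, d,  d], [a,  c,  e,  g]),
                ([ f, -b, d, -d], [c, -g, -a, -e]),
                ([-f,  b, d, -d], [e, -a,  g,  c]),
                ([-b, -f, d,  d], [g, -e,  c, -a])]
        even_in = [X[2], X[6], X[0], X[4]]
        odd_in  = [X[1], X[3], X[5], X[7]]
        out = [0.0] * 8
        for i, (erow, orow) in enumerate(rows):
            s1 = 0.0                        # cumulative adder 15, first four products
            for coef, data in zip(erow, even_in):
                s1 += coef * data           # multiplier 13 feeds adder 15A
            s2 = 0.0                        # cumulative adder 15, last four products
            for coef, data in zip(orow, odd_in):
                s2 += coef * data
            out[i] = s1 + s2                # adder 17: x_i
            out[7 - i] = s1 - s2            # subtracter 18: x_(7-i)
        return out

Only 32 multiplications per column are performed, instead of the 64 of a direct evaluation of Formula (1); up to rounding, the result should equal the matrix product of Formula (1).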
The eight data (elements) of the first column of the output data are transmitted, when held in the registers 19A and 19B, from the registers 19A and 19B to the register 20. Until the register 20 holds the eight data of the next column, it holds the eight data transmitted from the registers 19A and 19B. Here, the transmission timings of the data from the adder 17 and the subtracter 18 to the registers 19A and 19B are controlled by the timing signals generated by and outputted from the timing control unit 22, at a period (2T) twice the input period T of the input data. Moreover, the data transmissions from the registers 19A and 19B to the register 20 are performed in a cycle of 8T (frequency: φ/8). The individual elements x0, x1, x2, x3, x4, x5, x6 and x7 of the output data held in the register 20 are sequentially selected by the multiplexer 21 and outputted at timings of the period T (frequency: φ). As a result, for the elements X0, X1, X2, X3, X4, X5, X6 and X7 of the first column of the input data inputted in a cycle of T (frequency: φ) to the DCT operation circuit 1, the individual elements x0, x1, x2, x3, x4, x5, x6 and x7 of the first column of the one-dimensional inverse DCT operation result are sequentially outputted in a cycle of T (frequency: φ). By repeating the aforementioned operations for the second to eighth columns of the input data, it is possible to obtain the one-dimensional inverse DCT operation results of 8×8.

FIG. 3 is a block diagram schematically showing a DCT operation circuit of a second embodiment according to the invention. In a DCT operation circuit 2 of this second embodiment, as in the DCT operation circuit 1 of the foregoing first embodiment, the data inputted through the shift register 10, the hold register 11 and the multiplexer 12, and the DCT transformation coefficients read out from the coefficient storage ROM 14, are multiplied in the multiplier 13. The combination of the input data and the DCT transformation coefficients to be multiplied is selected by the address counter 23, which is connected with the multiplexer 12 and the coefficient storage ROM 14. Moreover, the output data determined by adding the multiplication results in two cumulative adders 30 and 31 are outputted through the registers 19A, 19B and 20 and the multiplexer 21. Here, in one cumulative adder (a first cumulative adder) 30, the multiplication results sequentially inputted from the multiplier 13 are added as they are. In the other cumulative adder (a second cumulative adder) 31, the multiplication results sequentially inputted from the multiplier 13 are added after the sign of every other result is inverted by a sign inverter 32. The input/output timings of the data in the individual registers 10, 11, 19A, 19B and 20, the multiplexers 12 and 21 and the cumulative adders 30 and 31, the sign inverting timings of the sign inverter 32, and the increment timing of the address counter 23 are controlled according to the timing signals generated by the timing control unit 22. Here, FIG. 3 also indicates the reference clock CLK (frequency: φ0) inputted to the timing control unit 22 and the frequencies, expressed in terms of φ (φ being the frequency of the input timing of data to the shift register 10), of the timing signals outputted from the timing control unit 22 to the shift register 10, the hold register 11, the multiplexers 12 and 21, the cumulative adders 30 and 31, the sign inverter 32, the registers 19A, 19B and 20 and the address counter 23.
This DCT operation circuit 2 produces two DCT operation results, each time one column of data of an input matrix is inputted to the multiplier 13, by exploiting the regularity of the DCT transformation coefficients expressed by Formula (1). The DCT operation circuit 2 will be described in detail for the case of Formula (1), for example, with reference to the timing chart shown in FIG. 4. The descriptions of the shift register 10, the hold register 11, the multiplexers 12 and 21, the coefficient storage ROM 14, the multiplier 13, the registers 19A, 19B and 20, the timing control unit 22 and the address counter 23 are omitted, these components being designated by the same reference numerals because their construction is similar to the first embodiment.

The individual data (elements) X0, X1, X2, X3, X4, X5, X6 and X7 of the input data are sequentially inputted at timings of T/4 (frequency: 4φ) to the multiplier 13 through the shift register 10, the hold register 11 and the multiplexer 12. In synchronism with the input timings of the data X0, X1, X2, X3, X4, X5, X6 and X7, the DCT transformation coefficients d, a, b, c, d, e, f and g of the first row of Formula (1) are sequentially inputted to the multiplier 13 from the coefficient storage ROM 14. The individual multiplication results dX0, aX1, bX2, cX3, dX4, eX5, fX6 and gX7 obtained by the multiplications in the multiplier 13 are sequentially transmitted at timings of T/4 (frequency: 4φ) to an adder 30A of the first cumulative adder 30, and further to an adder 31A of the second cumulative adder 31 after the sign of every other result is inverted by the sign inverter 32. By the adders 30A and 31A and registers 30B and 31B of the individual cumulative adders 30 and 31, moreover, the eight multiplication results are added to obtain [dX0+aX1+bX2+cX3+dX4+eX5+fX6+gX7] from the first cumulative adder 30 and [dX0−aX1+bX2−cX3+dX4−eX5+fX6−gX7] from the second cumulative adder 31. Here, the cumulative additions are similar to those in the cumulative adder 15 of the foregoing first embodiment, except that the number of cumulative additions is seven.

As is apparent from Formula (1), the first cumulative addition result [dX0+aX1+bX2+cX3+dX4+eX5+fX6+gX7] and the second cumulative addition result [dX0−aX1+bX2−cX3+dX4−eX5+fX6−gX7] are equal to the elements x0 and x7 of the output data (the inverse DCT operation results), respectively. As a result, the element x0 is obtained from the first cumulative adder 30, and the element x7 is obtained from the second cumulative adder 31. These two operation results x0 and x7 are stored in the registers 19A and 19B. By performing such operations for the second to fourth rows (and, implicitly, the seventh to fifth rows) of the DCT transformation matrix of Formula (1), the elements x0, x1, x2, x3, x4, x5, x6 and x7 of the first column of the output data are obtained. The eight data (elements) of the first column of the output data are transmitted, when held in the registers 19A and 19B, to the register 20, and they are sequentially selected by and outputted from the multiplexer 21. As a result, for the elements X0, X1, X2, X3, X4, X5, X6 and X7 of the first column of the input data inputted in a cycle of T (frequency: φ) to the DCT operation circuit 2, the elements x0, x1, x2, x3, x4, x5, x6 and x7 of the first column of the one-dimensional inverse DCT operation results are sequentially outputted in a cycle of T (frequency: φ).
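As an illustrative behavioural sketch (not part of the disclosure; the function name and argument layout are invented), the way two outputs fall out of a single pass over one row of Formula (1) can be written as:

    def idct_pair_embodiment2(X, row):
        # row: one of the first four rows of the Formula (1) coefficient matrix
        acc30 = acc31 = 0.0
        for j, (coef, data) in enumerate(zip(row, X)):
            p = coef * data                      # multiplier 13
            acc30 += p                           # cumulative adder 30: as-is
            acc31 += p if j % 2 == 0 else -p     # adder 31, via sign inverter 32
        return acc30, acc31                      # x_i and x_(7-i)

Calling it with the first row [d, a, b, c, d, e, f, g] returns (x0, x7); the second to fourth rows return (x1, x6), (x2, x5) and (x3, x4).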
By repeating the aforementioned operations for the second to eighth columns of the input data, it is possible to obtain the one-dimensional inverse DCT operation results of 8×8. Here, the data and the DCT transformation coefficients are inputted to the multiplier 13 at timings of one quarter (T/4) of the input period T of the data inputted to the shift register 10 (frequency: 4φ), and the multiplication results are sequentially transmitted at timings of one quarter (T/4) of the period T (frequency: 4φ) from the multiplier 13 to the cumulative adders 30 and 31. Therefore, the product of the number of multipliers and the normalized frequency in this second embodiment is [1×4=4]. The cumulative addition results are outputted at timings of 2T (frequency: φ/2) from the cumulative adders 30 and 31 to the registers 19A and 19B. The timing control unit 22 generates the sign inverting signals at timings of T/2 (frequency: 2φ) and outputs them to the sign inverter 32. As a result, the sign of every other multiplication result, out of the results inputted in a cycle of T/4 (frequency: 4φ) to the cumulative adder 31, is inverted.

FIG. 5 is a block diagram schematically showing a third embodiment of the DCT operation circuit according to the invention. In a DCT operation circuit 3 of this third embodiment, a pair of multiplexers 40 and 41 are connected with the hold register 11 holding, for example, the eight data inputted through the shift register 10; multipliers 42 and 45 are connected with the multiplexers 40 and 41, respectively; and coefficient storage ROMs 43 and 46 are connected with the multipliers 42 and 45, respectively. Although the number is not especially limited, the thirty-two DCT transformation coefficients (excepting "0") of Formula (2), for example, are divided into two groups, so that sixteen coefficients are stored in each of the coefficient storage ROMs 43 and 46. In the multiplier 42, the data inputted through the shift register 10, the hold register 11 and the multiplexer 40, and the DCT transformation coefficients read out of the coefficient storage ROM 43, are multiplied. In the multiplier 45, the data inputted through the hold register 11 and the multiplexer 41, and the DCT transformation coefficients read out of the coefficient storage ROM 46, are multiplied. The operations in the multipliers 42 and 45 proceed in parallel. The combinations of input data and DCT transformation coefficient multiplied in the individual multipliers 42 and 45 are selected by the address counter 23, which is commonly connected with the multiplexers 40 and 41 and the coefficient storage ROMs 43 and 46. An adder 47 is connected with the multipliers 42 and 45 and adds the multiplication results outputted from them. With the adder 47 there is connected the cumulative adder 15, which further adds the two addition results consecutively outputted from the adder 47. The addition result outputted from the cumulative adder 15 is outputted, as in the DCT operation circuit 1 of the foregoing first embodiment, through the demultiplexer 16 to the adder 17 and the subtracter 18, and the output data determined by the addition and the subtraction are outputted through the registers 19A, 19B and 20 and the multiplexer 21.
The input/output timings of the data in the individual registers 10, 11, 19A, 19B and 20, the multiplexers 40, 41 and 21 and the cumulative adder 15, and the increment timings of the address counter 23, are controlled according to the timing signals generated by the timing control unit 22. Here, FIG. 5 also indicates the reference clock CLK (frequency: φ0) inputted to the timing control unit 22 and the frequencies, expressed in terms of φ (φ being the frequency of the input timing of data to the shift register 10), of the timing signals outputted from the timing control unit 22 to the shift register 10, the hold register 11, the multiplexers 40, 41 and 21, the cumulative adder 15, the registers 19A, 19B and 20 and the address counter 23.

This DCT operation circuit 3 produces two DCT operation results, each time one half of the data of one column of an input matrix is inputted to the multiplier 42 and the other half to the multiplier 45, by exploiting the regularity of the DCT transformation coefficients expressed by Formula (2). The DCT operation circuit 3 will be described in detail for the case of Formula (2), for example, with reference to the timing chart shown in FIG. 6. The descriptions of the shift register 10, the hold register 11, the cumulative adder 15, the demultiplexer 16, the adder 17, the subtracter 18, the registers 19A, 19B and 20, the multiplexer 21, the timing control unit 22 and the address counter 23 are omitted, these components being designated by the same reference numerals because their construction is similar to the foregoing first embodiment.

Half (four) of the eight data inputted through the shift register 10 and held in the hold register 11, namely X2, X0, X1 and X5, are sequentially selected and transmitted to the multiplier 42 by the multiplexer 40 on the basis of the addresses designated by the address counter 23. In synchronism with the individual transfer timings of those data X2, X0, X1 and X5, the other four data X6, X4, X3 and X7 of the hold register 11 are sequentially selected and transmitted to the multiplier 45 by the multiplexer 41 on the basis of the address designation of the address counter 23. In synchronism with the inputs of the four data, the multiplier 42 sequentially reads out the DCT transformation coefficients corresponding to the addresses designated by the address counter 23 from the coefficient storage ROM 43, and performs the multiplications of the DCT transformation coefficients b, d, a and e (of which b and d are the DCT transformation coefficients of the first and third columns of the first row of Formula (2), and a and e are the DCT transformation coefficients of the fifth and seventh columns of the fifth row of Formula (2)) and the data X2, X0, X1 and X5 sequentially transmitted from the multiplexer 40. In synchronism with the inputs of the four data, the multiplier 45 sequentially reads out the DCT transformation coefficients corresponding to the addresses designated by the address counter 23 from the coefficient storage ROM 46, and performs the multiplications of the DCT transformation coefficients f, d, c and g (of which f and d are the DCT transformation coefficients of the second and fourth columns of the first row of Formula (2), and c and g are the DCT transformation coefficients of the sixth and eighth columns of the fifth row of Formula (2)) and the data X6, X4, X3 and X7 sequentially transmitted from the multiplexer 41.
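Anticipating the adder 47 and cumulative adder 15 introduced above, this parallel data path can be sketched for one output pair as follows (illustrative software only; the function name and the unrolled structure are invented for the sketch):

    def idct_pair_embodiment3(X, a, b, c, d, e, f, g):
        # Halves as routed by the multiplexers 40/41 and the ROMs 43/46
        data42, coef42 = [X[2], X[0], X[1], X[5]], [b, d, a, e]
        data45, coef45 = [X[6], X[4], X[3], X[7]], [f, d, c, g]
        sums = [c1 * d1 + c2 * d2                    # adder 47, pairwise
                for c1, d1, c2, d2 in zip(coef42, data42, coef45, data45)]
        s1 = sums[0] + sums[1]    # cumulative adder 15: first two results
        s2 = sums[2] + sums[3]    # cumulative adder 15: last two results
        return s1 + s2, s1 - s2   # adder 17 -> x0, subtracter 18 -> x7

The other three output pairs use the corresponding rows of Formula (2) in the same way.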
The input timings of the data inputted from the multiplexers 40 and 41 to the multipliers 42 and 45 are controlled by the timing signals generated by and outputted from the timing control unit 22, and are timings of one-half (T/2) of the input period T of the data inputted to the shift register 10 (frequency: 2φ). In short, the operating frequency of the multipliers 42 and 45 is 2φ. As a result, the product of the number of multipliers and the normalized frequency in this third embodiment is [2×2=4].

The multiplication results bX2 and fX6, dX0 and dX4, aX1 and cX3, and eX5 and gX7 are inputted in pairs to the adder 47 synchronously from the multiplier 42 and the multiplier 45. The adder 47 then performs additions to output [bX2+fX6], [dX0+dX4], [aX1+cX3] and [eX5+gX7] sequentially to the cumulative adder 15. The input/output timings of the data of the adder 47 are controlled at timings of T/2 (frequency: 2φ) by the timing signals generated by and outputted from the timing control unit 22. The cumulative adder 15 cumulatively adds the two addition results [bX2+fX6] and [dX0+dX4] of the first half inputted from the adder 47. The operation result [bX2+fX6+dX0+dX4] is transmitted to the demultiplexer 16, and distributed and inputted to the adder 17 and the subtracter 18. The two addition results [aX1+cX3] and [eX5+gX7] of the second half inputted from the adder 47 are likewise added by the cumulative adder 15, so that the operation result [aX1+cX3+eX5+gX7] is fed to the demultiplexer 16, and distributed and inputted to the adder 17 and the subtracter 18. The output timings from the cumulative adder 15 to the demultiplexer 16 are controlled by the timing signals generated by and outputted from the timing control unit 22, and are timings of T/2 (frequency: 2φ).

The two cumulative addition results inputted to the adder 17 and the subtracter 18 are then processed, as in the foregoing first embodiment, through the adder 17, the subtracter 18 and the registers 19A, 19B and 20, and outputted as the output data (the inverse DCT operation results) from the multiplexer 21. Such operations are also performed for the combinations of the elements of the first to fourth columns of the second row of the DCT transformation matrix of Formula (2) and the elements of the fifth to eighth columns of the sixth row, the combinations of the elements of the first to fourth columns of the third row and the elements of the fifth to eighth columns of the seventh row, and the combinations of the elements of the first to fourth columns of the fourth row and the elements of the fifth to eighth columns of the eighth row, so that the elements x0, x7, x1, x6, x2, x5, x3 and x4 of the first column of the output data are obtained. By repeating the aforementioned operations for the second to eighth columns of the input data, it is possible to obtain the one-dimensional inverse DCT operation results of 8×8.

FIG. 7 is a block diagram schematically showing a fourth embodiment of the DCT operation circuit according to the invention. In a DCT operation circuit 4 of this fourth embodiment, a pair of multiplexers 40 and 41 are connected with the hold register 11 holding, e.g., the eight data inputted through the shift register 10; multipliers 42 and 45 are connected with the multiplexers 40 and 41, respectively; and coefficient storage ROMs 43 and 46 are connected with the multipliers 42 and 45, respectively.
Although the number is not especially limited, the sixty-four DCT transformation coefficients of Formula (1), for example, are divided into two groups, so that thirty-two DCT transformation coefficients are stored in each of the coefficient storage ROMs 43 and 46. In the multiplier 42, the data inputted through the shift register 10, the hold register 11 and the multiplexer 40, and the DCT transformation coefficients read out of the coefficient storage ROM 43, are multiplied. In the multiplier 45, the data inputted through the shift register 10, the hold register 11 and the multiplexer 41, and the DCT transformation coefficients read out of the coefficient storage ROM 46, are multiplied. The operations in the multipliers 42 and 45 proceed in parallel. The combinations of input data and DCT transformation coefficient multiplied in the multipliers 42 and 45 are selected by the address counter 23, which is commonly connected with the multiplexers 40 and 41 and the coefficient storage ROMs 43 and 46. An adder 47 is connected with the multipliers 42 and 45 and adds the multiplication results outputted from them. The cumulative adder 30 is connected with the adder 47, and the cumulative adder 31 is connected with the adder 47 through the sign inverter 32. The four addition results consecutively outputted from the adder 47 are added as they are in the cumulative adder 30, and added in the cumulative adder 31 after the sign of every other result is inverted by the sign inverter 32. The output data determined by the two cumulative adders 30 and 31 are outputted through the registers 19A, 19B and 20 and the multiplexer 21.

The input/output timings of the data of the registers 10, 11, 19A, 19B and 20, the multiplexers 40, 41 and 21 and the cumulative adders 30 and 31, the sign inverting timings of the sign inverter 32, and the increment timings of the address counter 23 are controlled by the timing signals generated by the timing control unit 22. Here, FIG. 7 also indicates the reference clock CLK (frequency: φ0) inputted to the timing control unit 22 and the frequencies, expressed in terms of φ (φ being the frequency of the input timing of data to the shift register 10), of the timing signals outputted from the timing control unit 22 to the shift register 10, the hold register 11, the multiplexers 40, 41 and 21, the cumulative adders 30 and 31, the sign inverter 32, the registers 19A, 19B and 20 and the address counter 23.

This DCT operation circuit 4 produces two DCT operation results, each time one half of the data of one column of an input matrix is inputted to the multiplier 42 and the other half to the multiplier 45, by exploiting the regularity of the DCT transformation coefficients expressed by Formula (1). The DCT operation circuit 4 will be described in detail for the case of Formula (1), for example, with reference to the timing chart shown in FIG. 8. The descriptions of the shift register 10, the hold register 11, the registers 19A, 19B and 20, the multiplexer 21 and the timing control unit 22 are omitted, these components being designated by the same reference numerals because their construction is similar to the foregoing first embodiment. Moreover, the descriptions of the cumulative adders 30 and 31, the sign inverter 32 and the address counter 23 are omitted, these components being designated by the same reference numerals because their construction is similar to the foregoing second embodiment.
Half (four) of the eight data inputted through the shift register 10 and held in the hold register 11, namely X0, X1, X4 and X5, are sequentially selected and transmitted to the multiplier 42 by the multiplexer 40 on the basis of the addresses designated by the address counter 23. In synchronism with the individual transfer timings of those data X0, X1, X4 and X5, the other four data X2, X3, X6 and X7 of the hold register 11 are sequentially selected and transmitted to the multiplier 45 by the multiplexer 41 on the basis of the address designation of the address counter 23. In synchronism with the inputs of the four data, the multiplier 42 sequentially reads out the DCT transformation coefficients corresponding to the addresses designated by the address counter 23 from the coefficient storage ROM 43, and performs the multiplications of the DCT transformation coefficients d, a, d and e (which are, in this order, the DCT transformation coefficients of the first, second, fifth and sixth columns of the first row of Formula (1)) and the data X0, X1, X4 and X5 sequentially transmitted from the multiplexer 40. In synchronism with the inputs of the four data, the multiplier 45 sequentially reads out the DCT transformation coefficients corresponding to the addresses designated by the address counter 23 from the coefficient storage ROM 46, and performs the multiplications of the DCT transformation coefficients b, c, f and g (which are, in this order, the DCT transformation coefficients of the third, fourth, seventh and eighth columns of the first row of Formula (1)) and the data X2, X3, X6 and X7 sequentially transmitted from the multiplexer 41.

The input timings of the data individually inputted from the multiplexers 40 and 41 to the multipliers 42 and 45 are controlled by the timing signals generated by and outputted from the timing control unit 22, and are timings of one-half (T/2) of the input period T of the data inputted to the shift register 10 (frequency: 2φ). In short, the operating frequency of the multipliers 42 and 45 is 2φ. As a result, the product of the number of multipliers and the normalized frequency in this fourth embodiment is [2×2=4].

The multiplication results dX0 and bX2, aX1 and cX3, dX4 and fX6, and eX5 and gX7 are inputted in pairs to the adder 47 synchronously from the multiplier 42 and the multiplier 45. The adder 47 then performs additions to output [dX0+bX2], [aX1+cX3], [dX4+fX6] and [eX5+gX7] sequentially. The input/output timings of the data of the adder 47 are controlled, at timings of one-half (T/2) of the input period T of the data inputted to the shift register 10 (frequency: 2φ), by the timing signals generated by and outputted from the timing control unit 22. The addition results outputted from the adder 47 are transmitted to the one cumulative adder 30 and, through the sign inverter 32, to the other cumulative adder 31. The cumulative adder 30 cumulatively adds the four addition results sequentially transmitted from the adder 47 and outputs the result [dX0+bX2+aX1+cX3+dX4+fX6+eX5+gX7] to the register 19A. The sign of every other addition result, out of the four addition results sequentially transmitted from the adder 47, is inverted by the sign inverter 32; the cumulative adder 31 cumulatively adds the four results and outputs [dX0+bX2−aX1−cX3+dX4+fX6−eX5−gX7] to the register 19B.
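For comparison with the earlier sketches (again illustrative software only, with invented names), the fourth embodiment's combination of parallel multipliers, adder 47 and sign inverter can be condensed to:

    def idct_pair_embodiment4(X, row):
        # row: one of the first four rows of Formula (1); multiplier 42 takes
        # columns 0, 1, 4, 5 and multiplier 45 takes columns 2, 3, 6, 7
        cols42, cols45 = [0, 1, 4, 5], [2, 3, 6, 7]
        sums = [row[i] * X[i] + row[j] * X[j]            # adder 47
                for i, j in zip(cols42, cols45)]
        acc30 = sums[0] + sums[1] + sums[2] + sums[3]    # cumulative adder 30
        acc31 = sums[0] - sums[1] + sums[2] - sums[3]    # adder 31 + inverter 32
        return acc30, acc31                              # x_i and x_(7-i)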
These two cumulative addition results inputted to the registers 19A and 19B are processed as in the foregoing second embodiment and outputted as the output data (the inverse DCT operation results) from the multiplexer 21 through the register 20. By performing such operations for the second to fourth rows (and, implicitly, the seventh to fifth rows) of the DCT transformation matrix of Formula (1), the elements x0, x1, x2, x3, x4, x5, x6 and x7 of the first column of the output data are obtained. The eight data (elements) of the first column of the output data are transmitted, when held in the registers 19A and 19B, to the register 20, and they are sequentially selected by and outputted from the multiplexer 21. As a result, for the elements X0, X1, X2, X3, X4, X5, X6 and X7 of the first column of the input data inputted in a cycle of T (frequency: φ) to the DCT operation circuit 4, the elements x0, x1, x2, x3, x4, x5, x6 and x7 of the first column of the one-dimensional inverse DCT operation results are sequentially outputted in a cycle of T (frequency: φ). By repeating such operations for the second to eighth columns of the input data, it is possible to obtain the one-dimensional inverse DCT operation results of 8×8.

Here, the cumulative addition results are outputted at timings of 2T (frequency: φ/2) from the cumulative adders 30 and 31 to the registers 19A and 19B. Moreover, the timing control unit 22 generates the sign inverting signals at timings of T (frequency: φ) and outputs them to the sign inverter 32. As a result, the sign of every other addition result, out of the addition results inputted in a cycle of T/2 (frequency: 2φ) from the adder 47 to the cumulative adder 31, is inverted. Here, the DCT operation circuits of the foregoing first, second, third and fourth embodiments execute the inverse DCT operations when these embodiments are applied to an image data decoding process conforming to the MPEG standard or its related standards.

FIG. 9 is a block diagram schematically showing a fifth embodiment of the DCT operation circuit according to the invention. A DCT operation circuit 5 of this fifth embodiment executes two-dimensional DCT operations by using the one-dimensional DCT operation circuit 1, 2, 3 or 4 of the foregoing first, second, third or fourth embodiment. In this two-dimensional operation circuit 5, the data of an input matrix are inputted to one one-dimensional DCT operation circuit 1 (2, 3 or 4) to effect the one-dimensional DCT operations described in conjunction with the first, second, third or fourth embodiment. This output is inputted to an inverted RAM 6, the output of which is inputted to the other one-dimensional DCT operation circuit 1 (2, 3 or 4); the one-dimensional DCT operations are then performed once more, so that the two-dimensional DCT operation results are obtained.

FIG. 10 is a time chart showing examples of the operation timings of the two-dimensional DCT operation circuit of the fifth embodiment. Upon receiving the sixty-four input data X00, X01, X02, . . . of an 8×8 matrix, the one-dimensional DCT operation circuit 1 (2, 3 or 4) on the input side of the two-dimensional DCT operation circuit 5 shown in FIG. 9 performs the one-dimensional DCT operations and outputs the operation results x00, x01, x02, . . . to the inverted RAM 6 after a fixed delay D1. Upon receiving the sixty-four operation results x00, x01, x02, . . .
from the one-dimensional DCT operation circuit 1 (2, 3 or 4) on the input side, the inverted RAM 6 performs the matrix operation of exchanging the column elements and the row elements of the matrix comprising the received operation results, and outputs the results x00, x10, x20, . . . to the one-dimensional DCT operation circuit 1 (2, 3 or 4) on the output side. Upon receiving the sixty-four operation results x00, x10, x20, . . . from the inverted RAM 6, the one-dimensional DCT operation circuit 1 (2, 3 or 4) on the output side performs the one-dimensional DCT operations once more on the received operation results, and outputs the operation results y00, y10, y20, . . . after a fixed delay D2. By these operations, it is possible to provide the two-dimensional DCT operation results.

When this embodiment is applied to an image data decoding process conforming to the MPEG standard or its related standards, for example, each one-dimensional DCT operation circuit 1 (2, 3 or 4) executes the inverse DCT operations. In this case, the data of the matrix inputted to the two-dimensional DCT operation circuit 5 of this embodiment are data which have been prepared, although not specifically shown, by converting the data of an input image into DCT coefficients by another coding two-dimensional DCT operation circuit, quantizing and compressing the DCT coefficients by a quantizer, and decompressing the compressed coefficients by a reverse-quantizer. Moreover, the data outputted from the two-dimensional DCT operation circuit 5 are transmitted to a motion compensation predicting unit (not shown).

As has been described in detail, the one-dimensional DCT operation circuit 1 or 2 has either of the following two constructions. In the first construction, a single multiplier 13 operated at the normalized frequency 4 sequentially multiplies the elements of the DCT transformation coefficients and the elements of the input data; the multiplication results are added by the cumulative adder 15 to determine a pair of cumulative addition results, which correspond to the sum and the difference of a pair of elements of the data to be outputted from the DCT operation circuit 1; the operations for determining the paired elements of the output data, by adding and subtracting the cumulative addition results by the adder 17 and the subtracter 18, are performed a number of times equal to one-half of the number of elements of a column of the matrix of the input data; and all the elements of the matrix of the output data are determined by performing those operations a number of times equal to the number of elements of a row of the matrix of the input data. In the second construction, the multiplication results obtained by the multiplier 13, operating at the normalized frequency 4, are added as they are by the first cumulative adder 30 and, after the sign of every other result is inverted, by the second cumulative adder 31, a number of times equal to one-half of the number of elements of a row of the matrix of the input data, thereby determining the elements of a column of the matrix of the output data; these operations are performed a number of times equal to the number of elements of a column of the matrix of the input data to determine all the elements of the matrix of the output data.
As a result, only one multiplier is used, which reduces the scale of the DCT operation circuit 1 or 2, and the product of the number of multipliers and the normalized frequency is 4 at most, so that the power consumption can be reduced.

The one-dimensional DCT operation circuit 3 or 4 likewise has either of the following two constructions. In the first construction, the paired multipliers 42 and 45, operated at the normalized frequency 2, perform the multiplications of one half of the elements of the DCT transformation coefficients and one half of the elements of the input data sequentially and in parallel, thereby determining the paired cumulative addition results corresponding to the sum and the difference of a pair of elements of the data outputted from the DCT operation circuit; the operations for determining the paired elements of the output data, by adding and subtracting the addition results by the adder 17 and the subtracter 18, respectively, determine the elements of a column of the matrix of the output data; and these operations are performed a number of times equal to the number of elements of a column of the matrix of the input data to determine all the elements of the matrix of the output data. In the second construction, the multiplication results obtained from the paired multipliers 42 and 45, operated at the normalized frequency 2, are added by the adder 47; the resultant sums are added as they are by the first cumulative adder 30, and are added by the second cumulative adder 31 after the sign of every other sum is inverted; these additions are performed for one-half of the number of elements of a column of the matrix of the input data to determine the elements of the column of the matrix of the output data, and the operations are performed a number of times equal to the number of elements of a column of the matrix of the input data to determine all the elements of the matrix of the output data. As a result, the product of the number of multipliers and the normalized frequency is 4 at most, so that the power consumption can be reduced.

Moreover, the two-dimensional DCT operation circuit 5 comprises two of the one-dimensional DCT operation circuits 1 (or 2, 3 or 4), each having one of the aforementioned four constructions, and the inverted RAM 6. As a result, the total number of multipliers in the two one-dimensional DCT operation circuits can be reduced to two or four, thereby reducing the scale of the two-dimensional DCT operation circuit 5. Moreover, the product of the number of multipliers in the two-dimensional DCT operation circuit 5 and the normalized frequency is 8 at most, so that the power consumption can be reduced.

Although the invention has been specifically described in conjunction with its embodiments, the invention should not be limited to those embodiments but can naturally be modified in various manners without departing from the gist thereof. For example, the shift register 10, the hold register 11, the multiplexers 12 and 21, the registers 19A, 19B and 20, the timing control unit 22, the address counter 23 and so on should not be limited to those of the foregoing embodiments but can be modified in various manners. Moreover, the invention should not be limited to the DCT transformation of 8×8 but can also be applied to circuits for DCT transformations of 4×4 and 16×16.
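To make the two-dimensional data flow of the fifth embodiment concrete, the following minimal sketch (illustrative software only, not part of the disclosure; the row/column orientation convention is an assumption here) chains two one-dimensional passes around a transposition, the role played by the inverted RAM 6:

    def idct_2d(block, idct_1d):
        # Two 1-D passes with a row/column exchange in between; block is an
        # 8x8 list of lists, idct_1d maps one 8-element vector to another.
        pass1 = [idct_1d(list(vec)) for vec in block]   # 1-D circuit, input side
        swapped = [list(col) for col in zip(*pass1)]    # inverted RAM 6: transpose
        return [idct_1d(vec) for vec in swapped]        # 1-D circuit, output side

Any of the single-column routines sketched earlier (for example idct_column_embodiment1, partially applied to the coefficients) can play the role of idct_1d.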
Our invention has been described mainly against the background of its assumed field of application, namely the decoding of coded image data by inverse DCT operations. The invention, however, should not be limited to that application, but can be utilized in any data processing system performing DCT transformation operations or inverse DCT transformation operations. According to the invention, as has been described hereinbefore, only one multiplier is used in the one-dimensional discrete cosine transformation operation circuit, so that the scale of the discrete cosine transformation operation circuit can be reduced; the product of the number of multipliers and the normalized frequency is 4 at most, and hence the power consumption can be reduced.
{"url":"http://www.google.com/patents/US6185595?dq=5,666,293","timestamp":"2014-04-18T06:32:57Z","content_type":null,"content_length":"129243","record_id":"<urn:uuid:f0a51593-48e3-479f-b534-51df66f481eb>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
2.2 Three Goals for Inference from Data

2.2.1 Estimation of Parameter Values

One goal we may have is deciding the extent to which we should believe in each of the possible values of an underlying parameter. In the case of the coin, we may use the observed data to determine the extent to which we should believe that the bias is 20%, 50%, or 80%. What we are determining is how much we believe in each of the available parameter values. In most real applications, we allow for a continuum of possible biases from zero to one, and the Bayesian mathematics reveal the credibility of every possible value on that continuum. Because the flip of the coin is a random process, we cannot be certain of the underlying true probability of getting heads. So our posterior beliefs are an estimate. The posterior beliefs typically increase the magnitude of belief in some parameter values, while lessening the degree of belief in other parameter values. So this process of shifting our beliefs in various parameter values is called estimation of parameter values.

2.2.2 Prediction of Data Values
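Before the excerpt turns to prediction, the estimation goal of Section 2.2.1 can be made concrete with a minimal numerical sketch (not from the book; the uniform prior, the three candidate biases and the 7-heads-in-10-flips data are invented for illustration):

    # Posterior belief over three candidate coin biases after observing data.
    thetas = [0.2, 0.5, 0.8]                 # candidate parameter values
    prior = [1.0 / 3] * 3                    # uniform prior belief
    heads, flips = 7, 10                     # hypothetical observed flips

    # Binomial likelihood up to a constant factor, then normalize.
    lik = [t**heads * (1 - t)**(flips - heads) for t in thetas]
    unnorm = [p * l for p, l in zip(prior, lik)]
    posterior = [u / sum(unnorm) for u in unnorm]
    print(dict(zip(thetas, posterior)))      # belief shifts toward the 0.8 bias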
{"url":"http://my.safaribooksonline.com/book/-/9780123814852/2dot2-three-goals-for-inference-from-data/223_model_comparison","timestamp":"2014-04-21T00:23:58Z","content_type":null,"content_length":"85372","record_id":"<urn:uuid:92f83d38-2e6a-47e4-a075-3e7c83aa6918>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
Venanzio Capretta

I recently moved to the University of Nottingham. This page will not be maintained any longer. Please visit my new home page in Nottingham.

Venanzio Capretta
Postdoctoral researcher
Foundation Group
Computer Science Institute (ICIS)
Radboud University Nijmegen
P.O. Box 9010, NL-6500 GL Nijmegen
The Netherlands
e-mail: venanzio @ cs.ru.nl
telephone: +31-24-3652631
fax: +31-24-3652525
room: 02.528

We know nothing, not even whether we know or do not know, or what it is to know or not to know, or in general whether anything exists or not. -- Metrodorus of Chios

Reflections on Type Theory, Lambda Calculus, and the Mind
Essays Dedicated to Henk Barendregt on the Occasion of his 60th Birthday
Erik Barendsen, Herman Geuvers, Venanzio Capretta, Milad Niqui (Eds.)
A collection of essays associated with the celebration of Henk Barendregt's 60th birthday.

Work in progress

These are some articles that my coauthors and I are working on. Click on a title to get the PDF file.

Corecursive Algebras: A Study of General Structured Corecursion
Coauthors: Tarmo Uustalu and Varmo Vene
Slides of two seminars about this topic:

Higher Order Abstract Syntax in Type Theory
Coauthor: Amy Felty
Coq formalization and example application.

A polymorphic representation of induction-recursion
Also in PostScript format.

Abstracts of Articles

Common Knowledge as a Coinductive Modality
Reflections on Type Theory, Lambda Calculus, and the Mind. Essays Dedicated to Henk Barendregt on the Occasion of his 60th Birthday, December 2007, pages 51-61 (bibtex entry).
I prove in Coq Aumann's Theorem: in perfect information games, common knowledge of rationality implies backward induction equilibrium. The notion of common knowledge is formalized, using a coinductive definition, as a modality containing an infinite amount of information.

Computation by Prophecy
Coauthor: Ana Bove. TLCA 2007, pages 70-83 (bibtex entry).
We describe a new method to represent (partial) recursive functions in type theory. For every recursive definition, we define a co-inductive type of prophecies that characterises the traces of the computation of the function. The structure of a prophecy is a possibly infinite tree, which is coerced by linearisation to a type of partial results defined by applying the delay monad to the co-domain of the function. Using induction on a weight relation defined on the prophecies, we can reason about them and prove that the formal type-theoretic version of the recursive function, resulting from the present method, satisfies the recursive equations of the original function. The advantages of this technique over the method previously developed by the authors via a special-purpose accessibility (domain) predicate are: there is no need of extra logical arguments in the definition of the recursive function; the function can be applied to any element in its domain, regardless of termination properties; we obtain a type of partial recursive functions between any two given types; and composition of recursive functions can be easily defined.

Formal Correctness of Conflict Detection for Firewalls
Coauthors: Bernard Stepien, Amy Felty, and Stan Matwin. ACM Workshop on Formal Methods in Security Engineering, November 2007, pages 22-30 (bibtex entry).
We describe the formalization of a correctness proof for a conflict detection algorithm for firewalls in the Coq Proof Assistant. First, we give formal definitions in Coq of a firewall access rule and of an access request to a firewall.
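A rough software analogue of these definitions, together with the conflict notion made precise in the next sentence of the abstract (this is not the paper's Coq development; the single "port" field and the finite request universe below are invented for illustration):

    def matches(rule, req):
        # A rule matches a request when every non-decision field admits the
        # request's value (fields are ranges here, purely for the sketch).
        return all(req[k] in rule[k] for k in rule if k != "decision")

    def in_conflict(r1, r2, requests):
        # Conflict: some request matches both rules, with opposite decisions.
        return r1["decision"] != r2["decision"] and any(
            matches(r1, q) and matches(r2, q) for q in requests)

    r1 = {"port": range(80, 90), "decision": "allow"}
    r2 = {"port": range(85, 95), "decision": "deny"}
    requests = [{"port": p} for p in range(100)]
    print(in_conflict(r1, r2, requests))   # True: ports 85-89 hit both rules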
Formally, two rules are in conflict if there exists a request on which one rule would allow access and the other would deny it. We express our algorithm in Coq, and prove that it finds all conflicts in a set of rules. We obtain an OCaml version of the algorithm by direct program extraction. The extracted program has successfully been applied to firewall specifications with over 200,000 rules.

Combining de Bruijn Indices and Higher-Order Abstract Syntax in Coq
Coauthor: Amy Felty. TYPES 2006, LNCS 4502, pages 63-77 (bibtex entry).
The use of higher-order abstract syntax is an important approach for the representation of binding constructs in encodings of languages and logics in a logical framework. Formal meta-reasoning about such object languages is a particular challenge. We present a mechanism for such reasoning, formalized in Coq, inspired by the Hybrid tool in Isabelle. At the base level, we define a de Bruijn representation of terms with basic operations and a reasoning framework. At a higher level, we can represent languages and reason about them using higher-order syntax. We take advantage of Coq's constructive logic by formulating many definitions as Coq programs. We illustrate the method on two examples: the untyped lambda calculus and quantified propositional logic. For each language, we can define recursion and induction principles that work directly on the higher-order syntax.

Recursive Coalgebras from Comonads (long version)
Coauthors: Tarmo Uustalu and Varmo Vene. Information and Computation, volume 204, issue 4 (2006), pages 437-468 (bibtex entry).
The concept of recursive coalgebra of a functor was introduced in the 1970s by Osius in his work on categorical set theory to discuss the relationship between wellfounded induction and recursively specified functions. In this paper, we motivate the use of recursive coalgebras as a paradigm of structured recursion in programming semantics, list some basic facts about recursive coalgebras and, centrally, give new conditions for the recursiveness of a coalgebra based on comonads, comonad-coalgebras and distributive laws of functors over comonads. We also present an alternative construction using countable products instead of cofree comonads.

General Recursion via Coinductive Types
Also in PostScript format. Logical Methods in Computer Science, Vol. 1, Iss. 2 (2005), pages 1-28 (bibtex entry). Coq formalization (requires vectors).
A fertile field of research in theoretical computer science investigates the representation of general recursive functions in intensional type theories. Among the most successful approaches are: the use of wellfounded relations, implementation of operational semantics, formalization of domain theory, and inductive definition of domain predicates. Here, a different solution is proposed: exploiting coinductive types to model infinite computations. To every type A we associate a type of partial elements A^, coinductively generated by two constructors: the first, return(a), just returns an element a:A; the second, step(x), adds a computation step to a recursive element x:A^. We show how this simple device is sufficient to formalize all recursive functions between two given types. It allows the definition of fixed points of finitary, that is, continuous, operators. We will compare this approach to different ones from the literature. Finally, we mention that the formalization, with appropriate structural maps, defines a strong monad.

Recursive Functions with Higher Order Domains
Coauthor: Ana Bove.
In TLCA 2005, LNCS 3461, pages 116-130 (bibtex entry).
In a series of articles, we developed a method to translate general recursive functions written in a functional programming style into constructive type theory. Three problems remained: the method could not properly deal with functions taking functional arguments, the translation of terms containing λ-abstractions was too strict, and partial application of general recursive functions was not allowed. Here, we show how the three problems can be solved in an impredicative type theory. The solution hinges on a definition of the type of partial functions between given types. Every function, including arguments to higher order functions, λ-abstractions and partially applied functions, is translated as a pair consisting of a domain predicate and a function dependent on the predicate. Higher order functions are assigned domain predicates that inherit termination conditions from their functional arguments. The translation of a λ-abstraction does not need to be total anymore, but generates a local termination condition. The domain predicate of a partially applied function is defined by fixing the given arguments in the domain of the original function. As in our previous articles, simultaneous induction-recursion is required to deal with nested recursive functions. Impredicativity is essential for the method to apply to all functions, since the inductive definition of the domain predicate can refer globally to the domain predicate itself.

Privacy in Data Mining Using Formal Methods
Coauthors: Stan Matwin, Amy Felty, and István Hernádvölgyi. In TLCA 2005, LNCS 3461 (bibtex entry).
There is growing public concern about personal data collected by both private and public sectors. People have very little control over what kinds of data are stored and how such data is used. Moreover, the ability to infer new knowledge from existing data is increasing rapidly with advances in database and data mining technologies. We describe a solution which allows people to take control by specifying constraints on the ways in which their data can be used. User constraints are represented in formal logic, and organizations that want to use this data provide formal proofs that the software they use to process data meets these constraints. Checking the proof by an independent verifier demonstrates that user constraints are (or are not) respected by this software. Our notion of "privacy correctness" differs from general software correctness in two ways. First, properties of interest are simpler and thus their proofs should be easier to automate. Second, this kind of correctness is stricter; in addition to showing that a certain relation between input and output is realized, we must also show that only operations that respect privacy constraints are applied during execution. We have therefore an intensional notion of correctness, rather than the usual extensional one. We discuss how our mechanism can be put into practice, and we present the technical aspects via an example. Our example shows how users can exercise control when their data is to be used as input to a decision tree learning algorithm. We have formalized the example and the proof of preservation of privacy constraints in Coq.

Recursive Coalgebras from Comonads (short version)
Coauthors: Tarmo Uustalu and Varmo Vene. Electronic Notes in Theoretical Computer Science, Volume 106, Proceedings of CMCS 2004, pages 43-61
(bibtex entry).
We discuss Osius's concept of a recursive coalgebra of a functor from the perspective of programming semantics and give some new sufficient conditions for the recursiveness of a functor-coalgebra that are based on comonads, comonad-coalgebras and distributive laws.

Setoids in type theory
Coauthors: Gilles Barthe and Olivier Pons. In Journal of Functional Programming, 13(2), pages 261-293, 2003 (bibtex entry).
Formalising mathematics in dependent type theory often requires to use setoids, i.e. types with an explicit equality relation, as a representation of sets. This paper surveys some possible definitions of setoids and assesses their suitability as a basis for developing mathematics. In particular, we argue that a commonly advocated approach to partial setoids is unsuitable, and more generally that total setoids seem better suited for formalising mathematics.

Type-theoretic functional semantics
Coauthors: Yves Bertot and Kuntal Das Barman. In TPHOLs 2002, LNCS 2410, pages 83-97 (bibtex entry).
We describe the operational and denotational semantics of a small imperative language in type theory with inductive and recursive definitions. The operational semantics is given by natural inference rules, implemented as an inductive relation. The realization of the denotational semantics is more delicate: the nature of the language imposes a few difficulties on us. First, the language is Turing-complete, and therefore the interpretation function we consider is necessarily partial. Second, the language contains strict sequential operators, and therefore the function necessarily exhibits nested recursion. Our solution combines and extends recent work by the authors and others on the treatment of general recursive functions and partial and nested recursive functions. The first new result is a technique to encode the approach of Bove and Capretta for partial and nested recursive functions in type theories that do not provide simultaneous induction-recursion. A second result is a clear understanding of the characterization of the definition domain for general recursive functions, a key aspect in the approach by iteration of Balaa and Bertot. In this respect, the work on operational semantics is a meaningful example, but the applicability of the technique should extend to other circumstances where complex recursive functions need to be described formally.

Nested General Recursion and Partiality in Type Theory
Coauthor: Ana Bove. In TPHOLs 2001, LNCS 2152, pages 121-135 (bibtex entry).
We extend Bove's technique for formalising simple general recursive algorithms in constructive type theory to nested recursive algorithms. The method consists in defining an inductive special-purpose accessibility predicate that characterizes the inputs on which the algorithm terminates. As a result, the type-theoretic version of the algorithm can be defined by structural recursion on the proof that the input values satisfy this predicate. This technique results in definitions in which the computational and logical parts are clearly separated; hence, the type-theoretic version of the algorithm is given by its purely functional content, similarly to the corresponding program in a functional programming language. In the case of nested recursion, the special predicate and the type-theoretic algorithm must be defined simultaneously, because they depend on each other.
This kind of definition is not allowed in ordinary type theory, but it is provided in type theories extended with Dybjer's schema for simultaneous inductive-recursive definitions. The technique also applies to the formalisation of partial functions as proper type-theoretic functions, rather than relations representing their graphs.

Certifying the Fast Fourier Transform with Coq
In TPHOLs 2001, LNCS 2152, pages 154-168. (bibtex entry)
We program the Fast Fourier Transform in type theory, using the tool Coq. We prove its correctness and the correctness of the Inverse Fourier Transform. A type of trees representing vectors with interleaved elements is defined to facilitate the definition of the transform by structural recursion. We define several operations and proof tools for this data structure, leading to a simple proof of correctness of the algorithm. The inverse transform, on the other hand, is implemented on a different representation of the data, which makes reasoning about summations easier. The link between the two data types is given by an isomorphism. This work is an illustration of the two-level approach to proof development and of the principle of adapting the data representation to the specific algorithm under study. CtCoq, a graphical user interface of Coq, helped in the development. We discuss the characteristics and usefulness of this tool.

The logic and mathematics of occasion sentences
Coauthors: Pieter A. M. Seuren and Herman Geuvers. In Linguistics and Philosophy, 24(5), 2001. (bibtex entry)
The prime purpose of this paper is, first, to restore to discourse-bound occasion sentences their rightful central place in semantics and secondly, taking these as the basic propositional elements in the logical analysis of language, to contribute to the development of an adequate logic of occasion sentences and a mathematical (Boolean) foundation for such a logic, thus preparing the ground for more adequate semantic, logical and mathematical foundations of the study of natural language. Some of the insights elaborated in this paper have appeared in the literature over the past thirty years, and a number of new developments have resulted from them. The present paper aims at providing an integrated conceptual basis for this new development in semantics. In Section 1 it is argued that the reduction by translation of occasion sentences to eternal sentences, as proposed by Russell and Quine, is semantically and thus logically inadequate. Natural language is a system of occasion sentences, eternal sentences being merely boundary cases. The logic has fewer tasks than is standardly assumed, as it excludes semantic calculi, which depend crucially on information supplied by cognition and context and thus belong to cognitive psychology rather than to logic. For sentences to express a proposition and thus be interpretable and informative, they must first be properly anchored in context. A proposition has a truth value when it is, moreover, properly keyed in the world, i.e. is about a situation in the world. Section 2 deals with the logical properties of natural language. It argues that presuppositional phenomena require trivalence and presents the trivalent logic PPC_3, with two kinds of falsity and two negations. It introduces the notion of Σ-space for a sentence A (or /A/, the set of situations in which A is true) as the basis of logical model theory, and the notion of /P^A/ (the Σ-space of the presuppositions of A), functioning as a `private' subuniverse for /A/.
The trivalent Kleene calculus is reinterpreted as a logical account of vagueness, rather than of presupposition. PPC_3 and the Kleene calculus are refinements of standard bivalent logic and can be combined into one logical system. In Section 3 the adequacy of PPC_3 as a truth-functional model of presupposition is considered more closely and given a Boolean foundation. In a noncompositional extended Boolean algebra, three operators are defined: 1_a for the conjoined presuppositions of a, Nc for the complement of a within 1_a, and Nb for the complement of 1_a within Boolean 1. The logical properties of this extended Boolean algebra are axiomatically defined and proved for all possible models. Proofs are provided of the consistency and the completeness of the system. Section 4 is a provisional exploration of the possibility of using the results obtained for a new discourse-dependent account of the logic of modalities in natural language. The overall result is a modified and refined logical and model-theoretic machinery, which takes into account both the discourse-dependency of natural language sentences and the necessity of selecting a key in the world before a truth value can be assigned. Recursive Families of Inductive Types Also in PostScript format. In TPHOLs 2000, LNCS 1869, pages 73-89. (bibtex entry) Families of inductive types defined by recursion arise in the formalization of mathematical theories. An example is the family of term algebras on the type of signatures. Type theory does not allow the direct definition of such families. We state the problem abstractly by defining a notion, strong positivity, that characterizes these families. Then we investigate its solutions. First, we construct a model using wellorderings. Second, we use an extension of type theory, implemented in the proof tool Coq, to construct another model that does not have extensionality problems. Finally, we apply the two level approach: We internalize inductive definitions, so that we can manipulate them and reason about them inside type theory. Universal Algebra in Type Theory In TPHOLs 1999, LNCS 1690, pages 131-148. (bibtex entry) We present a development of Universal Algebra inside Type Theory, formalized using the proof assistant Coq. We define the notion of a signature and of an algebra over a signature. We use setoids, i.e. types endowed with an arbitrary equivalence relation, as carriers for algebras. In this way it is possible to define the quotient of an algebra by a congruence. Standard constructions over algebras are defined and their basic properties are proved formally. To overcome the problem of defining term algebras in a uniform way, we use types of trees that generalize wellorderings. Our implementation gives tools to define new algebraic structures, to manipulate them and to prove their properties. A general method for proving the normalization theorem for first and second order typed λ-calculi Coauthor: Silvio Valentini. In Mathematical Structures in Computer Science, volume 9, issue 6, pages 719-739, 1999. (bibtex entry) In this paper we describe a method for proving the normalization property for a large variety of typed lambda calculi of first and second order, based on a proof of equivalence of two deduction systems. We first illustrate the method on the elementary example of simply typed lambda calculus and then we show how to extend it to a more expressive dependent type system. Finally we use it for proving the normalization theorem for Girard's system F.
{"url":"http://cs.ru.nl/~venanzio/","timestamp":"2014-04-16T07:14:23Z","content_type":null,"content_length":"37415","record_id":"<urn:uuid:681d0e45-d4dc-49ae-8a13-7ab430292ac8>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
Sierpinski number

A positive, odd integer k such that k times 2^n + 1 is never a prime number for any positive integer n. In 1960 Waclaw Sierpinski showed that there were infinitely many such numbers (though he didn't give a specific example). This is a strange result. Why should it be that while the vast majority of expressions of the form m times 2^n + 1 eventually produce a prime, some don't? For now, mathematicians are focused on a more manageable problem posed by Sierpinski: What is the smallest Sierpinski number?

In 1962, John Selfridge discovered what is still the smallest known Sierpinski number, k = 78557. The next largest is 271129. Is there a smaller Sierpinski number? No one yet knows. However, to establish that 78557 is really the smallest, it would be sufficient to find a prime of the form k·2^n + 1 for every odd value of k less than 78557. In early 2001, there were only 17 candidate values of k left to check: 4847, 5359, 10223, 19249, 21181, 22699, 24737, 27653, 28433, 33661, 44131, 46157, 54767, 55459, 65567, 67607, and 69109. In March 2002, Louis Helm of the University of Michigan and David Norris of the University of Illinois started a project called "Seventeen or Bust," the goal of which is to harness the computing power of a worldwide network of hundreds of personal computers to check for primes among the remaining candidates. The team's efforts have so far eliminated five candidates – 46157, 65567, 44131, 69109, and 54767. Despite this encouraging start, it may take as long as a decade, with many additional participants, to check the dozen remaining candidates.
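To make the search concrete, here is a minimal sketch (not from the original article) of the kind of check such projects run, written in Python with the sympy library assumed available for primality testing; the cutoff is illustrative only.

```python
from sympy import isprime

def first_proving_prime(k, max_n=1000):
    """Return the smallest n for which k*2**n + 1 is prime, or None.

    A single hit is enough to rule k out as a Sierpinski number;
    real searches push n into the millions."""
    for n in range(1, max_n + 1):
        if isprime(k * 2**n + 1):
            return n
    return None  # inconclusive: k survives every n up to max_n

print(first_proving_prime(3))   # 1, since 3*2 + 1 = 7 is prime
print(first_proving_prime(5))   # 1, since 5*2 + 1 = 11 is prime
```

Note that the program alone can never prove a k is a Sierpinski number; for 78557 that is done with a covering-set argument showing every k·2^n + 1 is divisible by one of a fixed set of small primes.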
{"url":"http://www.daviddarling.info/encyclopedia/S/Sierpinski_number.html","timestamp":"2014-04-21T12:12:10Z","content_type":null,"content_length":"7658","record_id":"<urn:uuid:ef7b737b-b877-43be-8f52-5381924e6126>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
nalysis 2 Teaching Experience — I began teaching mathematics in 1988 as a teaching assistant at The Ohio State University. I taught at Indiana University as a graduate student and ...

PROPOSED GRADUATE MATHEMATICS COURSES, SPRING 2011 (September 8, 2010) — Time and place may change. Math 5001: Linear Algebra, CC507, TR 9:30-10:50AM, Instructor TBA. Vector spaces ...

New Publications Offered by the AMS, Volume 53, Number 6 — June/July 2006, Notices of the AMS, 715. Algebra and Algebraic Geometry: Coxeter Groups and Hopf Algebras, Marcelo Aguiar, Texas A ...

Solution Manual — Source: http://coding.derkeiler.com/Archive ... Complex Analysis by Lars Ahlfors.djvu; Elementary Number Theory and Its Application 5E by K.H. Rosen solution manual ...

Programming Solutions to Exercises — New Publications Offered by the AMS, Volume 53, Number 6 ... Part 2: A supplement to Ahlfors's ... Order code ULECT/38. Bergman Spaces and Related Topics in Complex Analysis, Proceedings of a ... Through extensive exercises, ...

Pure and Applied Mathematics 3, Semester 2, 2010 — Unit of Study: MATH3964, Complex Analysis with Applications (Adv.). Lecturer: Dr. C.M. Cosgrove. Lectures (39): Tuesday 2:00 ...

Teaching Experience - fluxion — I began teaching mathematics in 1988 as a teaching assistant at The Ohio State University. I taught at Indiana University as a graduate student and ...

Anx.17 E - M Sc Maths _SDE_ 2007-08 — 2. W. Rudin, Real and Complex Analysis, ... Existence of solutions in the large - Existence and uniqueness of solutions of ... Complex Analysis by L.V. Ahlfors, ...

EXERCISES FOR MATHEMATICS 205A, FALL 2008 — ISBN: 0-13-181629-2. Solutions to nearly all the exercises below are given in separate files called solutionsn.pdf ... L. V. Ahlfors, Complex Analysis (3rd Ed.).

COMPLEX VARIABLES - MATH 555 [FALL 2008] — INSTRUCTOR: H. Gingold, Armstrong Hall 406A, phone 293-2011 (EX2334). E-MAIL: gingold@math.wvu.edu. TIME: T TH, 14:30-15:45.

Introduction to Complex Analysis - excerpts — The notion of complex differentiability lies at the heart of complex analysis. A special role among the founders of complex analysis was played by Leonhard Euler, ...

Complex Analysis — Lars V. Ahlfors, Complex Analysis. An introduction to the theory of analytic functions of one complex variable. 3rd ed., ...

New Publications Offered by the AMS, Volume 53, Number 7 — R. Rouquier, Categorification of sl 2 and braid groups; ... Order code MEMO/182/859. Non-Doubling Ahlfors Measures, Perimeter Measures, and the ... and complex analysis. ...

Study guide for Ph.D. Examination in Math 340 (Complex Analysis) — Ahlfors, Complex Analysis, 3rd edition, An introduction to the theory of analytic functions of one complex variable; International Series in Pure and ...

Complex Analysis (contains lots of exercises) — Lars V. Ahlfors, Complex Analysis. ... Instances of duplication in different homework solutions and ... Complex Line Integrals, IV.2.

Learning Algebraic Number Theory — Sam Ruth, May 28, 2010. 1 Introduction. After multiple conversations with all levels of mathematicians (undergrads, grad students, and professors ...

Zeros of Gaussian Analytic Functions and Determinantal Point Processes — Preface: Random configurations of points in space, also known as point processes, have been studied in mathematics, statistics and physics for many decades.
Complex Analysis with MATHEMATICA® — Contents: ix; 7 Symmetric chaos in the complex plane, 138; Introduction, 138; 7.1 Creating and iterating complex non-linear maps, 139; 7.2 A movie of a symmetry-increasing bifurcation ...

MATH 532 - Complex Analysis Syllabus - Spring 2005 — Professor: Michael Dorfi. Office: 281 TMCB. Office Phone: 422-1752. Email: mdorfi@math.byu.edu. Office Hours: TTh 1:30-2:30 pm. Course Text: ...

Department of Mathematical Sciences — COMPLEX ANALYSIS HOMEWORK PROBLEMS, SPRING QUARTER 2010. Please provide plenty of details! ... Read pp. 1-136 in Ahlfors and do problems 1-4 on p. 133. (5) ...
{"url":"http://www.cawnet.org/docid/complex+analysis+2+by+ahlfors+solutions+to+exercises/","timestamp":"2014-04-20T20:59:57Z","content_type":null,"content_length":"52234","record_id":"<urn:uuid:a25b3d70-6500-4693-92cb-9322c56e8734>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
Aventura, FL Statistics Tutor
Find an Aventura, FL Statistics Tutor

...The non-exhaustive list of subjects that I am currently tutoring is: SSAT, PSAT, SAT and SAT Subject Tests, ACT, DEC, GRE, GMAT, as well as prealgebra, algebra I & II, geometry, trigonometry, precalculus, calculus, differential calculus, statistics, finite math, and linear algebra. Algebra 1, for...
24 Subjects: including statistics, calculus, geometry, GRE

...Not only have I done well, but I have been at the top or near the top of my class for all courses relating to these fields. I am confident in teaching any aspect of biology, chemistry and mathematics up to the undergraduate level. Everyone learns differently, and therefore a tutor should possess the skills needed to adapt to the student's needs.
27 Subjects: including statistics, chemistry, geometry, algebra 1

...I have also contributed to scientific journal articles, e.g. the Journal of Applied Physiology, as well as presenting at the American College of Sports Medicine annual meeting. I am a Certified Strength and Conditioning Specialist through the National Strength and Conditioning Association as well as tea...
13 Subjects: including statistics, reading, algebra 1, algebra 2

...For English and writing, teaching students how to edit papers is, in my view, the most effective way to learn grammar—which will also very quickly empower you to write well-scribed scripts without much struggle. Also, with more advanced concepts (personification, analogies, etc), having the stud...
36 Subjects: including statistics, reading, English, chemistry

...With confidence, students can work faster so time is not a factor. I review each of the subject areas and assess the student's ability to answer specific types of problems in each subject. I explain what they do wrong and what they do not know, then I send them home with specific sets of problems to work out before the next session.
27 Subjects: including statistics, chemistry, physics, calculus
{"url":"http://www.purplemath.com/Aventura_FL_statistics_tutors.php","timestamp":"2014-04-18T08:45:30Z","content_type":null,"content_length":"24453","record_id":"<urn:uuid:3071a1f7-f5b0-4012-bbc8-36cc23549154>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
PCA for NIR Spectra_part 004: "Projections"
February 26, 2012 By jrcuesta

This 2-D plot helps us decide the number of PCs. It is easy to create in R once we have decomposed the X matrix into a P matrix (loadings) and a T matrix (scores). For this plot, we just need the T matrix.

> # Xnipals$T holds the scores from the NIPALS decomposition
> # computed earlier in this series (10 PCs were extracted).
> CPs <- seq(1, 10, by = 1)
> matplot(CPs, t(Xnipals$T), lty = 1, pch = 21,
+         xlab = "PC_number", ylab = "Explained_Var")

Every dot on a vertical line represents the score of one sample for that particular PC. We ran the NIPALS calculation for 10 PCs, so every vertical line shows the projections of the samples onto that PC; the score of a sample for that PC is its distance from the mean. For every PC we can calculate the standard deviation of the scores, and hence the variance. As we see, the first 2 PCs represent almost all the variance, and for the rest the projections become narrower and narrower. This plot is useful for deciding how many components to choose, and also for detecting outliers and extreme samples.
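For readers outside R, the same picture can be sketched in Python; this is not from the original post. The data matrix here is a random stand-in for the NIR spectra, and the scores come from an SVD of the centered matrix rather than NIPALS (the two agree up to signs).

```python
import numpy as np
import matplotlib.pyplot as plt

X = np.random.rand(40, 200)                 # stand-in for NIR spectra (samples x wavelengths)
Xc = X - X.mean(axis=0)                     # mean-center
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
T = U * s                                   # scores, one column per PC

for k in range(10):                         # one vertical strip of dots per PC
    plt.plot(np.full(X.shape[0], k + 1), T[:, k], "o", ms=3)
plt.xlabel("PC number"); plt.ylabel("Score")
plt.show()

print(T.var(axis=0, ddof=1)[:10])           # per-PC variance of the scores
```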
{"url":"http://www.r-bloggers.com/pca-for-nir-spectra_part-004-projections/","timestamp":"2014-04-16T10:36:51Z","content_type":null,"content_length":"38119","record_id":"<urn:uuid:1f2b4b67-1353-409f-91ff-020d9c19c716>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
Goucher College Department of Physics and Astronomy
Modern Physics Fall 2010
Click here for Modern Physics simulations

· Lecturer
Dr. Sasha Dukan. Office: G10E. Phone: 410-337-6323. E-mail: sdukan@goucher.edu. Office hours: MWF 11:30am-12:30pm. Please respect this schedule and make an appointment to see me at other times. Class meets MWF 9:30-10:20am in HS B27.

· Textbook
Serway, Moses and Moyer: Modern Physics, Saunders College Publishing, 3rd edition. Students are expected to read assigned textbook chapters by the date assigned in the syllabus. Belloni, Christian and Cox: Physlet Quantum Physics, An Interactive Introduction will be used as supplemental material and will be provided by the instructor.

Other Recommended Texts (available in the library and/or my office):
· Modern Physics, by Ohanian
· Modern Physics from Alpha to Z0, by Rohlf
· Quantum Physics of Atoms, Molecules, Solids, Nuclei and Particles, by Eisberg and Resnick
· Modern Physics, by Krane
· Nonclassical Physics, by Harris

· Course Description
Physics 220 is an introductory course in Modern Physics (or Nonclassical Physics, as I prefer to call it) designed for a student who has completed a calculus-based General Physics (Introductory Classical Physics) course. It is intended to acquaint students majoring in physical sciences and/or mathematics with the wide range of physical principles that have developed in the 20th century. The course will use calculus extensively; knowledge of differential equations is a plus but not required. Emphasis in this course will be shifted from fancy mathematics toward understanding of the underlying physical concepts. The intention is to bring students to the frontiers of physics in a simple, comprehensible manner through discussions, problem solving and additional readings. Introduction to the Special Theory of Relativity and Introduction to Quantum Mechanics constitute the core of the course. We will discuss theoretical ideas and various experiments that revolutionized our understanding of nature and led to the development of new fields such as atomic and molecular physics, condensed matter physics, nuclear and elementary particle physics, astrophysics, quantum chemistry, biophysics etc.

· Instructional Methods
Students have an opportunity to learn basic concepts of special theory of relativity and quantum mechanics from a variety of sources during the semester, including:
- Assigned textbook readings
- Classroom lectures and discussions
- Frequent computer demonstrations and simulations using Physlet Quantum Physics
- Homework assignments and Blackboard® presentation of solutions
- In-class problem solving exercises
- Computational problems as homework assignments using Physlet Quantum Physics
- In-class tests
- Discussions with me outside of the class
Classroom time will be mostly centered around the discussions, and student participation is required.

· Responsibilities of Students
In order to get the most out of this course:
- Attend each class, arrive on time and come prepared. Read the assigned sections of the textbook before coming to the class to familiarize yourself with notation and topic. Read the relevant section of the textbook with comprehension before attempting to solve homework problems.
- Participate in class by paying close attention to what is presented and offering suggestions or corrections when you think something that is presented is incorrect or confusing.
- Work on and try to complete all homework problems on time.
You are encouraged to discuss problems with your peers but, if at all possible, complete these problems without assistance from anyone else. This way you will truly understand the problem and will be prepared for the exams.
- Read the homework solutions and use the opportunity to improve your homework grade by presenting a correct solution orally.
- Make your work neat and carefully organized. If I can't follow your solution then you will not receive full credit.
- Come talk to me outside of the class frequently. Asking for help or hints with solving problems, or asking for clarification of the lectures or the textbook, demonstrates your interest in the subject.

· Exams
There will be three exams and a final project at the end. Tentative dates, which may be adjusted according to the rate at which the material is being covered, are:
First exam: October 15, Ch 1-4
Second exam: November 12, Ch 5-7
Third exam: Dec 13-19, Ch 8-9
There will be a review exam posted on Blackboard before each exam.

· Homework Assignments
A homework assignment of about 5 to 10 problems will be given each week. These will be due at the beginning of the lecture on the due date listed in the homework schedule. One or two computational problems utilizing Physlet Quantum Physics will be assigned each class period. These will be due the following class period. Assignments will be posted on the homework web page. No late homework will be accepted. You are encouraged to work on the homework problems with other students, but this does not mean distributing the workload or copying. Solving problems is the most important part of the learning process in this course. Students can improve a homework grade, within one week after the homework has been graded and solutions have been posted, by demonstrating an understanding of a correct solution on a whiteboard in my office.

· Final Project: Presentation and Paper
Instead of the traditional comprehensive final exam there will be a final project on a topic of your interest. The final projects should illustrate the experimental applications of the concepts discussed in lecture or a further development of the theoretical ideas introduced. Possible topics are: General Relativity and Black Holes; Scanning Tunneling Microscope; Atomic Force Microscopy; Laser Cooling, Atom Trapping and Experimental Bose-Einstein Condensation; Nuclear Magnetic Resonance; Spectroscopy: Quantum Mechanics in Action; Superconductivity; Quantum Hall Effect; Neutron Stars and Pulsars; Nuclear Reactors; High-Energy Accelerators and Particle Detectors; Quarks; String Theory. If you would like to pick your own topic you must get my approval before starting your project. The projects should utilize literature and web-based research. I expect a 20 min long Power Point presentation. All topics should be finalized and approved by November 1st. You can view Power Point presentations of former students by clicking on the links below: String Theory, Quarks, Moore's Law, Magnetic Resonance Imaging.

· Grades
The course grade will be based upon exams, final project presentation and paper, homework and class participation. There will be no make-up exams. The grade breakdown is as follows:
Homework and Class participation: 30%
Three hourly exams: 50%
Final Project Presentation: 10%, Paper: 10%
The assessment rubric for a final paper or a presentation can be found here.
The grade distribution will be as follows:
· A ≥ 90.1%; 87.1% ≤ A- < 90%
· 83.1% ≤ B+ < 87%; 73.1% ≤ B < 83%; 70.1% ≤ B- < 73%
· 67.1% ≤ C+ < 70%; 63.1% ≤ C < 67%; 60.1% ≤ C- < 63%
· 57.1% ≤ D+ < 60%; 53.1% ≤ D < 57%; 50.1% ≤ D- < 53%
· a numerical grade below 50% is F

· Academic Ethics
All students are bound by the standards of the Academic Honor Code, found at www.goucher.edu/documents/General/AcademicHonorCode.pdf
{"url":"http://meyerhoff.goucher.edu/physics/phys220/220syl-10.htm","timestamp":"2014-04-17T03:48:28Z","content_type":null,"content_length":"34741","record_id":"<urn:uuid:f704607c-2cb1-4ba4-801d-508941ddf6c4>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
Lecture 6. Entropic CLT (3)

In this lecture, we complete the proof of monotonicity of the Fisher information in the CLT, and begin developing the connection with entropy. The entropic CLT will be completed in the next lecture.

Variance drop inequality

In the previous lecture, we proved the following decomposition result for functions of independent random variables due to Hoeffding.

ANOVA decomposition. Let $X_1,\dots,X_n$ be independent random variables and let $f(X_1,\dots,X_n)$ be square-integrable. Then $f$ admits an orthogonal decomposition $f = \sum_{S \subseteq [n]} f_S$, where each $f_S$ depends only on the coordinates in $S$ and $\mathbb{E}[f_S f_T] = 0$ whenever $S \ne T$.

Note that $f_\varnothing = \mathbb{E}f$, so that $\mathrm{Var}(f) = \sum_{S \ne \varnothing} \mathbb{E}[f_S^2]$.

In the previous lecture, we proved subadditivity of the inverse Fisher information: $\frac{1}{J(X_1+\cdots+X_n)} \ge \sum_{i=1}^n \frac{1}{J(X_i)}$.

Lemma. Let $G$ be a collection of subsets of $[n]$ (a hypergraph) and let $\psi(x) = \sum_{s \in G} \psi_s(x_s)$, where each $\psi_s$ depends only on the coordinates $x_s = (x_i)_{i \in s}$. Then $\mathrm{Var}(\psi(X)) \le \sum_{s \in G} \frac{1}{\beta_s} \mathrm{Var}(\psi_s(X_s))$ for any fractional packing $\beta$ of $G$.

1. Recall that a fractional packing is a function $\beta : G \to [0,1]$ such that $\sum_{s \in G : i \in s} \beta_s \le 1$ for every $i \in [n]$.

Example 1. Let $G$ consist of the singletons $\{1\},\dots,\{n\}$; then $\beta_s \equiv 1$ is a fractional packing, and the lemma reduces to additivity of the variance for sums of functions of independent variables.

Example 2. If $G$ consists of all subsets of size $m$, then $\beta_s \equiv 1/\binom{n-1}{m-1}$ is a fractional packing, since each index $i$ belongs to exactly $\binom{n-1}{m-1}$ such subsets.

2. The original paper of Hoeffding (1948) proves the following special case where each $\psi_s$ (with $|s| = m$) satisfies $\psi_s = \psi$ for a single symmetric function $\psi$: the U-statistic $U = \binom{n}{m}^{-1} \sum_{|s|=m} \psi(X_s)$ satisfies $\mathrm{Var}(U) \le \frac{m}{n} \mathrm{Var}(\psi)$. Of course, if $m = n$ the bound is trivial.

Proof. We may assume without loss of generality that each $\mathbb{E}\psi_s = 0$. We then have, using orthogonality of the terms in the ANOVA decomposition:

$\mathrm{Var}(\psi) = \mathbb{E}\Big(\sum_{s \in G} \sum_{T \subseteq s} (\psi_s)_T\Big)^2 = \sum_{T \ne \varnothing} \mathbb{E}\Big(\sum_{s \in G : s \supseteq T} (\psi_s)_T\Big)^2.$

For each term, we have by the Cauchy–Schwarz inequality

$\mathbb{E}\Big(\sum_{s \supseteq T} (\psi_s)_T\Big)^2 \le \Big(\sum_{s \supseteq T} \beta_s\Big) \sum_{s \supseteq T} \frac{1}{\beta_s}\, \mathbb{E}[(\psi_s)_T^2] \le \sum_{s \supseteq T} \frac{1}{\beta_s}\, \mathbb{E}[(\psi_s)_T^2],$

where the second inequality follows from the definition of fractional packing if $T \ne \varnothing$ (pick any $i \in T$; every $s \supseteq T$ contains $i$). Summing over $T$ and again using orthogonality of the ANOVA terms completes the proof.

Monotonicity of Fisher information

We can now finally prove monotonicity of the Fisher information.

Corollary. Let $X_1,\dots,X_n$ be independent random variables with sufficiently smooth densities. Then

$J(X_1+\cdots+X_n) \le \sum_{s \in G} \frac{w_s^2}{\beta_s}\, J\Big(\sum_{i \in s} X_i\Big)$

for any hypergraph $G$ on $[n]$, any fractional packing $\beta$ of $G$, and any weights $w_s \ge 0$ with $\sum_{s \in G} w_s = 1$.

Proof. Recall that the score of a sum is a conditional expectation of the score of any partial sum: writing $\rho_s$ for the score of $\sum_{i \in s} X_i$, we have $\rho\big(\textstyle\sum_i X_i\big) = \mathbb{E}\big[\sum_{s \in G} w_s\, \rho_s\big(\sum_{i \in s} X_i\big) \,\big|\, \sum_i X_i\big]$. Now by using the Pythagorean inequality (or Jensen's inequality) and the variance drop lemma, we have

$J(X_1+\cdots+X_n) \le \mathbb{E}\Big(\sum_{s \in G} w_s \rho_s\Big)^2 \le \sum_{s \in G} \frac{w_s^2}{\beta_s}\, J\Big(\sum_{i \in s} X_i\Big),$

as desired.

Taking $G$ to be the leave-one-out hypergraph $\{[n] \setminus \{i\} : i \in [n]\}$ with $\beta_s = \frac{1}{n-1}$ and $w_s = \frac{1}{n}$ yields, for i.i.d. summands, $J\big(\frac{X_1+\cdots+X_n}{\sqrt n}\big) \le J\big(\frac{X_1+\cdots+X_{n-1}}{\sqrt{n-1}}\big)$. Thus we have proved the monotonicity of Fisher information in the central limit theorem.

From Fisher information to entropy

Having proved monotonicity for the CLT written in terms of Fisher information, we now want to show the analogous statement for entropy. The key tool here is the de Bruijn identity. To formulate this identity, let us introduce some basic quantities. Let $X$ be a random variable with smooth density $f$; its entropy and Fisher information are

$h(X) = -\int f \log f, \qquad J(X) = \int \frac{(f')^2}{f} = \mathbb{E}[\rho(X)^2], \qquad \rho = (\log f)'.$

Observe that if $Z$ is a standard Gaussian independent of $X$, then for every $t > 0$ the density $f_t$ of $X + \sqrt{t}\, Z$ is smooth and strictly positive, and satisfies the heat equation $\partial_t f_t = \frac{1}{2} \partial_{xx} f_t$.

Remark. Let us recall some standard facts from the theory of diffusions. The Ornstein-Uhlenbeck process has a generator $L = \Delta - x \cdot \nabla$, and its semigroup gives an alternative interpolation toward the Gaussian; for the identity below, however, the heat semigroup $X \mapsto X + \sqrt{t}\, Z$ is all we need.

We can now formulate the key identity.

de Bruijn identity. Let $X$ be a random variable with finite variance, and let $Z$ be an independent standard Gaussian; for the integral form, normalize $\mathrm{Var}(X) = 1$.

1. Differential form: $\frac{d}{dt}\, h(X + \sqrt{t}\, Z) = \frac{1}{2}\, J(X + \sqrt{t}\, Z)$ for every $t > 0$.

2. Integral form: $h(X) = \frac{1}{2}\log(2\pi e) - \frac{1}{2}\int_0^\infty \Big( J(X + \sqrt{t}\, Z) - \frac{1}{1+t} \Big)\, dt$.

The differential form follows by using the last part of the claim together with integration by parts:

$\frac{d}{dt}\, h_t = -\int \partial_t f_t\, (1 + \log f_t) = -\frac{1}{2}\int \partial_{xx} f_t\, \log f_t = \frac{1}{2}\int \frac{(\partial_x f_t)^2}{f_t} = \frac{1}{2} J_t.$

The integral form follows from the differential form by the fundamental theorem of calculus, which yields the desired identity since $h(X + \sqrt{t}\, Z) - \frac{1}{2}\log\big(2\pi e (1+t)\big) \to 0$ as $t \to \infty$. This gives us the desired link between Fisher information and entropy. In the next lecture, we will use this to complete the proof of the entropic central limit theorem.

Lecture by Mokshay Madiman | Scribed by Georgina Hall
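As a quick sanity check (not part of the original notes), the de Bruijn identity can be verified symbolically in the Gaussian case, where both sides are explicit; the sketch below uses sympy.

```python
import sympy as sp

t, sigma = sp.symbols('t sigma', positive=True)

# X ~ N(0, sigma^2) and Z ~ N(0, 1) independent, so X + sqrt(t) Z ~ N(0, sigma^2 + t).
h = sp.log(2 * sp.pi * sp.E * (sigma**2 + t)) / 2   # entropy of X + sqrt(t) Z
J = 1 / (sigma**2 + t)                              # its Fisher information

print(sp.simplify(sp.diff(h, t) - J / 2))           # 0, i.e. dh/dt = J/2
```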
{"url":"http://blogs.princeton.edu/sas/2013/11/13/lecture-6-entropic-clt-3/","timestamp":"2014-04-18T13:13:50Z","content_type":null,"content_length":"83931","record_id":"<urn:uuid:7ed78c50-25dd-4cad-8604-65a5efda4c1e>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
Schultzville, PA Math Tutor
Find a Schultzville, PA Math Tutor

I have taught Mathematics in Latvia for 20 years: elementary, middle and high school. Math is not easy for all students. I understand the special-needs student, as I have also tutored for many years.
6 Subjects: including algebra 2, geometry, trigonometry, algebra 1

I enjoy teaching students of all ages. I have been a preschool teacher and an elementary school teacher. Currently, I am employed by the East Penn School District as an instructional assistant in Algebra 1 classes.
3 Subjects: including algebra 1, prealgebra, elementary (k-6th)

...I also completed four years of Spanish while in high school. I diligently studied English for many years, and I believe I have well-developed writing skills. I currently tutor five students at my college in calculus, chemistry and microeconomics.
14 Subjects: including algebra 1, algebra 2, calculus, geometry

...Middle school and early high school are the ages when most children develop crazy ideas about their abilities regarding math. It upsets me when I hear students say, 'I'm just not good in math!' Comments like that typically mean that a math teacher along the way wasn't able to present the materi...
9 Subjects: including geometry, Microsoft Outlook, algebra 1, algebra 2

...I have experience with after-school tutoring from 2003-2006. I was an Enon Tabernacle after-school ministry tutor for elementary and high school students, 2011-2012. These are just a few...
13 Subjects: including algebra 2, trigonometry, psychology, biochemistry
{"url":"http://www.purplemath.com/Schultzville_PA_Math_tutors.php","timestamp":"2014-04-19T17:26:17Z","content_type":null,"content_length":"23781","record_id":"<urn:uuid:80ec1249-95f6-4844-b2ce-668ee897dd62>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
Perlin Noise and Turbulence
Written by Paul Bourke
January 2000

It is not uncommon in computer graphics and modelling to want to use a random function to make imagery or geometry appear more natural looking. The real world is not perfectly smooth, nor does it move or change in regular ways. The random function found in the maths libraries of most programming languages isn't always suitable; the main reason is that it results in a discontinuous function. The problem then is to dream up a random/noisy function that changes "smoothly". It turns out that there are other desirable characteristics, such as it being defined everywhere (both at large scales and at very small scales) and it being band limited, at least in a controllable way.

The most famous practical solution to this problem came from Ken Perlin back in the 1980's. His techniques have found their way in one form or another into many rendering packages, both free and commercial; they have also found their way into hardware such as the MMX chip set. At the bottom of this document I've included the original (almost) version of the C code as released by Ken Perlin and on which most of the examples in the document are based.

The basic idea is to create a seeded random number series and smoothly interpolate between the terms in the series, filling in the gaps if you like. There are a number of ways of doing this and the details of the particular method used by Perlin need not be discussed here. Perlin noise can be defined in any dimension; the most common dimensions are 1 to 4. The first 3 of these will be illustrated and discussed below.

1 Dimensional

A common technique is to create 1/f^n noise which is known to occur often in natural processes. An approximation to this is to add suitably scaled harmonics of this basic noise function. For the rest of this discussion the Perlin noise functions will be referred to as Noise(x) of a variable x which may be a vector in 1, 2, 3 or higher dimensions. This function will return a real (scalar) value. A harmonic will be Noise(b x) where "b" is some positive number greater than 1, most commonly a power of 2.

While the Noise() functions can be used by themselves, a more common approach is to create a weighted sum of a number of harmonics of these functions. These will be referred to as NOISE(x) and can be defined as

   NOISE(x) = sum from i = 0 to N-1 of Noise(b^i x) / a^i

where N is typically between 6 and 10. The parameter "a" controls how rough the final NOISE() function will be. Small values of "a", eg: 1, give very rough functions; larger values give smoother functions. While this is the standard form, in practice it isn't uncommon for the terms a^i and b^i to be replaced by arbitrary values for each i.

The following shows increasing harmonics of 1 dimensional Perlin noise along with the sum of the first 8 harmonics at the bottom. In this case a and b are both equal to 2. Note that since in practice we only ever add a limited number of harmonics, if we zoom into this function sufficiently it will become smooth.

2 Dimensional

The following shows the same progression but in two dimensions. This is also a good example of why one doesn't have to sum to large values of N: after the 4th harmonic the values are less than the resolution of a grey scale image, both in terms of spatial resolution and the resolution of 8 bit grey scale.

(Figure: harmonics 0 to 4, at frequencies 1, 2, 4, 8 and 16, followed by their sum.)

As earlier, a and b are both set to 2, but note that there are an infinite number of ways these harmonics could be added together to create different effects. A minimal sketch of the octave sum defined above follows.
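The sketch below is in Python rather than Perlin's C, and the base function noise1 is a simple cosine-interpolated value noise standing in for a true gradient Noise(); the point is only the harmonic sum.

```python
import math, random

# Hypothetical smooth base noise: cosine interpolation between seeded
# random lattice values, a stand-in for Perlin's Noise().
random.seed(1)
_lattice = [random.uniform(-1.0, 1.0) for _ in range(4096)]

def noise1(x):
    i = math.floor(x)
    f = x - i                                   # fractional position in the cell
    t = (1 - math.cos(f * math.pi)) / 2         # smooth blend factor in [0, 1]
    a, b = _lattice[i % 4096], _lattice[(i + 1) % 4096]
    return a * (1 - t) + b * t

def fractal_noise(x, a=2.0, b=2.0, octaves=8):
    """NOISE(x): sum of harmonics Noise(b**i * x) scaled by 1/a**i."""
    return sum(noise1(b**i * x) / a**i for i in range(octaves))
```

Setting a = 1 keeps every harmonic at full amplitude and gives the rough curves shown above; a = 2 halves each successive harmonic and gives the smoother 1/f-style sum.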
Returning to the 2D example: if one wanted more of the second harmonic, its scaling can simply be increased. While the design of a particular image/texture isn't difficult, it does take some practice to become proficient.

3 Dimensional

Perlin noise can be created in 3D and higher dimensions; unfortunately it is harder to visualise the result in the same way as the earlier dimensions. One isosurface of the first two harmonics is shown below, but it hardly tells the whole story since each point in space has a value assigned to it. Perhaps the most common use of 3D Perlin noise is generating volumetric textures, that is, textures that can be evaluated at any point in space instead of just on the surface. There are a number of reasons why this is desirable.

• It means that the texture need not be created beforehand but can be computed on the fly.
• There are a number of ways the texture can be animated. One way is to translate the 3 dimensional point passed to the Noise() function; alternatively one can rotate the points. Since the 3D texture is defined everywhere in 3D space, this is equivalent to translating or rotating the texture volume.
• The exact appearance of the texture can be controlled by either varying the relative scaling of the harmonics or by adjusting how the scalar from the Perlin functions is mapped to colour and/or transparency.
• It gets around the problem of mapping rectangular texture images onto topologically different surfaces.

For example, the following texture for a sun was created using 3D noise, evaluating the points using the same mapping as will be used when the texture is mapped onto a sphere. The result is that the texture will map without pinching at the poles and there will not be any seams on the left and right.

Source code

The original C code by Ken Perlin is given here: perlin.h and perlin.c.

The above describes a way of creating a noisy but continuous function. In normal operation one passes a vector in some dimension and the function returns a scalar. How this scalar is used is the creative part of the process. Often it can be used directly, for example, to move the limbs of a virtual character so they aren't rigid looking. Or it might be used directly as the transparency function for clouds. Another fairly common application is to use the 1D noise to perturb lines so they look more natural, or to use the 2D noise as the height for terrain models. 2D and 3D Perlin noise are often used to create clouds; a hint of this can be seen in the sum of the 2D Noise() functions above. When the aim is to create a texture, the scalar is used as an index into a colour map; this may be either a continuous function or a lookup table. Creating the colour map to achieve the result being sought is a matter of skill and experience. Another approach is to use NOISE() functions as arguments to other mathematical functions; for example, marble effects are often made using cos(x + NOISE(x,y,z)) and mapping that to the desired marble colours. In order to effectively map the values returned from the noise functions one needs to know the range of values and the distribution of values returned. The original functions, and therefore the ones presented here, both have Gaussian-like distributions centered on the origin. Noise() returns values between about -0.7 and 0.7 while NOISE() returns values potentially between -1 and 1. The two distributions are shown below. The possibilities are endless, enjoy experimenting.

References

Ken Perlin, "An Image Synthesizer", Computer Graphics, 1985, 19 (3), pp 287-296.
Donald Hearn and M. Pauline Baker, "Fractal-Geometry Methods", Computer Graphics, C-Version, 1997, pp 362-378.
Perlin, K., "Live Paint: Painting with Procedural Multiscale Textures", Computer Graphics, Volume 28, Number 3.
David Ebert, et al (chapter by Ken Perlin), "Texturing and Modeling, A Procedural Approach", AP Professional, Cambridge, 1994.
Perlin, K. and Hoffert, E., Computer Graphics (proceedings of ACM SIGGRAPH Conference), 1989, Vol. 22, No. 3.
{"url":"http://paulbourke.net/texture_colour/perlin/","timestamp":"2014-04-18T02:58:57Z","content_type":null,"content_length":"10763","record_id":"<urn:uuid:20d53d4d-aa46-4b1b-9ed8-4a6ba6523068>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: How does gauge symmetry work?
Date: Mon Jul 2 14:13:14 2001
Posted By: Michael Wohlgenannt, Grad student, Ph.D. student, Department of Theoretical Physics, University of Munich
Area of science: Physics
ID: 992822192.Ph

Message:

Hi Gashib, that is a good question. The concept of gauge symmetry is a fundamental one in the theory of elementary particles (maybe even in all theoretical physics). In an astonishing way, the gauge principle introduces interaction into a theory of otherwise free particles. By the gauge principle I mean that physics does not depend on which "coordinate system" one chooses. Different people may conduct the same experiment at different places choosing different "coordinate systems"; nevertheless the result of the experiment has to be the same. So you can fix any coordinate system (i.e., choose a gauge).

The first to introduce a gauge theory was Maxwell, that is, classical electromagnetism. The vector potential A_m = (phi, vec{A}) is the fundamental quantity of this theory. phi is the electrostatic potential, vec{A} is the 3 dimensional vector potential. The electric field E and the magnetic field B are unaltered under the transformation

   A_m ---> A_m + 1/e d_m P

where d_m is the partial derivative with respect to the m-th coordinate, P is some real function, and m = 0,1,2,3. This transformation does not change the electromagnetic field strength, i.e. it changes neither the magnetic nor the electric field. This is called a gauge transformation. A_m can only be determined up to a function P, which does not influence physics. So one can fix P, i.e. choose a gauge. Two common ways of fixing the gauge are the Lorentz gauge and the Coulomb gauge.

The Lagrange function for electromagnetism is

   L = -1/4 F_mn F^mn,   F_mn = d_m A_n - d_n A_m.

Variation with respect to A will lead to Maxwell's equations. Now we want to introduce also electrons. Free electrons (psi) are given by the Lagrangian

   L_e = barpsi (i \gamma^m d_m - m) psi,

where barpsi means the conjugate of psi, \gamma are the usual gamma matrices, and m is the mass. Variation of this equation will lead to the equation of motion for free electrons, the Dirac equation. For the total action we have to add L and L_e. So far we have introduced no interaction! Further, we see that the function L_e is invariant if we replace psi ---> e^(iP) psi, if P is constant. But P did not have to be constant in the above considerations. It does not matter what we choose for P. P can be chosen differently in Rome and in Paris, and experiments performed in Rome and in Paris yield the same result, independent of P. So we demand that L + L_e has to be invariant under the transformation

   A_m(x) ---> A_m(x) + 1/e d_m P(x),   psi(x) ---> e^(iP(x)) psi(x).

This transformation is called a local gauge transformation, since the gauge parameter depends on x. But we have seen that L + L_e is not invariant under the above transformations! Therefore it cannot be the right Lagrange function. If we want to render L + L_e invariant we have to add terms. We have to replace the derivative of the electron field, d_m psi, by the covariant derivative

   D_m psi = (d_m - i e A_m) psi

(the relative sign between the two terms is fixed by the form of the gauge transformation above), where e is the electric charge (coupling parameter). This is called the minimal coupling of the gauge field A. The resulting Lagrange function is invariant under this local gauge transformation.

Properly, we have to start at L_e, a theory of free electrons. We demand that the theory is invariant under a local gauge transformation psi --> e^{iP(x)} psi. Then we have to replace derivatives d_m by D_m, covariant derivatives.
That means we have to introduce the gauge field A. Finally we add a kinetic term for the gauge field, L. We start at a free theory, and invariance of physics under a certain (local) symmetry introduces gauge fields and interactions.

The difference between electromagnetism and (classical) QCD is in the gauge transformation. In the case of electromagnetism we had P(x) a function; therefore the e^{iP(x)} build a group called U(1): 1 because there is only one parameter, U because of unitarity, (e^{iP(x)})* e^{iP(x)} = 1. In the case of QCD, P(x) is a matrix-valued function: P(x) = sum_i P_i(x) T^i, where the sum is over i = 1,2,3,4,5,6,7,8. The T^i are Hermitian, traceless 3x3 matrices which are said to generate SU(3). They build, so to say, a basis of the Lie algebra of SU(3). It further means that e^{iP(x)} is a 3x3 matrix and psi has to be a 3-vector, so that we can multiply e^{iP(x)} with psi and compute the transformation psi --> e^{iP(x)}.psi

Now we return to your question about the colours. The components of psi are called colours because they are not components in space-time but in an internal space. The gauge transformation above is merely a change of coordinates in this internal space, in colour space. psi has 3 components, and there are 3 primary colours: red, blue and green. We fix a coordinate system and call the components of psi red, blue and green. The analogy is that you can compose any colour using the colours red, blue and green. But you can also use 3 different colours as a basis, e.g. yellow, indigo and violet. This corresponds to a coordinate transformation in the internal space, a gauge transformation in colour space.

I hope I could help you in understanding gauge interactions. Otherwise, do not hesitate to ask further questions.
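None of this is in the original answer, but the covariance claim is easy to check mechanically. The toy script below uses sympy in one dimension, with the sign convention D = d/dx - ieA fixed above; P, psi and A are arbitrary placeholder functions.

```python
import sympy as sp

x = sp.symbols('x')
e = sp.symbols('e', positive=True)
P = sp.Function('P')(x)       # local gauge parameter
psi = sp.Function('psi')(x)   # matter field (1D toy model)
A = sp.Function('A')(x)       # gauge potential

# Covariant derivative D = d/dx - i e A
D = lambda f, Af: sp.diff(f, x) - sp.I * e * Af * f

psi_new = sp.exp(sp.I * P) * psi          # psi -> e^{iP} psi
A_new = A + sp.diff(P, x) / e             # A -> A + (1/e) dP/dx

# Covariance: D'(psi') must equal e^{iP} D(psi)
print(sp.simplify(D(psi_new, A_new) - sp.exp(sp.I * P) * D(psi, A)))  # 0
```

With the opposite sign, D = d/dx + ieA, the leftover i(dP/dx) terms add instead of cancel, which is exactly why the relative sign matters.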
{"url":"http://www.madsci.org/posts/archives/2001-07/994165600.Ph.r.html","timestamp":"2014-04-17T06:46:09Z","content_type":null,"content_length":"9891","record_id":"<urn:uuid:0a0b407e-1425-44b3-a322-92b2f985aaab>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
Audubon Park, NJ Trigonometry Tutor Find an Audubon Park, NJ Trigonometry Tutor ...Not only do I have hundreds of hours of tutoring experience, but I am also certified in both English AND Math (a very rare qualification). I hold Pennsylvania Level II teaching certifications in English 7-12, Math 7-12, and Health K-12. Additionally, I was two classes away from getting a B.S. in general science. Standardized test prep is a huge industry. 47 Subjects: including trigonometry, chemistry, English, reading ...I am a world-renowned expert in the Maple computer algebra system, which is used in many math, science, and engineering courses. My tutoring is guaranteed: During our first session, I will assess your situation and determine a grade that I think you can get with regular tutoring. If you don't get that grade, I will refund your money, minus any commission I paid to this website. 11 Subjects: including trigonometry, calculus, statistics, precalculus ...This gave me the opportunity to tutor students in a variety of math subjects, including Differential Equations. I have a bachelor's degree in secondary math education. During my time in college, I took one 3-credit course in Linear Algebra. 11 Subjects: including trigonometry, calculus, geometry, algebra 1 ...Additionally, it makes you think more logically and efficiently! It's my favorite subject. A course in Pre-Algebra reinforces mathematical skills taught in the younger grades, with additional advanced computation including an emphasis on Algebraic concepts. 12 Subjects: including trigonometry, geometry, algebra 1, ASVAB ...As an artist, I also take photographs for art and creative purposes. I can teach the mechanical operation of cameras, both digital and film, from the simple point-and-click to the DSLR. The artistic ideas behind photography spread across the other artistic disciplines. 19 Subjects: including trigonometry, calculus, geometry, algebra 2
{"url":"http://www.purplemath.com/Audubon_Park_NJ_Trigonometry_tutors.php","timestamp":"2014-04-18T04:10:55Z","content_type":null,"content_length":"24712","record_id":"<urn:uuid:b702dd0a-6e8f-438a-9681-795ba4cd0f3f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: The Center for Control, Dynamical Systems, and Computation
University of California at Santa Barbara
Winter 2007 Seminar Series

The Role of Broken Extremals in Nonholonomic
Roger Brockett
Harvard University

Friday, February 23rd, 2007
3:00pm-4:00pm
1001 LSB Rathmann Auditorium

We will discuss a problem which has a deceptively simple description. We want to transfer a nonsingular matrix $X(0)$ to a second nonsingular matrix $X(1)$ under the assumption that the matrix evolves according to $\dot{X}=UX$ with $U(t)=U^T(t)$. This system is controllable on the space of nonsingular matrices with positive determinant, and the first order necessary conditions associated with minimizing $$\eta = \int_0^T \|U(t)\| \; dt$$ subject to the condition that $U$ should steer the system from $X(0)=X_0$ to $X(1)=X_1$ imply that $U$ should take the form $$U(t) = e^{\Omega t}He^{-\Omega t}$$ with $\Omega = -\Omega^T$ and $H=H^T$, both constant. What makes this problem interesting, however, is the abundance of conjugate points, the non-uniqueness of the solutions of the first order necessary conditions and the possible necessity for broken extremals.
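As a quick illustration (mine, not the speaker's), one can integrate the matrix equation numerically for a control of the extremal form and confirm that U(t) stays symmetric and X(t) stays nonsingular; scipy is assumed available, and Omega and H are random choices.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n = 3
M = rng.standard_normal((n, n))
Omega = M - M.T            # antisymmetric, so expm(Omega*t) is orthogonal
H = M + M.T                # symmetric

def U(t):
    # Extremal form U(t) = e^{Omega t} H e^{-Omega t}
    E = expm(Omega * t)
    return E @ H @ E.T

def rhs(t, x):
    return (U(t) @ x.reshape(n, n)).ravel()   # Xdot = U X

sol = solve_ivp(rhs, (0.0, 1.0), np.eye(n).ravel(), rtol=1e-9)
X1 = sol.y[:, -1].reshape(n, n)
print(np.linalg.det(X1) > 0)             # True: X(t) stays in GL+(n)
print(np.allclose(U(0.7), U(0.7).T))     # True: U(t) is symmetric
```

The determinant check reflects det X(t) = exp(integral of tr U), which is always positive, matching the claim that the flow stays on matrices with positive determinant.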
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/368/4180850.html","timestamp":"2014-04-20T23:42:21Z","content_type":null,"content_length":"8331","record_id":"<urn:uuid:db85d79c-3c34-4a19-bede-11a97cff3083>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
Discussing with a non-statistician colleague, it seems that the logistic regression is not intuitive; some basic questions like: - Why not use the linear model? - What's the logistic function? - How can we compute by hand, step by step t...

Creating a QGIS-Style (qml-file) with an R-Script
How to get from a txt-file with short names and labels to a QGIS-Style (qml-file)? I used the below R-script to create a style for this legend table where I copy-pasted the parts I needed to a txt-file, like for the WRB-FULL (WRB-FULL: Full soil code o...

R/Finance 2013 Registration Open
The registration for R/Finance 2013 -- which will take place May 17 and 18 in Chicago -- is NOW OPEN! Building on the success of the previous conferences in 2009, 2010, 2011 and 2012, we expect more than 250 attendees from around the world. R users from...

R / Finance 2013 Open for Registration
The announcement below just went to the R-SIG-Finance list. More information is as usual at the R / Finance page: Now open for registrations: R / Finance 2013: Applied Finance with R, May 17 and 18, 2013, Chicago, IL, USA. The registration for R/Fin...

Veterinary Epidemiologic Research: GLM (part 4) – Exact and Conditional Logistic Regressions
Next topic on logistic regression: the exact and the conditional logistic regressions. Exact logistic regression: when the dataset is very small or severely unbalanced, maximum likelihood estimates of coefficients may be biased. An alternative is to use exact logistic regression, available in R with the elrm package. Its syntax is based on an events/trials formulation.

Veterinary Epidemiologic Research: GLM – Evaluating Logistic Regression Models (part 3)
Third part on logistic regression (first here, second here). Two steps in assessing the fit of the model: first is to determine if the model fits using summary measures of goodness of fit or by assessing the predictive ability of the model; second is to determine if there are any observations that do not fit the...

The evolution of EU legislation (graphed with ggplot2 and R)
During the last half century the European Union has adopted more than 100 000 pieces of legislation. In this presentation I look into the patterns of legislative adoption over time. I tried to create clear and engaging graphs that provide … Continue reading →

Veterinary Epidemiologic Research: GLM – Logistic Regression (part 2)
Second part on logistic regression (first one here). We used in the previous post a likelihood ratio test to compare a full and null model. The same can be done to compare a full and nested model to test the contribution of any subset of parameters: Interpretation of coefficients. Note: Dohoo do not report the...

Veterinary Epidemiologic Research: GLM – Logistic Regression
We continue to explore the book Veterinary Epidemiologic Research and today we'll have a look at generalized linear models (GLM), specifically the logistic regression (chapter 16). In veterinary epidemiology, often the outcome is dichotomous (yes/no), representing the presence or absence of disease or mortality.
We code 1 for the presence of the outcome and 0...

Stop Sign Project Post1: Some GIS stuff done in R
{"url":"http://www.r-bloggers.com/search/GIS/page/3/","timestamp":"2014-04-18T05:39:27Z","content_type":null,"content_length":"38980","record_id":"<urn:uuid:c713067e-55a2-4c09-bd93-3411feb761bd>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
Audiophilia: hobby or disease? (CONTINUED)

The Quake III Machine
C2D e6750 @ Gigabyte GA-P45T-ES3G | 8GB GSkill | Venemous X | Asus 560Ti | Corsair 520HX | 750GB Seagate | Antec 300 | Thanks SPCR

StarfishChris wrote: For the former it's as much a hobby as silencing for most of us. But with the latter, I consider it a fever - and the only cure is more cowbell. Er, I mean, concerts, and I expect they will have more 'bang for the buck' than $64,000 speakers. It's much better to listen to music how it's meant to be heard!
(Unfortunately cowbells don't feature in classical music - they should've replaced the viola, who needs that, anyway...) temporary link

sthayashi wrote: I get carried away with how the drummer's snare sounds or how clever a bass line was played, or how tight the band was.
My Power Rig, Storage Rig, HTPC and Main Rig

James Randi has extended his million dollar challenge to the Shakti stones.

yeha wrote: ah well, back to my luddite low-end-equipment existence.

http://www.pcavtech.com/abx/abx_wire.htm wrote:
PSACS ABX Test Results: Interconnects and Speaker Wires

  Comparison                      Result          Correct          p <   Listeners
  $2.50 cable vs. PSACS Best      NO DIFFERENCE   70/139 = 50%     -     7
  $418 Type "T1" vs. Zip Cord     NO DIFFERENCE   4/10  = 40%      -     1
  Type "Z" Cable vs. Zip Cord     NO DIFFERENCE   70/139 = 50%     -     7
  $990 "T2" Cable vs. Zip Cord    NO DIFFERENCE   16/32 = 50%      -     2

In the first test, five specialty interconnects from AudioQuest, MIT, Monster Cable, H.E.A.R., plus Belden cable with Vampire connectors were compared to a $2.50 blister pack RCA phono interconnect. Listeners used Etymotic Research ER4 in-ear phones driven by the headphone jack of a Bryston 2B power amplifier. The next three tests are the data from Tom Nousaine's "Wired Wisdom: The Great Chicago Cable Caper", listed on the ABX periodicals page. The Type "T1" cables were compared on a system including a Sumo Andromeda power amp and JS Engineering Infinite Slope speakers, by the system's owner. He chose his own program material and had no time limit. The Type "Z" cables were tested on the system of a high end audio shop employee including: Snell type B-Minor speakers; Forte Model 6 Power Amp; and an outboard DAC. He used his own program material selected to show the differences he expected. The Type "T2" cables were compared on a system including a Denon DCD-1290 CD player, an Accuphase P-300 power amplifier, and Snell KII speakers.

mathias wrote:
Bobdog wrote:
mathias wrote: I think your post demonstrates the reasons for hostility towards high end audio very well. Especially since you claim to be of the relatively sane type of audiophile.
Why exactly has my post "demonstrat[ed] the reasons for hostility towards high end audio"?
What it demonstrates is how obsessive a lot of audiophiles can get. You even admitted that you noticed that you were becoming completely fanatical.

BobDog wrote:
mathias wrote: I think your post demonstrates the reasons for hostility towards high end audio very well. Especially since you claim to be of the relatively sane type of audiophile.
Why exactly has my post "demonstrat[ed] the reasons for hostility towards high end audio"?
Edward Ng wrote:
Green Shoes wrote: Bias or Discrimination, on the other hand, is the act of judging a single person based on the actions of other similar people, which is what this thread is possibly rampant with.
?!? Where's your doubt coming from?!?

rbrodka wrote: Do people make irrational buying decisions to make them feel better about themselves? And if the things they buy do make them feel better, is it an irrational decision?

StarfishChris: check this out. It's coming back.
rbrodka wrote: Perhaps it comes down to people with low self-esteem buying these things to compensate.
(Unfortunately cowbells don't feature in classical music - they should've replaced the viola, who needs that, anyway...)

Whether it is a huge house, fancy automobile, big power boat, extensive wardrobe, hundreds of pairs of shoes, bleeding edge computer parts, or high-end audio equipment, don't many people buy more than they need or use? The guy driving the huge truck feels powerful and manly driving it – it makes him feel good about himself. He may never go off-road with it, and may never haul anything heavier than a few bags of groceries, but that truck makes him feel good. My wife probably will never wear the hundreds of pairs of shoes she owns, but they make her feel good. She loved buying them, and I think she feels a sense of power or accomplishment just having them. Those shoes, in a way, give her a sense of worth in the competitive world of womanhood.

Audiophiles and computer enthusiasts both revel in their technical expertise. Computer over-clockers will labor hours to get that last 100 MHz out of their CPUs, and audiophiles will invest countless hours to get the very best audio equipment. While these activities may seem irrational to most, these technical achievements make the participants feel good about themselves and prove their worth and elite'ness in this world.

Perhaps it comes down to this: people with low self-esteem buy these things to compensate. If I don't feel important, maybe I can buy elite equipment/whatever and I will feel equal to the truly elite. Of course, buying things does not really improve self-esteem, and for some, when the temporary "feel-good" of buying a high-end item wears off, they have to buy an even higher-end item for that "elite" feeling.

Are audiophiles especially susceptible to this phenomenon? I don't know, perhaps they are. The audiophile world seems to be very rich in expensive products with wild but dubious performance claims. These products exist in the market because people buy them, people who are willing to suspend disbelief to get products that make their audio systems better than what others have. There are many very fine audiophile products, and there are many audiophiles that appreciate them. However, the audio equipment market seems to be unique in the number of products that appear to be nothing more than "scams" fostered on a segment that is susceptible to being taken.

So, what are you really buying…? And if it makes you feel good about yourself, isn't that good enough?

Green Shoes wrote: Bias or Discrimination, on the other hand, is the act of judging a single person based on the actions of other similar people, which is what this thread is possibly rampant with.
(1) I have GIVEN UP on hi-end audio, I just don't think it's worth it any more for me--at this time at least, so (2) I decided to give HTPC a try, for fun, and (3) I found SPCR a great resource for my NEW hobby, but (4) I have spent all my time here defending/talking about hi-end audio.

Maybe instead of flaming my audio posts (a topic on which, I am pretty sure, I know more than most on this thread), you could look over my HTPC specs, detailed a few posts ago (a topic on which I know little and am a total newbie compared to everyone else here), and give me some advice? Most of the hardware is already bought, but I am so very, very open to suggestions anyway!

Edward Ng wrote: And I am still quite convinced that there are plenty of people (particularly included in this thread) that would be perfectly happy to lump me in with the lunatics. I consider generalization to be just that, generalization. You either do it, or you don't. It's plenty clear to me who does. Especially since not one of them has acknowledged my point.

Contributing Writer, SPCR
Want something reviewed? Help us get samples! Donate for Patron or Friend Status!

BobDog wrote:
Edward Ng wrote: But still, I like my sound, and I like it to be very good; I won't lie, my sound system cost me around $4-6K, but you know what? I'm totally satisfied with it, and see no reason to spend any more on it.
Then do not spend a dollar more.

Bobdog wrote: I said I wouldn't reply to yeha anymore, but I cannot resist. This is exactly the sort of stupid-, oops, I mean "pseudo-" science I was talking about before.

PSACS ABX Test Results: Interconnects and Speaker Wires

Comparison                      Result          Correct           p less than   Listeners
$2.50 cable vs. PSACS Best      NO DIFFERENCE   70 / 139 = 50%    -             7
$418 Type "T1" vs. Zip Cord     NO DIFFERENCE   4 / 10 = 40%      -             1
Type "Z" Cable vs. Zip Cord     NO DIFFERENCE   70 / 139 = 50%    -             7
$990 "T2" Cable vs. Zip Cord    NO DIFFERENCE   16 / 32 = 50%     -             2

In the first test, five specialty interconnects from AudioQuest, MIT, Monster Cable, H.E.A.R., plus Belden cable with Vampire connectors, were compared to a $2.50 blister-pack RCA phono interconnect. Listeners used Etymotic Research ER4 in-ear phones driven by the headphone jack of a Bryston 2B power amplifier. The next three tests are the data from Tom Nousaine's "Wired Wisdom: The Great Chicago Cable Caper", listed on the ABX periodicals page. The Type "T1" cables were compared on a system including a Sumo Andromeda power amp and JS Engineering Infinite Slope speakers, by the system's owner. He chose his own program material and had no time limit. The Type "Z" cables were tested on the system of a high-end audio shop employee, including Snell Type B-Minor speakers, a Forte Model 6 power amp, and an outboard DAC. He used his own program material, selected to show the differences he expected. The Type "T2" cables were compared on a system including a Denon DCD-1290 CD player, an Accuphase P-300 power amplifier, and Snell KII speakers.

Here is a set of data from the link suggested by yeha; this looks very scientific and conclusive. It is about cables (the most controversial topic here), and it consistently shows that a group of listeners cannot tell the difference between hi-end cables and Brand-X cables. Wow, quite a finding, no?

Oops, clicked submit before I meant to... ... I continue. Let's look at these data.
The p-value is the percent chance that results leading us to reject the null hypothesis (the cables are not audibly different) are due to random chance (e.g. sampling error); smaller is better. Although they give a "-" for the p value, there is some value there, it is just very large -- meaning we cannot reject the null with any level of statistical significance.

This is where the problems begin. First of all, ABX massively and incorrectly inflates their "n" (number of listeners) in the "$2.50 cable vs. PSACS Best" test by aggregating all of their test results (note there are only 7 listeners but 139 "tests"; at first I thought this was just a typo until I figured out what they were up to -- I was thinking, "how do you get 139 observations from 7 people???"). You can, of course, add up observations and test for significance, but ONLY if the observations are independent. By using the same 7 testers in each test, they have clearly violated independence. They really have several tests with 7 observations in each.

Well, you ask, what does this matter? Well, as I am sure Mr. Science, yeha, would tell you, significance depends upon the number of individuals tested. This is easy to see when we note that the standard error of the mean is SE = s / sqrt(n), where s^2 = (1/(n-1)) * sum((x_i - x_bar)^2) and x_bar is the sample mean, and that the p value is a function of the SE: for a normal distribution, a two-sided p = 0.05 corresponds to about 2 SE (of course for n = 7 we must use the slightly more restrictive t-distribution, not the normal distribution). But note that SE decreases in n, so for a very large n statistical significance is easy to find, but for a very small n, significance is very difficult to obtain. As we generally consider n >= 30 to be "large" (where the law of large numbers begins to operate), significance at n = 7 is almost incomprehensibly difficult (though not impossible) to achieve. Given that anything we observe in an n = 7 sample could easily be due to sampling variation, and significance is pretty much unachievable, it is almost laughable that yeha holds up these tests as scientific! Of course we cannot use them to show that cables DO make a difference, but we really cannot tell ANYTHING from these data. (Intuitively, say you only tested seven people and one was deaf. Since nowhere near one in seven people in the population is deaf, that one individual would badly skew the results -- against hi-quality cable.)

But it gets worse. N = 7 is the LARGEST n used here! One test, "$418 Type "T1" vs. Zip Cord", has n = 1 -- that's right, ONE -- listener. Scientific? Hahahahaha. This violates everything statistical. Namely, n must be greater than k (the number of variables tested) in order to get ANY results. Yet we have a case here of n = 1, k = 1. Throw this test out.

Finally, we do not know if these individuals were randomly sampled. Given the embarrassingly amateur nature of the rest of the test, I doubt it. If randomness in selection is not observed, then, again, we can have no confidence in our results, because we may have a skewed sample-draw from the population.

I am not against double blind testing, but do it competently, for goodness' sake. And yeha, stop posting goofy things like this; it only makes you look foolish… which I am sure you are not, really. Let's get n > 30 randomly selected people in a suitable room and do our tests in an actual scientific manner. Those are results I would be interested in seeing.

Again, I came to SPCR to talk computers and I end up talking audio (my OLD hobby) and statistics (one of the main subjects in my PoliSci Ph.D.).
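A quick way to see what these sample sizes can and cannot show is to run the reported counts through an exact binomial test. The sketch below does that for the four rows of the PSACS table quoted earlier; the labels and counts come from the quote, while the two-sided test against chance (p = 0.5) is my own choice of analysis, not something the original testers report.

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value: total probability of all
    outcomes at least as unlikely as observing k successes in n trials."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(q for q in probs if q <= probs[k] + 1e-12)

# (label, correct answers, total trials) from the quoted PSACS/Nousaine table
tests = [
    ("$2.50 cable vs. PSACS Best", 70, 139),
    ('$418 Type "T1" vs. Zip Cord', 4, 10),
    ('Type "Z" Cable vs. Zip Cord', 70, 139),
    ('$990 "T2" Cable vs. Zip Cord', 16, 32),
]

for label, k, n in tests:
    print(f"{label:30s}  {k}/{n}  p = {binom_two_sided_p(k, n):.3f}")

# With only 10 trials, a listener must get at least 9 right for p < 0.05,
# which is the concrete version of "small n makes significance hard."
need = min(k for k in range(6, 11) if binom_two_sided_p(k, 10) < 0.05)
print("Correct answers needed out of 10 trials for p < 0.05:", need)
```

Running this shows every reported result sitting near p = 1, i.e. indistinguishable from coin-flipping, which is consistent with both readings of the data: the listeners found no difference, and the samples were also too small to detect a modest one.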
Oh well, maybe someone will actually learn something from this.

sthayashi wrote: This is an area where the regular skeptics' sources are barely touching. James Randi has extended his million dollar challenge to the Shakti stones, but I don't think he'll go so far as to extend the challenge to other areas of audiophilia.

I bought a bunch of Shakti stones and could not hear a difference. I bought a bunch of VPI Magic Bricks also and DID hear a difference... they made my system sound worse. Unlike many here, I am open to trying new stuff--it's FUN. Sometimes it works far beyond my expectations (cable elevators), sometimes it does nothing (Shakti stones) and sometimes it makes things worse (Magic Bricks). That's why I try and LISTEN!

BobDog wrote: Oh well, maybe someone will actually learn something from this.

That would be me. And Bob, could I direct you to a quick read of my subwoofer ordeal. Maybe you can offer some insight. Btw, Bob, in case nobody has already said this to you: Welcome to SPCR!!!

The Quake III Machine C2D e6750 @ Gigabyte GA-P45T-ES3G | 8GB GSkill | Venemous X | Asus 560Ti | Corsair 520HX | 750GB Seagate | Antec 300 | Thanks SPCR

Damn, this thread got real long real fast. Here's the million dollar question (or whatever the phrase is): "Would the benefits of all this fancy equipment be cancelled out by me singing along?" And the last option in the poll is very narrow minded; who says there aren't rednecks who are very concerned about the acoustics of their double barrel shotguns? Someone must be buying those.

I forgot to mention degrees of freedom in the PSACS ABX Test Results post but... oh, you get the idea.

the number of people sampled is low for population testing, but the people involved were presumably high-end audio enthusiasts, and none of them could tell the difference. having 7 involved people all fail to find a difference is more convincing than having no people at all ever bother to do blind tests, which is my problem with extreme-audiophilia. so much talk about better performance, absolutely *NO* empirical evidence! all i want is a test! a single credible test! an oscilloscope, self-noise recording, anything! and the community has no tests whatsoever they can offer as evidence.

as for tests that shoot down cable performance, here's one:

"We tested the cables dynamically with white noise and square waves, feeding identical signals into one Kimber Kable and one Radio Shack cable and then setting the scope to invert and sum the two signals. If there was no difference at the output between the cables' ability to transmit an audio frequency signal, it should show up as a straight line on the scope. Any difference would generate an observable difference signal. We could not observe any difference signal on any of the interconnect cables within the resolution limits of our scope and within the bandwidth limits of our square wave generator at any frequency close to audio. We did observe a slight high frequency roll-off on very high frequency (100 KHz and above) square waves on all the cables, and that the roll-off was slightly less with the premium Kimber Kables. We saw no overshoot or ringing with any of the cables." .... "We tested the cables dynamically as we had the interconnects, driving them from two channels of a Fet-Valve 500 amplifier carefully checked for identical channel performance (which all our amps have). We matched a Kimber Kable with a Radio Shack cable into an 8 ohm load and measured the difference signal at the load."
"Again, neither on white noise nor on square waves could we detect any difference between the cables."

here's some blind testing of power cables, which also unsurprisingly found that there's no difference whatsoever. i found references to some trials held by a tom nousaine chap but haven't found hard results, just summaries (the summaries said there was no difference found between cables). in fact i can't find a single test at all that documents people able to distinguish between cables in blind tests. nada. not one. of course all the tests i've found are amateur at best compared to those held over at hydrogenaudio for lossy audio compression comparisons, i still find it amazing that such little effort has been put into the area. perhaps i'll join a local linux user group and see if i can talk them into it, my regular circle of friends wouldn't be up for such a nerdly activity.

Why do I keep doing this?

yeha wrote: but the people involved were presumably high-end audio enthusiasts, and none of them could tell the difference.

(A) We cannot presume to know who these people are; we are not told. (B) According to the "tests" they correctly identified the cable 50% of the time... note that this is not necessarily the same as random-chance 50%, because the "scientists" at ABX crammed the most interesting test results together. Who knows, maybe the MIT was correctly identified 100% of the time but the Monster (which is heavy gauge zip cord anyway) was missed 100% of the time... unlikely, but who knows, given their reporting. At a minimum, some of them DID tell the difference some of the time (whether by chance or not, we cannot know), so, again, you misspeak.

yeha wrote: as for tests that shoot down cable performance, here's one:

As I have said several times, testing is only a beginning -- if something measures bad it is almost surely bad; if it measures well then more investigation must be undertaken, because our testing techniques are not at the same level as our ears… yet. I know you think I am against measurements, but nothing could be further from the truth; I just do not think they are the be-all-end-all you do. You sound so much like the perfect-sound-forever crowd in the 1980s (who were so thoroughly discredited by the 1990s), it is not even funny... hey, you didn't used to work for Julian Hirsch, did you?

It is nice to see that they test the cables into an actual load, which is a good idea, but that brings up another point: because ALL cables have different inductance and capacitance (something they do not mention or even test, it seems), they will interact differently with different electronics and speakers. I promise you will hear this. You should like this argument, actually: it says that there are no "better" cables, just cables that happen to work with some electronics and not others -- expensive cables are just hype when connected to the wrong gear. An easy example of this is Linn: their cables (speaker and interconnect) work like magic with their stuff -- and they are DIRT CHEAP -- Linn gives their interconnects away with their electronics. If you accept this argument, however, you *win* because you can say that better cables are just illusion and hype, but if you do, then you also *lose* because then you have to admit that different cables do, in fact, sound... different, depending on the associated system.
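Two quick sketches can put numbers on both halves of this exchange. First, Bobdog's point about cable capacitance interacting with the gear on each end: the low-pass corner formed by an interconnect's capacitance and the source's output impedance is easy to estimate. The per-metre capacitance and the impedance values below are ballpark assumptions for illustration, not specs for any particular cable or component.

```python
import math

# Hypothetical ballpark values -- not measurements of any real product.
CAP_PER_M = 100e-12                    # interconnect capacitance, F/m
lengths_m = [1, 2, 5, 10]
source_impedances = [100, 600, 2000]   # ohms; a typical line-output range

for r in source_impedances:
    for l in lengths_m:
        fc = 1.0 / (2 * math.pi * r * CAP_PER_M * l)  # RC corner frequency
        print(f"Zout = {r:4d} ohm, {l:2d} m cable -> -3 dB at {fc/1e3:8.0f} kHz")
```

With these numbers the corner stays above the audio band in every case, but it falls from tens of megahertz toward 80 kHz as cable length and source impedance grow, which is consistent with both positions in the argument: the interaction is real and gear-dependent, and it is also tiny at audio frequencies.

Second, the invert-and-sum scope test yeha quoted translates directly to the digital domain. Given two sample-aligned recordings of the same material made through two different cables (the file names below are placeholders), the residual after subtraction is the "difference signal":

```python
import numpy as np
from scipy.io import wavfile  # assumes scipy is available

fs_a, a = wavfile.read("cable_a.wav")  # placeholder file names
fs_b, b = wavfile.read("cable_b.wav")
assert fs_a == fs_b, "sample rates must match"

n = min(len(a), len(b))
a = a[:n].astype(np.float64)
b = b[:n].astype(np.float64)

residual = a - b                       # invert one signal and sum
ref = np.sqrt(np.mean(a ** 2))
res = np.sqrt(np.mean(residual ** 2))
print(f"difference signal: {20 * np.log10(res / ref):.1f} dB vs. source")
```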
Much like testing, I take a balanced approach (unlike you) wherein I must admit that some cable sounds best with certain gear (Linn, again, being a good example, my dedicated Spectron speaker cables being another), but I have also seen other cable that just sounds better, period (though I would not bet against your ability to find gear that would bring out their worst).

yeha wrote: i still find it amazing that such little effort has been put into the area. perhaps i'll join a local linux user group and see if i can talk them into it, my regular circle of friends wouldn't be up for such a nerdly activity.

As I said before, I think this is a good idea too. Too bad you let your own (and curiously vehement) bias shine through so clearly. You know, there are SOME out there who consider tooling away on computers all day as the REALLY nerdy activity....

Please stop posting. I am tired of replying, but everything you say is so…so… poorly thought through (as I try to be diplomatic in a way you apparently are not), I feel that it just begs for someone with a little sense (and an absence of a sense of vendetta against ANYONE) to respond. Yeha, I'm not saying don't post any more, at all. That's not my place to say... and I wouldn't say it even if it was. I am just saying that you should stop posting on things you know nothing about... like audio, dielectric constants, statistics, testing techniques (audio or otherwise)... it just pains me to have to read these. I value your opinion on anything computer related (as far as I know...), please post on.

Ok, just because I feel like being argumentative, I think I'll wade into this (excellent) discussion. I think everyone here has agreed that a good audio system should reproduce the original as closely as possible. Most of the disagreement has been around the extent to which expensive audio equipment actually does this and/or how much a measurable improvement actually translates into an audible one. My question is this: What is the original? If the goal is to reproduce a recording, the rationalists should win this discussion hands down. However, if you're trying to reproduce a piece of music or a sound as accurately as possible, it's not so simple. While there are very scientifically acceptable ways of measuring how well a particular setup reproduces a recording, reproducing music is a much more subtle task. Keep in mind that a recording is itself a reproduction of an original performance. If the goal of high end audio equipment is to reproduce the original performance, not the recording, there's a lot of flex room for equipment that "inaccurately" reproduces the original recording but may (on average) come closer to the original performance.

I suppose that the ideal reproduction of a performance would have these attributes:
- Recording would be done with two (and only two) mics positioned as close as possible to the relative position of the eventual listener's ears.
- The pickup pattern of the microphones would perfectly emulate the frequency response of the listener's ear.
- No mixing or mastering would take place before or after the recording is imprinted onto whatever medium is used to store it.
- All transmission of the recording data, and the recording medium itself, would be completely lossless.
- The recording would be listened to on headphones (ideally the in-ear kind) that reproduce exactly the recording they have been given.
Only the last two points have been covered in this debate, with the general consensus (I think) being that it is possible to meet these goals within the limits of human perception, and that this is possible without going overboard on the amount of money spent. There has been discussion about the importance of room acoustics, and I think this holds the key to the point I am about to make. I specified that headphones should be used in order to eliminate the variance of room acoustics, but this variable will affect every evaluation of an audio system's quality, subjective or objective.

No matter how much "objective" testing is done, there is always a subjective element that creeps in regarding how the measurement is done. Our standards of objective measurement are nothing more than conventions that imitate our subjective impressions. When evaluating how well a recording is reproduced, scientific method requires that it be "captured" with an instrument of some kind. How this instrument is used is defined conventionally according to what best reproduces our subjective impressions. The most obvious conventional variable is the position of the instrument. Different setups are likely to have different "sweet spots" where they reproduce the recording most accurately. These will vary based on the specific components in the system, and the room in which they are tested. As was mentioned, the quality (which I take to mean accuracy) of an audio system varies considerably from room to room. Since it's unfeasible to test every system in its most optimal conditions, scientific testing must set a conventional standard of measurement that takes into account both how the system is intended to be used and how it is actually going to be used. This conventional standard is going to be subjective; there's no way around it. Determining what standard will produce the most subjectively "accurate" objective results is itself an art, not a science. A scientifically feasible standard of measurement will necessarily favour some systems over others according to the distance of the instrument of measurement, the acoustics in which it is tested and a host of other factors. So much for our ability to objectively measure a system's accuracy of reproduction.

However, the conventions inherent in trying to measure a sound are doubly crucial when trying to record it. I think I'm correct in saying that the purpose of a recording is to reproduce a performance as accurately as possible. This purpose is similar to the purpose of measuring the accuracy of a playback system: in both cases the idea is to reproduce a source as "accurately" as possible in a different medium. In the case of making a scientific measurement, the source noise is the output of the speakers (and any background noise) at a particular point in space, and the new medium is the data output by the instrument of measurement (typically, another recording which can be compared -- bit by bit -- against the original). When recording a performance, the source is the performance, and the new medium is the final recording (technically, I think it's the mode of representation, but that's irrelevant for my point). In both cases, the original source is "correct" by definition, and the end result is judged according to how well we feel the recording matches the source. This judgement is subjective, even in scientific analysis.
In the case of a recording, a "good" recording is one that reproduces our original subjective impression of the performance (assuming a perfect playback system); for an objective measurement, a "good" system of measurement is one that produces results that we feel (subjectively) represent the original source. You still with me? Good. Now, here's my real point:

I will assume that a musical recording is recorded in a manner that is designed to bring the recording as close as possible to the original performance. As someone mentioned, in pop music the recording process is often just as much a part of the final product as the performance, in which case my critique needs revising. For the moment, however, I will assume that the recording process is intended to be as transparent as possible.

I mentioned at the beginning of my post that an ideal recording system would position the mics at the position of the final listener's ears, and would use microphones that perfectly imitate what the listener hears. However, virtually no recordings of quality are made in this way. Microphones do not emulate the human ear perfectly, so compromises are made in the name of subjectively improving the final recording. Instead of the dual mic setup I suggested, each instrument/voice is routinely recorded separately from the others and mixed together afterwards in a way that better imitates what the mixer originally heard (or, more likely, what he/she wants to hear. In this respect, recording engineers are like the scorned audiophiles who distort the source because it sounds "better". However, this rejects the assumption that the goal of a recording is an accurate reproduction of an original). Again assuming that accurate reproduction is the goal of a recording, the unrealistic practices of mixing and studio mic-ing are necessary because microphone technology is not the same as the human ear. There may be a bit of cross-contamination going on -- microphones are designed for specific purposes, such as studio recording -- but when it comes down to it, the most subjectively "accurate" recordings are achieved in setups that are completely untrue to how the performance is actually heard.

This brings up another source of subjectivity in recording: in order to determine what the recording should emulate, a particular listening position must be selected. Since there is rarely an objective basis for preferring one position over another, the recording engineer simply picks the one that sounds "best". In other words, a recording is mixed according to how the engineer imagines it should sound. In most cases, this is probably whatever gives him the most pleasure to listen to (since I presume this is the goal of music).

Now, it's quite obvious that there are imperfections in the "accuracy" of most musical recordings when compared to a live performance. These imperfections creep into the music in both the recording and the playback. However, when most people listen to music, the only part of the experience they have control over is the playback system. So, it seems perfectly natural to me that a playback system that can "correct" some common flaws in the recording system might well be more true to the original performance than the recording it is playing back. And, if this is true, I can understand why people are willing to pay enormous amounts for "inaccurate" audio systems, and even why they claim it plays back the audio more accurately when objective measurement says otherwise. Likewise, I can understand why people feel vinyl has higher fidelity than digital audio.
Obviously, all recordings are not made in the same way, so an audio system that sounds fantastic for some recordings may sound terrible for others, but this is true of even low-fi systems. The trick is to find some average that sounds reasonably good no matter what is played on it. Obviously, you can "tune" a system towards a particular genre of music (or "recording style"), but this introduces a further subjective element: the intended use of the system. There are few touchstones for attaining this average. A system that is perfectly faithful to the source recording (not the original performance) may be one of these. Sound clarity may be another. (Incidentally, is there a scientifically acceptable way of measuring clarity? It doesn't seem that either frequency or amplitude reproduction fully captures it. Measuring timing is probably part of it, but I'm not sure this captures it either.) I think it's not implausible that there might be other attributes of audio gear that might compensate for recording errors while reducing their ability to accurately reproduce a recording.

My headphone amp has a "sound enhancer" feature that introduces a small amount of crosstalk and delay between the stereo channels. There's no doubt that this distorts the source recording. There's also no doubt in my mind that it does "enhance" the sound; it adds a sense of "presence" that is often lacking. Ostensibly, the feature is to simulate the crosstalk that naturally occurs between our right and left ears when listening to loudspeakers (which is missing in headphones). In this case, the goal is to bring the listening experience more in line with the original "performance" (or at least, the way the music was intended to be heard, since most music is mixed for loudspeakers) at the expense of accurate reproduction of the recording. I believe there is some precedent for using this technique during the recording process to add "presence" to an otherwise flat recording. In this case, the recording itself is modified to make it more "faithful" to the original performance.

Ultimately, I'm not sure I buy the rationalistic argument that the most "accurate" reproduction of a recording (or even a performance) is best. No matter how many objective results proving the transparency of a particular audio system are thrown in my face, I have to ask what does the system "objectively" reproduce? Music is subjective to begin with; its purpose (most of the time) is to give us pleasure. Pleasure is a sublimely subjective experience. Unless it's possible to somehow "objectively" reproduce a subjective experience, I have difficulty understanding why I should care whether my stereo system can accurately reproduce a recording. As I mentioned above, I think this ability is a good guideline for finding a system that will produce decent sound over a wide sample of recordings, and short of listening to every song I own on a given system, it may well be the criterion I use to buy an audio system, but I don't see how this refutes the "long term" subjective impressions that I have of a system. Ultimately, my judge of a sound system's quality will be how much pleasure it gives me from the music I listen to.

there isn't really much else for me to say - if my scientific arguments were wrong the best thing to do would be to correct them, instead of passing them off as misguided.
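The "sound enhancer" Devonavar describes, a small delayed bleed of each channel into the other, is essentially headphone crossfeed, and a naive version fits in a few lines. The delay and attenuation values below are my own ballpark choices for illustration; real crossfeed circuits (Linkwitz's, for instance) also low-pass the bled signal.

```python
import numpy as np

def simple_crossfeed(left, right, fs, delay_ms=0.3, level_db=-9.0):
    """Naive headphone crossfeed: mix a delayed, attenuated copy of each
    channel into the other, imitating the acoustic crosstalk of
    loudspeaker listening. Delay/level values are illustrative guesses."""
    d = int(round(fs * delay_ms / 1000.0))
    g = 10.0 ** (level_db / 20.0)
    bleed_l = np.concatenate([np.zeros(d), right])[: len(left)] * g
    bleed_r = np.concatenate([np.zeros(d), left])[: len(right)] * g
    # Renormalize so a full-scale input cannot clip after mixing.
    return (left + bleed_l) / (1 + g), (right + bleed_r) / (1 + g)

# Example: one second of a synthetic stereo signal at 44.1 kHz
fs = 44_100
t = np.arange(fs) / fs
left = np.sin(2 * np.pi * 440 * t)
right = np.sin(2 * np.pi * 554 * t)
out_l, out_r = simple_crossfeed(left, right, fs)
```

Like the feature itself, this deliberately makes the output less faithful to the recording in the hope of making it more faithful to loudspeaker listening, which is exactly the trade-off the post is pointing at.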
i'm really trying hard to understand where you're coming from, it's not matching up with any physics i'm familiar with, but i'm not getting any pointers as to where i'm going wrong. as for the abx testing, the psacs grouped results are questionable but the others are objection-proof - if you guess right as often as you guess wrong the only possibilities are perceptual equality or deliberately throwing the test. abx testing is nothing new to me. as for cables, the idea that the human ear can notice differences in a transmitted signal that an oscilloscope misses hits me like a punch in the stomach. yes, different cables can sound different (obviously with different gauges); my whole tirade has been against companies selling cable measurably identical to bog-standard cable for high prices and people saying it sounds better. it doesn't. it's just copper. if it's experimentally identical, it sounds identical, there's nothing else to it. i'd love to be shut up, heck i love being proved wrong because it means i'm about to learn something, but it's going to take some concrete test data, scientific explanations or actual measurements. none of those seem to be forthcoming, and none of my searches have turned up anything close. every test i've found has shown expensive cable to be nothing but snake oil, and the same goes for many other components. i need more than the extreme-audiophile community has to offer, it seems. i don't mind being called misguided, wrong, self-humiliating or just plain dumb, but i need actual corrections to change opinions.

A good point was made about 'how is it supposed to sound?' Like the real instruments, of course. But how do real instruments sound? I play drums, which is a good test for bass, mids and high treble, and that is pretty tough to reproduce with my smallish speakers. Other non-amplified instruments, like jazz or most classical instruments, have their 'real' sound. With most music, however, we cannot say how it is supposed to sound because the instruments use amplification and a speaker just before they start making a sound! Synthesizers are often used, and how on earth are their 'instruments' supposed to sound? They are artificial to start with. Even 'real' instruments sound different from model to model. Saying that your system plays 'true to life' is IMO something you can never be sure of. All you can say is that it comes 'fairly close' to the real thing, as there are too many variables to go any further than that.

Therefore I see it as a totally pointless thing to spend time and money on freezing my CD's, demagnetizing them, putting my wires on stands, putting dampers under my amp.. These things don't do anything. Cables are actually in the 'system' but their effect has often been proven to be 0 or near 0. What about the long lines of 'poor quality signal' in the circuit boards in the amp and CD player? Coils in the speaker filter are many meters' worth of normal copper. What can I possibly achieve by changing a 2 meter cable between bog-standard circuit boards and coils?

So, IMO you can't realistically try to get 'true sound' in your living room. For me it is a combination of speaker placement and room acoustics, until I end up with a 'believable' sound that doesn't obviously distort or sound harsh. The point being: What on earth should I listen for when I get new components other than speakers? How can I say it got more true to life? Perhaps I like the sound a bit more, but chances are that the changes in sound only help 50% of my albums and hurt the other half.
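The crossover-coil point above is easy to put in numbers: the series resistance of a short speaker-cable run can be compared with the copper already inside the signal path. The lengths and wire gauges below are ballpark assumptions for illustration, not measurements of any real speaker or cable.

```python
import math

RHO_CU = 1.68e-8  # ohm * metre, resistivity of copper

def wire_resistance(length_m, diameter_mm):
    """DC resistance of a round copper wire: R = rho * length / area."""
    area = math.pi * (diameter_mm * 1e-3 / 2) ** 2
    return RHO_CU * length_m / area

# Hypothetical ballpark figures:
speaker_cable = wire_resistance(2 * 2, 1.5)   # 2 m run out and back, 1.5 mm
crossover_coil = wire_resistance(30, 1.0)     # ~30 m of 1.0 mm coil wire
voice_coil = 6.0                               # typical driver DC resistance

print(f"2 m speaker cable: {speaker_cable * 1000:6.1f} milliohm")
print(f"crossover coil:    {crossover_coil * 1000:6.1f} milliohm")
print(f"driver voice coil: {voice_coil * 1000:6.0f} milliohm")
```

With these assumed values the cable contributes tens of milliohms against hundreds in the coil and thousands in the driver itself, which is the quantitative version of "what can a 2 metre cable change?"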
So I get 'decent' stuff, position it decently in a decent room, and enjoy the music. I made this comparison between 192kbit mp3 and the original a while ago. This is a VERY much zoomed-in bit of less than 15ms (0.015 sec), which imo explains why I can't hear any differences: they aren't really there.

http://members.home.nl/taselaar/niels/wavvsmp3.gif
http://members.home.nl/taselaar/niels/wavvsmp3merged.gif

Green Shoes: nice link

Going back a bit:

mathias wrote: And I would laugh my ass off if I ever saw someone wrecking their low floored supercar on a speed bump.

I was walking home an hour ago and saw one particular supercar - 'lowered' VW Golf - driving off a driveway, except it was going over one of the steeper parts adjacent to the road. I heard the scrape of its twin exhausts(!) first, followed by the rest of the car ruining the pavement.

It's coming back

@ Devonavar (well, the first half of your post, I'm not going to quote it all here). You make a good point, but the recording industry has been having this argument for years. If you truly wanted people to have the most accurate reproduction possible, you would mix on only one pair of speakers, a set of headphones. You would also record all noise sources in the "binaural" technique, a set of omni-directional mics positioned to be about an ear's length apart from each other. However, this isn't good enough, so what you really need is a foam binaural head, which can more accurately represent the sound dampening that your head does to one ear or the other, as well as the incredibly complex filtering that the ridges and folds of your ear perform (you think MP4 encoding is complicated, you should see what your ear does). Recordings that are made using this technique sound astounding on headphones (you actually get a 3-D soundstage; if someone moves the sound source behind the dummy's head, it moves behind yours too), but sound like crap from any other pair of speakers. The soundstage has no depth and nothing sparkles at all, it's just a muddy mix. It's for this reason that everything is (typically) recorded in either an XY or Blumlein stereo technique, and then mixed on a variety of speakers to obtain the best sound possible no matter where people listen to it (cars have notoriously unbalanced systems, for example). These techniques sound much better over a speaker system, but they lose much of the original presence of the recording in the process.

As far as the recording space goes, 98% of music released today is recorded in a basically dead space. Orchestral stuff (say, the strings that they mix down so far in the background that all you can hear is the first violin) gets recorded in dead space too. Classical music, on the other hand, doesn't sound right in anything less than a 2-sec hall (and every 2-second hall sounds different, that's the beauty of it). I've heard Barber's Adagio performed in a medium-size studio room by some players practicing....it sounds great but it won't bring a tear to your eye. I know several audiophiles who actually have two entirely different setups, one for classical/some jazz and the other for everything else. This makes a fair bit of sense to me, as classical has a huge dynamic range (70dB or more sometimes) and needs to try to recreate a huge space, whereas modern music has maybe a 6dB range and is meant to sound good on a 2.5-inch Sanyo speaker. However, I think they'd get their money's worth more if they spent the time to adjust or even rebuild their listening room.
I admit, it's fun in theory to talk about accurately reproducing a recording, but in reality there are just too many variables for it to ever happen. Take the ubiquitous car stereo; new decks are coming out with BBE, an algorithm designed to counteract some of the incredible noise cancellation that occurs when you have two in-phase speakers pointed directly at each other.

But with your long point, that you should just buy whatever sounds best to you, I agree wholeheartedly. In fact, I don't think anyone in this discussion would disagree. It's the people who think that x speaker sounds better than y simply because it costs twice as much that I question.

Wedge, thanks for the link. I was trying to explain it to my wife but she was giving me funny looks; this'll be much easier.

StarfishChris wrote: Green Shoes: nice link

Sorry, I missed this with my previous post. See above, under "binaural recording technique". It would blow your mind away if you ever get to test it....one of the most surreal experiences I've ever had.
{"url":"http://www.silentpcreview.com/forums/viewtopic.php?f=18&t=21932&start=30","timestamp":"2014-04-20T09:39:50Z","content_type":null,"content_length":"149227","record_id":"<urn:uuid:ba14f39c-98fa-4025-934e-271966610d9c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Scalable algorithms for machine learning and data mining

SCALABLE MACHINE LEARNING FOR MASSIVE DATASETS: FAST SUMMATION ALGORITHMS
[ The art of getting 'good enough' solutions 'as fast as possible' ]

Huge data sets containing millions of training examples with a large number of attributes (tall fat data) are relatively easy to gather. However, one of the bottlenecks for successful inference of useful information from the data is the computational complexity of machine learning algorithms. Most state-of-the-art nonparametric machine learning algorithms have a computational complexity of either O(N^2) or O(N^3), where N is the number of training examples. This has seriously restricted the use of massive data sets. The bottleneck computational primitive at the heart of various algorithms is the multiplication of a structured matrix with a vector, which we refer to as the matrix-vector product (MVP) primitive. The goal of my thesis is to speed up these MVP primitives with fast approximate algorithms that scale as O(N) and also provide high accuracy guarantees. I use ideas from computational physics, scientific computing, and computational geometry to design these algorithms. Currently the proposed algorithms have been applied in kernel density estimation, optimal bandwidth estimation, projection pursuit, Gaussian process regression, implicit surface fitting, and ranking. (A baseline sketch of this primitive appears at the end of this page.)

[ Research Summary ] [ Thesis ] [ Slides ]

- Fast kernel density estimation -

Fast Computation of Kernel Estimators. Vikas C. Raykar, Ramani Duraiswami, and Linda H. Zhao, Journal of Computational and Graphical Statistics, March 2010, Vol. 19, No. 1: 205-220. [abstract] [paper]

Fast optimal bandwidth selection for kernel density estimation. Vikas C. Raykar and Ramani Duraiswami, In Proceedings of the Sixth SIAM International Conference on Data Mining, Bethesda, April 2006, pp. 524-528. [paper] [brief slides] [code] [bib] [ Detailed version available as CS-TR-4774 ]

Very fast optimal bandwidth selection for univariate kernel density estimation. Vikas C. Raykar and R. Duraiswami, CS-TR-4774, Department of Computer Science, University of Maryland, College Park. [abstract] [TR] [slides] [code] [bib]

- Improved Fast Gauss Transform -

Automatic online tuning for fast Gaussian summation. Vlad I. Morariu, Balaji V. Srinivasan, Vikas C. Raykar, Ramani Duraiswami, and Larry Davis, In Advances in Neural Information Processing Systems (NIPS 2008), vol. 21, pp. 1113-1120, 2009. [paper] [spotlight slide] [bib] [code]

The Improved Fast Gauss Transform with applications to machine learning. Vikas C. Raykar and Ramani Duraiswami, In Large Scale Kernel Machines, L. Bottou, O. Chapelle, D. Decoste, and J. Weston (Eds.), MIT Press, 2006. [chapter]

The improved fast Gauss transform with applications to machine learning. Vikas C. Raykar and Ramani Duraiswami, Presented at the NIPS 2005 workshop on Large scale kernel machines. [slides] [code]

Fast computation of sums of Gaussians in high dimensions. Vikas C. Raykar, C. Yang, R. Duraiswami, and N. Gumerov, CS-TR-4767, Department of Computer Science, University of Maryland, College Park. [abstract] [TR] [slides] [code] [bib]

Fast large scale Gaussian process regression using approximate matrix-vector products. Vikas C. Raykar and Ramani Duraiswami, Presented at the Learning Workshop 2007, San Juan, Puerto Rico, March 2007.
[abstract] [detailed paper] [slides]

Efficient Kriging via Fast Matrix-Vector Products. Nargess Memarsadeghi, Vikas C. Raykar, Ramani Duraiswami, and David M. Mount, In IEEE Aerospace Conference, Big Sky, Montana, March 2008. [paper]

- Fast ranking algorithms -

A fast algorithm for learning a ranking function from large scale data sets. Vikas C. Raykar, Ramani Duraiswami, and Balaji Krishnapuram, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 7, pp. 1158-1170, July 2008. [paper]

A fast algorithm for learning large scale preference relations. Vikas C. Raykar, Ramani Duraiswami, and Balaji Krishnapuram, In Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, San Juan, Puerto Rico, March 2007, pp. 385-392. [paper] [slides] [code] [bib] [ More details can be found in CS-TR-4848 ] [oral presentation]

Fast weighted summation of erfc functions. Vikas C. Raykar, R. Duraiswami, and B. Krishnapuram, CS-TR-4848, Department of Computer Science, University of Maryland, College Park. [abstract] [TR] [slides] [code] [bib]

- Unpublished Reports -

Scalable machine learning for massive datasets: Fast summation algorithms. Vikas C. Raykar. A summary of the thesis contributions. [summary] [two page summary]

Computational tractability of machine learning algorithms for tall fat data. The preliminary oral examination for the degree of Ph.D. in computer science, University of Maryland, College Park, May 4, 2006. [proposal] [slides] [reading list]

The fast Gauss transform with all the proofs. [pdf]

Correction to Lemma 2.2 of the fast Gauss transform. [pdf]

A short primer on the fast multipole method. [pdf]

- Software -

Fast summation of erfc functions and ranking. [code]

Fast optimal bandwidth selection for kernel density estimation. [code] [slides]

The improved fast Gauss Transform. [code]

The copyrights of publications are with the respective publishers. The papers are being reproduced here for timely dissemination of scholarly information.
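As promised above, here is the baseline primitive that the fast summation algorithms on this page approximate: the direct O(NM) discrete Gauss transform. This sketch is my own illustration of the naive computation, not the released code; methods like the Improved Fast Gauss Transform replace the quadratic work with truncated series expansions carrying error bounds, reducing the cost to roughly O(N + M).

```python
import numpy as np

def naive_gauss_transform(sources, targets, weights, h):
    """Direct O(N*M) evaluation of
        G(y_j) = sum_i q_i * exp(-||y_j - x_i||^2 / h^2).
    This is the matrix-vector product (MVP) primitive that fast
    summation algorithms approximate in linear time."""
    d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / h**2) @ weights

# Tiny example: a kernel density estimate of 1-D data at a few query points
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 1))             # sources (training points)
y = np.linspace(-3, 3, 7).reshape(-1, 1)   # targets (evaluation points)
q = np.full(1000, 1.0 / 1000)              # uniform KDE weights
print(naive_gauss_transform(x, y, q, h=0.5))
```

The double loop is hidden inside the broadcasted distance matrix, which is exactly the N-by-M structure that makes the naive version infeasible for millions of points.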
{"url":"http://www.umiacs.umd.edu/labs/cvl/pirl/vikas/Current_research/scalable_machine_learning/scalable_machine_learning.html","timestamp":"2014-04-19T17:02:40Z","content_type":null,"content_length":"20266","record_id":"<urn:uuid:c992d561-719b-40c7-8ce1-042aa12f3b46>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
February 1st 2007, 04:30 PM #1
Junior Member
Nov 2006

If you could help it would be appreciated!!!
#1 I need to show an algebraic expression to show, "The distance (in miles) traveled in h hours at an average speed of 55 mph."
#2 The quotient of two consecutive integers, the smallest being n.
#3 The perimeter of a rectangle whose width is half its length L. I think it is P = L * (1/2 L).

February 1st 2007, 07:28 PM #2

#1 Distance = Speed x Time, so Distance (in miles) = 55 x h.

#2 Smallest = n; the next consecutive integer is n + 1, and the quotient of the two consecutive integers would be expressed as: $\frac{n + 1}{n}$

#3 Length = L, Width = half its length = 0.5L (or (1/2)L). Perimeter = (Length + Width) x 2, that is, (L + 0.5L) x 2 = 3L. (Note that your guess, P = L * (1/2 L), multiplies the sides, which gives an area-type quantity, not a perimeter.)
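A quick numeric check of that perimeter answer (an added verification, not part of the original thread): with a sample length $L = 10$,

$$P = 2\left(L + \tfrac{1}{2}L\right) = 2 \cdot \tfrac{3}{2}L = 3L, \qquad 2(10 + 5) = 30 = 3 \cdot 10.$$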
{"url":"http://mathhelpforum.com/algebra/11016-expressions.html","timestamp":"2014-04-16T05:34:35Z","content_type":null,"content_length":"31552","record_id":"<urn:uuid:9ee36c45-88e0-4bcd-b406-5e9c78e54e31>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
Posted by Monica on Tuesday, October 23, 2012 at 3:15pm.

Two fewer than a number doubled is the same as the number decreased by 38. Find the number. If n is "the number," which equation could be used to solve for the number?

• Algebra 1 - Steve, Tuesday, October 23, 2012 at 3:20pm
"Two fewer than a number" doubled: 2(n-2) = n-38
Two fewer than "a number doubled": 2n-2 = n-38

• Algebra 1 - Monica, Tuesday, October 23, 2012 at 3:22pm
Thank you. Unknowns seem to get me stuck. I have another one. Separate 846 into 3 parts so that the second part is twice the first part and the third part is triple the second part. Which of the following equations could be used to solve the problem?

• Algebra 1 - Steve, Tuesday, October 23, 2012 at 3:46pm
Call the parts a, b, c:
a+b+c = 846
b = 2a
c = 3b
Substitute in for b and c to get a+2a+6a = 846, so 9a = 846 and a = 94. So b = 188 (and c = 564).
To solve this kind of stuff, just take it one step at a time. First, translate the words into symbols. Then substitute things as needed. After a thousand or so, you'll get the hang of it. :-)

• Algebra 1 - Monica, Tuesday, October 23, 2012 at 4:21pm
Thanks :)
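A quick substitution check of both problems (my own added verification, not part of the original thread):

```python
# Problem 1, reading A: "two fewer than a number, doubled": 2(n - 2) = n - 38
nA = -34
assert 2 * (nA - 2) == nA - 38

# Problem 1, reading B: "two fewer than (a number doubled)": 2n - 2 = n - 38
nB = -36
assert 2 * nB - 2 == nB - 38

# Problem 2: a + 2a + 6a = 846
a = 94
b, c = 2 * a, 3 * (2 * a)
assert a + b + c == 846 and b == 2 * a and c == 3 * b
print(nA, nB, a, b, c)  # -34 -36 94 188 564
```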
{"url":"http://www.jiskha.com/display.cgi?id=1351019750","timestamp":"2014-04-20T11:37:50Z","content_type":null,"content_length":"9587","record_id":"<urn:uuid:bc8ec619-61a7-460d-8d5d-d1893b0da69a>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: (non)computability
Joe Shipman shipman at savera.com
Wed Sep 2 14:54:42 EDT 1998

Fenner writes: At the heart of almost any good proof in recursion theory is one or more algorithms. These algorithms may be showing that one particular degree is higher than another, and both may be noncomputable. I consider *relative* computation to be computation just as much as absolute computation, and so much of recursion theory (at least recursion) is still about computation.

Very interesting remark. Of course the concept of "subroutine" is fundamental to computation, so in that sense "relative" computation is still computation. But if you replace "subroutine" with "oracle" the connotations become different, because the "oracle" isn't necessarily a computable function. Since "oracle arguments" are at the heart of much computational complexity theory, the technical resemblance between complexity theory and recursion theory is still strong (though there is much more to complexity theory that has no analogue in recursion theory). However, the *practical* relevance is utterly different. Does ANY result in the theory of degrees of unsolvability have practical application?

This is not meant as a criticism of recursion theory, I just want to point out that between recursion theory and complexity theory there is an important sociological gap (the old pure-applied distinction in rather stark form). The gap is not unbridgeable. Complexity theory certainly includes work with no practical application (for example on problems with super-exponential complexity), which makes the division between "computable" and "noncomputable" seem somewhat arbitrary (you could draw the boundary at primitive recursion instead of general recursion, for example). But the existence of this gap and the rather clear partition it induces on the set of researchers suggests that it makes pragmatic sense to speak of two subjects rather than one ("complexity theory" and "recursion theory" rather than "computability theory").

-- Joe Shipman

More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/1998-September/002044.html","timestamp":"2014-04-20T03:15:06Z","content_type":null,"content_length":"4216","record_id":"<urn:uuid:a764a5d3-c2a2-4c09-8197-a42eda8e54e1>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
How Many Points is a Turnover Worth?

25 Sep 2003

by Aaron Schatz

How many points is a turnover worth? If it is returned for a touchdown, clearly the answer is six, right? But what if it isn't returned for a touchdown? And even if it is returned for a touchdown, what about taking into account how many points the offense would have scored if they had not turned the ball over? You would expect that a turnover is worth more the closer you are to the goal line. After all, give the other team the ball closer to your own goal line, and it is easier for them to score. Lose the ball closer to the opposing goal line, and you've squandered a chance to score yourself.

In Hidden Game of Football, Pete Palmer and Bob Carroll propose the theory that this seemingly common sense belief is wrong. According to them, a turnover is always worth -4 points. How did they figure this out? Well, they ran a number of seasons worth of data to determine the answer to this question: "If I'm on yard line X, what will be the next score in the game, on average?" It didn't matter whether this score took place on the current drive, or the next drive, or in the next quarter. The only plays they didn't count were those when halftime or the end of the game came before the next score. They started with the idea that having the ball on your own 0 yard line is worth -2 points, since you've just given up a safety. The ball on the opposing goal line is worth 6 points, leaving out the extra point, which is pretty much guaranteed anyway. Graph the results on the rest of the field, and you end up with a graph that looks like this:

The yard lines here are given from 0 to 100, with 51-100 representing the opponent's half of the field, and a negative score reflects that, on average, the defense was more likely to score next than the offense. Flip the chart around to get the average next score when the opponent has the ball, and you get this table:

│ Distance from │ Team on │ Team on │ Turnover │
│ Your Goal │ Offense │ Defense │ Value │
│ 0 │ -2 │ -6 │ -4 │
│ 25 │ 0 │ -4 │ -4 │
│ 50 │ 2 │ -2 │ -4 │
│ 75 │ 4 │ 0 │ -4 │
│ 100 │ 6 │ 2 │ -4 │

So a turnover is always worth 4 points -- well, -4 points to the offense and 4 points to the defense -- assuming that you have an average offense and an average defense. This chart is also important to help explain two of the main precepts of the still-young science of football stat analysis.

First, the idea that field position is fluid. In fact, if someone asked me, "What is the most important thing you have learned so far that will help people understand why teams win or lose football games," I would answer "Field position is fluid." While the chances of scoring (or allowing a score) change based on a team's position on the field, it doesn't matter how a team gets to that position. This is why special teams are so underrated. When Carolina sends Todd Sauerbrun out there to punt, they are consistently leaving the opposing team about five yards further back, which means that the average value of the next score is, according to Palmer and Carroll, 0.4 points lower. Punt 100 times in a season -- and when your offense sucks, like Carolina's, you will -- and that adds up to 40 points over the course of a season.

The idea that field position is fluid also explains why some teams that score a high number of points come out much lower on our Value Over Average ratings for offense. (What's VOA? Read here.)
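Palmer and Carroll's model is simple enough to state directly in code. The sketch below is my own restatement of the book's straight line (not Football Outsiders code): every yard is worth 0.08 points, and the turnover swing comes out constant, exactly as the table shows.

```python
def expected_next_score(yardline):
    """Hidden Game linear model. yardline runs 0-100,
    with 0 = your own goal line and 100 = the opponent's."""
    return -2.0 + 0.08 * yardline        # 8 points spread over 100 yards

def turnover_value(yardline):
    """Point swing of a turnover at `yardline` with no return:
    the offense loses E(x) and the opponent gains E(100 - x)."""
    return expected_next_score(yardline) + expected_next_score(100 - yardline)

for yl in (0, 25, 50, 75, 100):
    print(f"yardline {yl:3d}: E = {expected_next_score(yl):+.1f}, "
          f"turnover = {turnover_value(yl):.1f}")   # always 4.0
```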
Imagine a team where the offense always gained between 15 and 20 yards and then punted, but the defense was so good that they always limited the other team to 3-and-out. If each team had average special teams, our team would gradually move the start of each drive towards the opposing goal line, eventually scoring a field goal or touchdown and starting the whole cycle over again. Perhaps the opposing defense would score a couple of times off turnovers, and the game would end 26-14. But clearly our team's defense is far outplaying the offense, even though the offense scored a lot of points.

So far in 2003, Buffalo is a good example of this. After three weeks, they are ranked #7 in Defense VOA but only #17 in Offense VOA, despite ranking #6 in the NFL in points scored. That's because their defense is stopping teams quickly and getting the ball back for the offense in good field position. Unfortunately, Buffalo's defensive turnovers are balanced by their offensive turnovers. Against the Patriots in Week 1, all of Buffalo's points, except for an opening drive touchdown, either followed an interception that put the offense in good field position or came from an interception return. (The ground is still shaking in southern Ontario after that Sam Adams runback.) Even in their abysmal game against Miami, Buffalo's defense comes out with a -30% VOA. Miami had the ball 12 times, which included three 3-and-outs and three turnovers. But every time the Buffalo offense took the ball back, they fell flat on their faces with four 3-and-outs, three turnovers, and another three drives with fewer than 20 yards. Their only points came from the defense on an interception return for a touchdown.

The second idea that comes from this discussion is go for it on fourth down more often. The conventional wisdom is that if you go for it on fourth down and miss, the other team now has the ball and you've lost your opportunity to score. The chart above shows that this isn't necessarily so. Once you are past the opponent's 25-yard line, your team is still more likely to score the next points despite a change of possession. Less likely to score as many points, sure, but if you go for it on fourth-and-goal and fail, you've backed the other team up in their own end, and most of the time they will need to punt, giving your team the ball back with a loss of 40-60 yards in field position. Depending on the chances of failing to get that fourth down vs. the advantage of 7 points over 3 points, the odds favor going for the touchdown far more often than current coaches normally try. No, obviously not with one minute left in a tied game, but in general. That's the simple version, here's the complicated version.

But let's get back to evaluating Carroll and Palmer's original chart. The ideas seemed great, but a couple of things struck me as not quite right. First of all, shouldn't a touchdown count as 7 points? After all, the extra point is part of the reason you're trying to get down to that goal line. What about the two-point conversion? It turns out that, in 2002 at least, slightly more than half of the two-point conversions were converted successfully, and that average of slightly more than one point per conversion balanced out the very small number of missed extra points. So a touchdown should be 7 points. The second problem was the idea that getting closer to the goal line didn't make a turnover any more dangerous. It just didn't seem to make sense. Field goals get easier as you get closer to the goal line, right?
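Circling back to the fourth-down point above for a moment: under the straight-line model, the break-even conversion rate for fourth-and-goal is a few lines of arithmetic. Treating the post-score kickoff as handing the opponent the ball around their own 27 is my own assumption, and touchdowns are counted as 7 points as argued just above; this is a sketch of the reasoning, not the article's exact math.

```python
def expected_next_score(yardline):
    return -2.0 + 0.08 * yardline        # Hidden Game linear model

E_KICKOFF = -expected_next_score(27)     # opponent's ball at their ~27

def go_for_it_ev(p_convert):
    """Fourth-and-goal from the 1: convert for 7 plus the kickoff,
    or fail and leave the opponent pinned at their own 1-yard line."""
    fail_value = -expected_next_score(1)  # opponent backed up: +1.92 to us
    return p_convert * (7 + E_KICKOFF) + (1 - p_convert) * fail_value

field_goal_ev = 3 + E_KICKOFF            # assume the chip shot never misses

p = 0.0
while go_for_it_ev(p) < field_goal_ev:   # find the break-even rate
    p += 0.001
print(f"go for it whenever conversion odds exceed ~{p:.1%}")
```

With these numbers the break-even comes out around 19%, which is the quantitative version of "the odds favor going for the touchdown far more often than coaches normally try."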
So even if the chances of a touchdown increased in a linear fashion, shouldn't the average next score curve a bit because the chances of getting three are gradually moving upwards?

Faced with these dilemmas, I decided to re-run the numbers using my 2002 database. Like Palmer and Carroll, I left out plays when there was no next score because of halftime or the game's end. But this time, if the next score was a touchdown, it was worth 7 points instead of 6 points. We ended up with the crooked blue line on this next chart (we'll get to those other lines in a second):

As you see, there's still a pretty clear trend here, although something very strange occurs near a team's own goal line -- the opponent's likelihood of scoring next actually goes down as a team gets backed up to the last 8 yards or so before its own goal. I'm not sure if this is a one-year glitch, or a general trend. There are reasons why it would be the case. Backed up to their own goal lines, teams generally play extremely conservatively, running just to get some room for a successful punt rather than attempting passes. That cuts down on turnovers, which moves the opposing team's next starting field position back a bit, which means fewer points for them on average. You also may notice that the average next score stays under 5 points until the last five yards or so. I guess defenses are better at defending at the goal line than you might expect, meaning lots of fun short field goals. And we've already established that most coaches will kick that field goal most of the time.

OK, so those lines. Excel balances out our desire to throttle that annoying talking paper clip by being chock-full of useful features. One of those features allows us to create a trendline for our little chart. I started by creating a linear equation just like Palmer and Carroll used. That's the black line. You may notice it is a bit different from the line in Hidden Game. Where the line in Hidden Game runs from -1.96 on your own one-yard line to 5.96 on the opponent's one-yard line, our line from 2002 goes from -1.46 to 5.26. That means that the value of a turnover based on 2002 numbers is actually a bit lower, 3.8 points. Somewhere, Ryan Leaf is feeling a little more self-worth.

It's important to note that this gives the value of a turnover without considering the runback by the defense. For example, this is the value of a fumble at the line of scrimmage where the defense pounces on the ball and goes nowhere. An interception of a 50-yard pass that only gets run back for a few yards has a lower value, obviously, since the line of scrimmage on the next play has moved down the field. For the same reason, an interception of a 5-yard pass by a linebacker who gets 20 yards before he's tackled has a higher value.

An added bonus of this trendline is that we can figure out how many yards a turnover is worth based on finding which yard line with the ball has the same average next score as each yard line without the ball. The -1.46 expected next score with the ball on the one-yard line corresponds to a point about halfway between the 56 and 57-yard lines (a.k.a. the opponent's 43 and 44-yard lines). That means that a turnover, not counting the length of the runback, is worth on average 55.5 yards. Which, for Jim who asked, provides part of my answer to this question from the PFRA Forum: How many yards should the interception penalty be in the QB rating?
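That 2002 line can be rebuilt from its two reported endpoints, which recovers both the 3.8-point turnover value and the yardage equivalence just described. This is my own back-of-envelope check of the article's figures:

```python
# Straight line through the reported 2002 endpoints:
# E(1) = -1.46 at your own 1, E(99) = 5.26 at the opponent's 1.
slope = (5.26 - (-1.46)) / (99 - 1)      # ~0.0686 points per yard
intercept = -1.46 - slope * 1

def E(yl):
    return intercept + slope * yl

# Turnover swing with no return: E(x) + E(100 - x), constant on a line.
print("turnover value:", round(E(50) + E(50), 2))         # ~3.8 points

# The opponent holding the ball at their yardline y is worth -E(y) to us.
# Find the y where that equals having it ourselves at our own 1 (-1.46):
y = (1.46 - intercept) / slope           # ~43.6, from their perspective
print("same value as our own 1:", round(100 - y, 1), "yard line (ours)")
print("turnover cost in yards:", round(100 - y - 1, 1))
```

This prints roughly 56.4 and 55.4; the article's 56.5 and 55.5 follow from rounding the crossing point, so the endpoints and the quoted figures are consistent.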
My answer is 55.5 yards minus whatever is the average distance from the line of scrimmage when the play ends -- or, put another way, 55.5 yards minus the average length of intercepted passes plus the average length of interception returns. Which I'll get to computing, oh, at some point after I finish fixing my gutter drains and cooking for Rosh Hashanah. But I digress, yet again (I do that a lot).

What about our issue of common sense saying that a score should be easier closer to the goal line? Well, Carroll and Palmer used a nice linear equation for simplicity, but we can get a more complicated trendline. Excel creates polynomial equation trendlines up to the sixth power! Unfortunately, all the equations I've gotten from the fourth power and above seem not to work correctly. I blame the stupid talking paper clip for sabotaging the goal of football science. He must be a soccer fan. Nonetheless, our third power equation still provides a more accurate trend that better matches our actual numbers. For you math geeks, the R-sq has gone up slightly from .9615 to .9634. Our new trendline is orange on the above chart. You can see that according to this equation, the average next score is a bit higher closer to the goal lines -- especially near the opponent's goal line, as a touchdown becomes more likely and a field goal becomes child's play for everyone but Todd Peterson -- and a bit lower in the middle of the field.

Since the value of one extra yard of field position now changes depending where we are, the value of a turnover now changes as well. Which gives us this chart. Welcome to the Happy Turnover Smile-Time Hour! What's interesting here is that the bottom of this chart isn't much different from the average value of a turnover that we got from the first equation we used, the straight line! This graph says that the value of a turnover bottoms out at 3.77 points between the 48-yard lines. The value hits 3.80 on the 38 and 39-yard lines on each side and goes up gradually. It hits 4.0 points on the edge of each red zone and 4.10 points between the 9 and 10-yard lines, ending up at 4.25 points on the one-yard line.

The moral of this chart, however, is that Palmer and Carroll, in the effort to simplify, missed that a turnover isn't worth the same amount of points anywhere on the field. It truly is worse to turn the ball over in the red zone than in the middle of the field. This curve, however, has the same values on each side of the field, just like the linear equation trendline from Hidden Game. That should mean that performance in what I call the DEEP zone (your own goal line to 20-yard line) is as important as performance in the red zone on the other side of the field. Except that it isn't, really, because unless you turn the ball over you are going to either drive out of your own end or punt the ball, giving it back to the opposition not in the red zone but in the middle of the field. But a turnover in your own DEEP zone is just as deadly as a turnover in the red zone. In one case, you are losing almost assured points, and in the other, you are handing the other team almost assured points.

There are two more issues here. First, what about the question of how many yards a turnover is worth? It turns into a curve, actually. This is a rough curve, done with estimates instead of a fancy mathematical equation, but you get the idea. To make this chart, I had to figure out the value of the opposition having the ball prior to their own goal line, which was sort of silly.
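(If you want to replicate the arithmetic yourself, here is a small sketch in Python -- the function names are mine -- built only from the two endpoints quoted above. With the straight-line fit the turnover value comes out flat at 3.8 points everywhere; it's the curved cubic fit that produces the smile.)

    # Linear expected-next-score model from above: -1.46 at your own
    # 1-yard line, 5.26 at the opponent's 1-yard line (your 99).
    def expected_points(yard):            # yard = distance from own goal, 1..99
        return -1.46 + (5.26 - (-1.46)) * (yard - 1) / 98.0

    def turnover_value(yard):             # ignoring any runback
        # You lose your own expectation and hand the opponent theirs,
        # at the mirrored spot 100 - yard.
        return expected_points(yard) + expected_points(100 - yard)

    print(turnover_value(50))             # 3.8 -- constant for a linear model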
I am willing to entertain comments that this chart shouldn't look like this, peaking at the 20-yard line at around 57.25 yards and dropping on either side. I think a turnover is worth fewer yards as you go farther down the field because it gets harder for the opposition to score on the ensuing possession. Why it should go down closer to your own goal line, I'm not sure.

We're not done yet, true believers, because in the comments on the Week 3 team efficiency tables, Vince asked if we knew expected scoring for down and distance. I had planned to write this article about expected scoring by distance, but it was easy enough to go back and sort by down. Get out your 3-D glasses for this one, and try not to squint:

Note that we're including punts here. You can see how the difference between the 2nd down trend and the 3rd down trend is larger than the difference between the 1st down trend and the 2nd down trend, and the difference between the 3rd down and the 4th down trend is even larger. As an added bonus, the correlation between the trendline and the actual data gets smaller the closer you get to 4th down. On 1st down, the expected next score becomes positive between the 13 and 14-yard lines; on 4th down, the expected next score isn't positive until just before the 50-yard line because you are almost always turning the ball over for the next down.

One more fun graph, just because I can. This graph shows the value of a turnover in points, only this time it's been split out by down. 4th down isn't here, since that's almost always a "turnover" anyway. While the point value of field position for the offense changes with the down, the point value of field position for the defense (the team that gets the ball on the turnover) is always the same, since they always start on first down. That's why the top line is another happy symmetric smile, but the other two lines are not.

OK, so what's the moral of the story? Having the ball doesn't necessarily mean you are likely the next team to score; losing the ball doesn't necessarily mean you are going to give up the next score. The likelihood that your team will score -- and the amount of points you are likely to score -- improves gradually as you move towards the goal line, but it only improves slightly with every yard until the last few yards. Sticking your opponent with the ball near his own goal line is worth about as much as having the ball yourself around your own 40-yard line. Sometimes a team looks like it has a great offense when what it really has is a great defense that gets the offense the ball in good field position where it is easier to score. As for the value of a turnover, giving the ball up is worth about 3.8 points in the middle of the field, about 4 points at the 20-yard lines, and 4.25 points at the goal line. If you want to get technical, it's worth more on first down than second down, and more on second down than third down. And Excel makes neato graphs.

4 comments, Last at 08 Nov 2011, 2:09pm by bravehoptoad

by The Ninjalectual (not verified) :: Mon, 10/02/2006 - 7:51pm
First comment in nearly 3 years! I miss graphs like these being posted in articles. Is this research factored into the modern DVOA formula? It may or may not make a difference... Chad Pennington threw his first career red zone interception this week, on fourth down, so does that single play affect his PAR more or less than, say, Matt Hasselbeck, who threw one around midfield? Great research though, and I'm grateful to learn from it.
by V (not verified) :: Mon, 12/03/2007 - 1:14am
Just wanted to let you know that this is my favorite article on the entire site. Totally changed the way I thought about football. You guys are amazing.

by PatsFan :: Tue, 10/05/2010 - 9:25pm
Somehow I had missed this article all these years. But a commenter in the 2010 Week 4 DVOA thread pointed to it. Very interesting!

by bravehoptoad :: Tue, 11/08/2011 - 2:09pm
3 comments? Why aren't there 500? This article is great stuff -- good enough to deserve that.
{"url":"http://www.footballoutsiders.com/stat-analysis/2003/how-many-points-turnover-worth","timestamp":"2014-04-17T00:48:44Z","content_type":null,"content_length":"51523","record_id":"<urn:uuid:8df77a74-8dfa-4504-a8b4-19e9582a916b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
Portability Haskell 98
Stability stable
Maintainer haskell@henning-thielemann.de

A type class for non-negative numbers. Prominent instances are Numeric.NonNegative.Wrapper.T and Peano numbers. This class cannot do any checks, but it lets you show the user what arguments your function expects. Thus you must define class instances with care. In fact many standard functions (take, '(!!)', ...) should have this type class constraint.

class (Ord a, Monoid a) => C a where Source

Instances of this class must ensure non-negative values. We cannot enforce this by types, but the type class constraint NonNegative.C avoids accidental usage of types which allow for negative numbers. The Monoid superclass contributes a zero and an addition.

split :: a -> a -> (a, (Bool, a)) Source

split x y == (m,(b,d)) means that b == (x<=y), m == min x y, d == max x y - min x y, that is d == abs(x-y). We have chosen this function as the base function, since it provides comparison and subtraction in one go, which is important for replacing common structures like

    if x<=y then f(x-y) else g(y-x)

that lead to a memory leak for Peano numbers. We have chosen the simple check x<=y instead of a full-blown compare, since we want Zero <= undefined for Peano numbers. Because of undefined values, split is in general not commutative in the sense

    let (m0,(b0,d0)) = split x y
        (m1,(b1,d1)) = split y x
    in m0==m1 && d0==d1

The result values are in the order in which they are generated for Peano numbers. We have chosen the nested pair instead of a triple in order to prevent a memory leak that occurs if you only use b and d and ignore m. This is demonstrated by test cases Chunky.splitSpaceLeak3 and Chunky.splitSpaceLeak4.

Instances:
(Ord a, Num a) => C (T a)
C a => C (T a) -- This instance is not correct with respect to the equality check if the involved numbers contain zero chunks.

splitDefault :: (Ord b, Num b) => (a -> b) -> (b -> a) -> a -> a -> (a, (Bool, a)) Source

Default implementation for wrapped types of the Ord and Num classes.

(-|) :: C a => a -> a -> a Source

x -| y == max 0 (x-y)

The default implementation is not efficient, because it compares the values and then subtracts, again, if safe. max 0 (x-y) is more elegant and efficient but not possible in the general case, since x-y may already yield a negative number.

maximum :: C a => [a] -> a Source

Left-biased maximum of a list of numbers that can also be empty. It holds maximum [] == zero.

switchDifferenceNegative :: C a => a -> a -> (a -> b) -> (a -> b) -> b Source

In switchDifferenceNegative x y branchXminusY branchYminusX the function branchXminusY is applied to x-y if this difference is non-negative, otherwise branchYminusX is applied to y-x.
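For readers more comfortable outside Haskell, the pure value-level semantics of split and -| are easy to mirror; a rough Python sketch (illustration only -- the laziness that motivates the class design, e.g. Zero <= undefined for Peano numbers, does not carry over, and the name monus for truncated subtraction is my choice):

    def split(x, y):
        # split x y == (min x y, (x <= y, abs (x - y)))
        return (min(x, y), (x <= y, abs(x - y)))

    def monus(x, y):
        # x -| y == max 0 (x - y), truncated subtraction
        return max(0, x - y)

    # split(3, 5) -> (3, (True, 2));  split(5, 3) -> (3, (False, 2))
    # monus(3, 5) -> 0;               monus(5, 3) -> 2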
{"url":"http://hackage.haskell.org/package/non-negative-0.1/docs/Numeric-NonNegative-Class.html","timestamp":"2014-04-21T15:46:55Z","content_type":null,"content_length":"10812","record_id":"<urn:uuid:8feea5fe-8dfe-4b28-af79-7191ef9b95ec>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
set operations

November 27th 2006, 12:10 PM #1
Oct 2006

set operations
What is the difference between the union of two sets and the intersection of two sets? Give an example of each.

November 27th 2006, 12:19 PM #2
Global Moderator
Nov 2005
New York City

A union of two sets is a set that contains all the elements of the two sets.
$A=\{ \mbox{ all men } \}$
$B=\{ \mbox{ all women} \}$
$A\cup B=\{ \mbox{ all people (except TPH for he is not human) } \}$
The intersection of two sets is a set that contains all elements that are in both sets.
$A=\{ \mbox{ all people with hair } \}$
$B= \{ \mbox{ all people with eyes} \}$
$A\cap B=\{ \mbox{all people that have hair and eyes} \}$

November 27th 2006, 12:29 PM #3
Oct 2006

you're the best thanx
{"url":"http://mathhelpforum.com/discrete-math/8072-set-operations.html","timestamp":"2014-04-20T17:56:24Z","content_type":null,"content_length":"35729","record_id":"<urn:uuid:e296af45-ce6b-446f-a591-0576cd5eb3a0>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
Isometric embedding of a Kaehler manifold as a special Lagrangian in a Calabi-Yau manifold

I am reading the paper "Hyperkaehler structures on the total space of holomorphic cotangent bundles" by D. Kaledin and I am asking if it is possible to embed a real-analytic Kähler manifold, isometrically, as a special Lagrangian in a Calabi-Yau manifold. Actually what I am looking for is the following: Start with a compact real-analytic Kähler manifold $(M, I, \omega)$ of complex dimension $n$; in a neighbourhood of the zero-section in the cotangent bundle $T^{*}M$ there should exist a holomorphic $(2n,0)$-form $\Omega$ (with respect to some complex structure on this neighbourhood) and a Kähler form $\tilde{\omega}$ such that the forms $Im(\Omega)$ and $\tilde{\omega}$ vanish when restricted to $M$ (the zero section) and $\tilde{\omega}^{2n} = C_{n} \Omega \wedge \bar{\Omega}$ for some constant $C_{n}$ that depends only on $n$.

I know that one can do this. But I don't know references where I can find an explanation of this. Is it sufficient just to read the paper of Kaledin or do I also have to switch to other references? By using Kaledin's paper, what ingredients are necessary for a proof of this embedding problem? I am a beginner in Calabi-Yau manifolds and hyperkähler manifolds and I would be very thankful if someone has the answers. I hope for a lot of replies and also hope that this question is not too trivial. Best Regards, Pavel

dg.differential-geometry complex-geometry smooth-manifolds fa.functional-analysis

try looking at papers of Stenzel: math.osu.edu/~stenzel.3/research/publications/ricci-flat.pdf – Spiro Karigiannis Oct 20 '12 at 12:48
or by Calabi: archive.numdam.org/ARCHIVE/ASENS/ASENS_1979_4_12_2/… – Spiro Karigiannis Oct 20 '12 at 12:49
Your question suggests that you are looking for an isometric embedding of the given Kähler manifold as a special Lagrangian in a Calabi-Yau manifold, but you don't mention this requirement in the text. I'll just point out that the induced metric on any special Lagrangian submanifold of a Calabi-Yau manifold is necessarily real-analytic, so it follows that it is not possible, in general, to isometrically embed a given Kähler manifold as a special Lagrangian in some Calabi-Yau manifold. – Robert Bryant Oct 20 '12 at 22:19
ok, I see. let's assume that the given Kähler manifold is also real analytic. Is it then possible? how can one explain this? what are the ingredients in showing this? – Pavel Oct 21 '12 at 6:01
I am not exactly sure what you mean by "explanation", but since you're asking for references, have a look at Birte Feix's thesis "Hyperkaehler metrics on cotangent bundles". There she constructs the HK metric in a different way. See also mathoverflow.net/questions/46752/… – Peter Dalakov Oct 21 '12 at 14:30

2 Answers

Disclaimer: I am not sure what kind of "explanation" you are looking for. I would guess that you are after the observation (due to Hitchin), that complex Lagrangian submanifolds become special Lagrangian after rotating the complex structure.

Observation: Let $X$ be a hyperkaehler manifold. Let $\{I,J,K\}$ be a triple of complex structures, satisfying the quaternionic identities, and let $\{\omega_I,\omega_J,\omega_K\}$ be the respective Kaehler forms. Let $M\subset (X,I,\omega_I)$ be a complex-lagrangian submanifold for the complex-symplectic form $\omega^c= \omega_J+i\omega_K$.
Then $M$ is a special Lagrangian submanifold of $(X,J, \omega_J,\Omega = (\omega_K+i\omega_I)^{\dim_{\mathbb{C}} M})$.

(Actually, if $\dim_{\mathbb{C}} M$ is odd you must either take $i\Omega$ as your holomorphic volume form, or use the more relaxed definition of special Lagrangian.)

Here "complex-Lagrangian" means that $M\subset (X,I)$ is a complex submanifold and $\left. \omega^c\right|_M=0$. So given a real-analytic Kähler manifold, you embed it as the zero-section of the cotangent bundle, take the Kaledin-Feix metric on a (formal) tubular neighbourhood, and rotate the complex structure.

How can you show then that, after a rotation, it satisfies the Calabi-Yau equation?

You should not ask a new question as an answer to an existing question. You can either create a new question or post a comment. But it's not at all clear what you mean. A hyperKahler manifold is Calabi-Yau in an $S^2$ worth of ways. This is clear, because the triple $\omega_I$, $\omega_J$, and $\omega_K$ are all parallel with respect to the Calabi-Yau metric, so the $\Omega$ that Peter defines is parallel, thus the pair $(\omega_J, \Omega = (\omega_K + i \omega_I)^{\dim_{\mathbb C} M})$ is a Calabi-Yau structure. – Spiro Karigiannis Oct 23 '12 at 18:19
Yes but does it follow then that $\omega_{J}^{2n} = c_{n} \Omega \wedge \bar{\Omega}$, where $c_{n}$ is a constant depending only on $n$, where $n = \dim_{\mathbb{C}}M$? – Mina Oct 24 '12 at
ok I will post it as a question :). – Mina Oct 24 '12 at 16:46
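For completeness, a sketch of the computation behind the Observation (my own write-up, so check the conventions): since $\left.\omega^c\right|_M = \left.\omega_J\right|_M + i\left.\omega_K\right|_M = 0$, the restrictions $\omega_J|_M$ and $\omega_K|_M$ vanish separately, and the first of these says exactly that $M$ is Lagrangian for $\omega_J$. Writing $n = \dim_{\mathbb{C}} M$,

$$\left.\Omega\right|_M = \left.(\omega_K + i\omega_I)^n\right|_M = i^n \left.\omega_I^n\right|_M,$$

and since $M$ is an $I$-complex submanifold, $\omega_I|_M$ is its Kähler form, so $\omega_I^n|_M$ is a real volume form on $M$. Hence $\operatorname{Im}\Omega|_M = \operatorname{Im}(i^n)\,\omega_I^n|_M$ vanishes precisely when $n$ is even, which is also why the odd-dimensional case needs $i\Omega$ instead.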
{"url":"http://mathoverflow.net/questions/110140/isometric-embedding-of-a-kaehler-manifold-as-a-special-lagrangian-in-a-calabi-ya/110432","timestamp":"2014-04-18T14:01:51Z","content_type":null,"content_length":"66769","record_id":"<urn:uuid:71d71e04-bbdf-489a-b03a-cb1ae587b9dd>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] Complex rational expressions HELP!

November 6th 2007, 11:05 AM #1
Nov 2007

[SOLVED] Complex rational expressions HELP!
I do not understand how to do these problems. Is there anyone who understands these?

(3x/y - x) / (2y - y/x)

Yes. I am new to this site and am not exactly sure how to type it that way, but yes I need to simplify. THANKS!

What about if you multiply the complex fraction (top & bottom) by $xy\,?$

the easiest way i'm seeing is to just combine the fractions in the numerator and the denominator and then simplify:

$\frac {\frac {3x}y - x}{2y - \frac yx} = \frac {\frac {3x - xy}y}{\frac {2xy - y}x} = \frac {x(3 - y)}y \cdot \frac x{y(2x - 1)} = \frac {x^2 (3 - y)}{y^2 (2x - 1)}$

...or yes, you can do it Krizalid's way. i thought about it, but for some reason i decided to do it the hard way.
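For anyone who wants to double-check the algebra by machine, a tiny SymPy sketch (not part of the original thread):

    import sympy as sp

    x, y = sp.symbols('x y')
    original = (3*x/y - x) / (2*y - y/x)
    simplified = x**2 * (3 - y) / (y**2 * (2*x - 1))
    print(sp.simplify(original - simplified))   # prints 0, so they agree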
{"url":"http://mathhelpforum.com/algebra/22130-solved-complex-rational-expressions-help.html","timestamp":"2014-04-17T21:01:10Z","content_type":null,"content_length":"44541","record_id":"<urn:uuid:ccbc1539-7656-4d35-a2dd-0b752d523542>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
Dataplot Vol 1 Auxiliary Chapter

Compute the bias corrected log odds ratio between two binary variables. Given two variables where each variable has exactly two possible outcomes (typically defined as success and failure), we define the odds ratio as:

    o = (N[11]/N[12])/(N[21]/N[22]) = (N[11]N[22])/(N[12]N[21])

where

    N[11] = number of successes in sample 1
    N[21] = number of failures in sample 1
    N[12] = number of successes in sample 2
    N[22] = number of failures in sample 2

The first definition shows the meaning of the odds ratio clearly, although it is more commonly given in the literature with the second definition. The log odds ratio is the logarithm of the odds ratio:

    l(o) = LOG{(N[11]/N[12])/(N[21]/N[22])} = LOG{(N[11]N[22])/(N[12]N[21])}

Alternatively, the log odds ratio can be given in terms of the proportions

    l(o) = LOG{(p[11]/p[12])/(p[21]/p[22])} = LOG{(p[11]p[22])/(p[12]p[21])}

where

    p[11] = N[11]/(N[11] + N[21]) = proportion of successes in sample 1
    p[21] = N[21]/(N[11] + N[21]) = proportion of failures in sample 1
    p[12] = N[12]/(N[12] + N[22]) = proportion of successes in sample 2
    p[22] = N[22]/(N[12] + N[22]) = proportion of failures in sample 2

Success and failure can denote any binary response. Dataplot expects "success" to be coded as "1" and "failure" to be coded as "0".

Dataplot actually returns the bias corrected version of the statistic:

    l'(o) = LOG[{(N[11]+0.5)(N[22]+0.5)}/{(N[12]+0.5)(N[21]+0.5)}]

In addition to reducing bias, this statistic also has the advantage that the odds ratio is still defined even when N[12] or N[21] is zero (the uncorrected statistic will be undefined in these cases). In practice, the log odds ratio is more often used than the odds ratio.

    LET <par> = LOG ODDS RATIO <y1> <y2> <SUBSET/EXCEPT/FOR qualification>

where <y1> is the first response variable; <y2> is the second response variable; <par> is a parameter where the computed log odds ratio is stored; and where the <SUBSET/EXCEPT/FOR qualification> is optional.

    LET A = LOG ODDS RATIO Y1 Y2
    LET A = LOG ODDS RATIO Y1 Y2 SUBSET TAG > 2

The two variables need not have the same number of elements. There are two ways you can define the response variables:

1. Raw data - in this case, the variables contain 0's and 1's. If the data is not coded as 0's and 1's, Dataplot will check for the number of distinct values. If there are two distinct values, the minimum value is converted to 0's and the maximum value is converted to 1's. If there is a single distinct value, it is converted to 0's if it is less than 0.5 and to 1's if it is greater than or equal to 0.5. If there are more than two distinct values, an error is returned.

2. Summary data - if there are two observations, the data is assumed to be the 2x2 summary table. That is,

    Y1(1) = N11
    Y1(2) = N21
    Y2(1) = N12
    Y2(2) = N22

The following additional commands are supported

    TABULATE LOG ODDS RATIO Y1 Y2 X
    CROSS TABULATE LOG ODDS RATIO Y1 Y2 X1 X2
    LOG ODDS RATIO PLOT Y1 Y2 X
    CROSS TABULATE LOG ODDS RATIO PLOT Y1 Y2 X1 X2
    BOOTSTRAP LOG ODDS RATIO PLOT Y1 Y2
    JACKNIFE LOG ODDS RATIO PLOT Y1 Y2

Note that the above commands expect the variables to have the same number of observations. If the two samples are in fact of different sizes, there are two ways to address the issue:

1. Y1 and Y2 can contain the summary data. That is,

    Y1(1) = N11
    Y1(2) = N21
    Y2(1) = N12
    Y2(2) = N22

This is a useful option in that the data is sometimes only available in summary form. Note that this will not work for the BOOTSTRAP PLOT and JACKNIFE PLOT commands (these require raw data).

2.
You can specify a missing value for the smaller sample. For example, if Y1 has 100 observations and Y2 has 200 observations, you can do something like

    SET STATISTIC MISSING VALUE -99
    LET Y1 = -99 FOR I = 101 1 200

LOGIT is a synonym for LOG ODDS RATIO.

Related Commands:
LOG ODDS RATIO STANDARD ERROR = Compute the standard error of the bias corrected log(odds ratio).
ODDS RATIO = Compute the bias corrected odds ratio.
ODDS RATIO STANDARD ERROR = Compute the standard error of the bias corrected odds ratio.
TRUE POSITIVES = Compute the proportion of true positives.
TRUE NEGATIVES = Compute the proportion of true negatives.
FALSE NEGATIVES = Compute the proportion of false negatives.
FALSE POSITIVES = Compute the proportion of false positives.
TEST SENSITIVITY = Compute the test sensitivity.
TEST SPECIFICITY = Compute the test specificity.
RELATIVE RISK = Compute the relative risk.
TABULATE = Compute a statistic for data with a single grouping variable.
CROSS TABULATE = Compute a statistic for data with two grouping variables.
STATISTIC PLOT = Generate a plot of a statistic for data with a single grouping variable.
CROSS TABULATE PLOT = Generate a plot of a statistic for data with two grouping variables.
BOOTSTRAP PLOT = Generate a bootstrap plot for a given statistic.

Reference: Fleiss, Levin, and Paik (2003), "Statistical Methods for Rates and Proportions", Third Edition, Wiley, Chapter 6.

Applications: Categorical Data Analysis

Implementation Date:

Program:

    let n = 1
    let p = 0.2
    let y1 = binomial rand numb for i = 1 1 100
    let p = 0.1
    let y2 = binomial rand numb for i = 1 1 100
    let p = 0.4
    let y1 = binomial rand numb for i = 101 1 200
    let p = 0.08
    let y2 = binomial rand numb for i = 101 1 200
    let p = 0.15
    let y1 = binomial rand numb for i = 201 1 300
    let p = 0.18
    let y2 = binomial rand numb for i = 201 1 300
    let p = 0.6
    let y1 = binomial rand numb for i = 301 1 400
    let p = 0.45
    let y2 = binomial rand numb for i = 301 1 400
    let p = 0.3
    let y1 = binomial rand numb for i = 401 1 500
    let p = 0.1
    let y2 = binomial rand numb for i = 401 1 500
    let x = sequence 1 100 1 5
    let a = log odds ratio y1 y2 subset x = 1
    tabulate log odds ratio y1 y2 x
    label case asis
    xlimits 1 5
    major xtic mark number 5
    minor xtic mark number 0
    xtic mark offset 0.5 0.5
    y1label Bias Corrected Log Odds Ratio
    x1label Group ID
    character x blank
    line blank solid
    log odds ratio plot y1 y2 x

Date created: 7/20/2007
Last updated: 7/20/2007
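Outside of Dataplot, the bias-corrected statistic defined above is easy to reproduce. A minimal Python sketch (the function name is mine, not a Dataplot command):

    import math

    def log_odds_ratio(n11, n21, n12, n22):
        # Bias-corrected log odds ratio:
        # LOG[{(N11+0.5)(N22+0.5)} / {(N12+0.5)(N21+0.5)}]
        return math.log((n11 + 0.5) * (n22 + 0.5) /
                        ((n12 + 0.5) * (n21 + 0.5)))

    # 2x2 summary layout from the text:
    # n11/n21 = successes/failures in sample 1,
    # n12/n22 = successes/failures in sample 2.
    print(log_odds_ratio(20, 80, 10, 90))   # approximately 0.79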
{"url":"http://www.itl.nist.gov/div898/software/dataplot/refman2/auxillar/logoddra.htm","timestamp":"2014-04-20T23:27:06Z","content_type":null,"content_length":"16599","record_id":"<urn:uuid:1d095b36-0dd6-4732-a017-01db1ed76986>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
Tucker, GA Statistics Tutor Find a Tucker, GA Statistics Tutor ...I have used various releases for those 10 years. I am a licensed professional Civil Engineer in the state of Georgia. My license number is PE033191. 16 Subjects: including statistics, chemistry, physics, calculus ...After undergrad I scored in the 99th percentile on the GMAT and successfully taught GMAT courses for the next 3 years. Helped individuals with the math sections of the GRE, SAT, and ACT. Awarded a full tuition scholarship for graduate work in Business Administration at the College of William and Mary in Virginia. 28 Subjects: including statistics, calculus, GRE, physics ...I feel comfortable tutoring in a wide range of subject areas in these fields. I can also aid in writing APA-style research reports. I am very happy to help with APA formatting/citation issues! 3 Subjects: including statistics, SPSS, psychology I have a Ph.D. in sociology and am proficient in SPSS. Not only can I help with data analysis and writing syntax, but also I can explain the logic behind the statistical analyses. With strong background in mathematics, I can explain the concepts easily. 9 Subjects: including statistics, Japanese, algebra 1, precalculus ...I believe I would be able to help a student understand the concepts and requirements necessary to exceed in such a course. One of my first tutoring jobs was a junior high student who needed help with geometry and algebra I. I love both subjects and have tutored them rather extensively. 65 Subjects: including statistics, reading, English, calculus Related Tucker, GA Tutors Tucker, GA Accounting Tutors Tucker, GA ACT Tutors Tucker, GA Algebra Tutors Tucker, GA Algebra 2 Tutors Tucker, GA Calculus Tutors Tucker, GA Geometry Tutors Tucker, GA Math Tutors Tucker, GA Prealgebra Tutors Tucker, GA Precalculus Tutors Tucker, GA SAT Tutors Tucker, GA SAT Math Tutors Tucker, GA Science Tutors Tucker, GA Statistics Tutors Tucker, GA Trigonometry Tutors
{"url":"http://www.purplemath.com/Tucker_GA_Statistics_tutors.php","timestamp":"2014-04-19T23:59:13Z","content_type":null,"content_length":"23715","record_id":"<urn:uuid:e413344b-e772-413d-bfc7-6fadf620decf>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Algebra Toolbox 3

The last part had a bit of theory and terminology groundwork. This part will be focusing on the computational side instead.

Slicing up matrices

In the previous parts, I've been careful to distinguish between vectors and their representation as coefficients (which depends on a choice of basis), and similarly I've tried to keep the distinction between linear transforms and matrices clear. This time, it's all about matrices and their properties, so I'll be a bit sloppier in this regard. Unless otherwise noted, assume we're dealing exclusively with vector spaces $\mathbb{R}^n$ (for various n) using their canonical bases. In this setting (all bases fixed in advance), we can uniquely identify linear transforms with their matrices, and that's what I'll do. However, I'll be switching between scalars, vectors and matrices a lot, so to avoid confusion I'll be a bit more careful about typography: lowercase letters like $x$ denote scalars, lowercase bold-face letters $\mathbf{v}$ denote column vectors, row vectors (when treated as vectors) will be written as the transpose of column vectors $\mathbf{v}^T$, and matrices use upper-case bold-face letters like $\mathbf{A}$. A vector is made of constituent scalars $\mathbf{v} = (v_i)_{i=1..n}$, and so is a matrix (with two sets of indices) $\mathbf{A} = (a_ {ij})_{i=1..m, j=1..n}$. Note all these are overlapping to a degree: we can write a 1×1 matrix as a scalar, vector or matrix, and similarly a nx1 (or 1xn) matrix can be written either as vector or matrix. In this context, matrices are the most general kind of object we'll be dealing with, so unless something needs to be a vector or scalar, I'll write it as a matrix.

All that said, let's take another look at matrices. As I explained before (in part 1), the columns of a matrix contain the images of the basis vectors. Let's give those vectors names: $\mathbf{A} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} = \begin {pmatrix} \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n \end{pmatrix}$. This is just taking the n column vectors making up A and giving them distinct names. This is useful when you look at a matrix product: $\mathbf{B}\mathbf{A} = \mathbf{B} \begin{pmatrix} \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n \end{pmatrix} = \begin{pmatrix} \mathbf{B}\mathbf{a}_1 & \mathbf{B}\mathbf{a}_2 & \cdots & \ mathbf{B}\mathbf{a}_n \end{pmatrix}$. You can prove this algebraically by expanding out the matrix product and doing a bunch of index manipulation, but I'd advise against it: don't expand out into scalars unless you have exhausted every other avenue of attack -- it's tedious and extremely error-prone. You can also prove this by exploiting the correspondence between linear transforms and matrices. This is elegant, very short and makes for a nice exercise, so I won't spoil it. But there's another way to prove it algebraically, without expanding it out into scalars, that's more in line with the spirit of this series: getting comfortable with manipulating matrix expressions. The key is writing the $\mathbf{a}_i$ as the result of a matrix expression. Now, as explained before, the i-th column of a matrix contains the image of the i-th basis vector. Since we're using the canonical basis, our basis vectors are simply
Since we’re using the canonical basis, our basis vectors are simply $\mathbf{e}_1 = \begin{pmatrix} 1 & 0 & \cdots & 0 \end{pmatrix}^T$ $\mathbf{e}_2 = \begin{pmatrix} 0 & 1 & \cdots & 0 \end{pmatrix}^T$ $\mathbf{e}_n = \begin{pmatrix} 0 & 0 & \cdots & 1 \end{pmatrix}^T$ and it’s easy to verify that $\mathbf{a}_i = \mathbf{A} \mathbf{e}_i$ (this one you’ll have to expand out if you want to prove it purely algebraically, but since the product is just with ones and zeros, it’s really easy to check). So multiplying $\mathbf{A}$ from the right by one of the $\mathbf{e}_i$ gives us the i-th column of a, and conversely, if we have all n column vectors, we can piece together the full matrix by gluing them together in the right order. So let’s look at our matrix product again: we wan’t the i-th column of the matrix product $\mathbf{B}\mathbf{A}$, so we look at $(\mathbf{B}\mathbf{A})\mathbf{e}_i = \mathbf{B}(\mathbf{A}\mathbf{e}_i) = \mathbf{B} \mathbf{a}_i$ exactly as claimed. As always, there’s a corresponding construction using row vectors. Using the dual basis of linear functionals (note no transpose this time!) $\mathbf{e}^1 = \begin{pmatrix} 1 & 0 & \cdots & 0 \end{pmatrix}$ $\mathbf{e}^2 = \begin{pmatrix} 0 & 1 & \cdots & 0 \end{pmatrix}$ $\mathbf{e}^m = \begin{pmatrix} 0 & 0 & \cdots & 1 \end{pmatrix}$ we can disassemble a matrix into its rows: $\mathbf{A} = \begin{pmatrix} \mathbf{a}^1 \\ \mathbf{a}^2 \\ \vdots \\ \mathbf{a}^m \end{pmatrix} = \begin{pmatrix} \mathbf{e}^1 \mathbf{A} \\ \mathbf{e}^2 \mathbf{A} \\ \vdots \\ \mathbf{e}^m \ mathbf{A} \end{pmatrix}$ and since $\mathbf{e}^i (\mathbf{A} \mathbf{B}) = (\mathbf{e}^i \mathbf{A}) \mathbf{B} = \mathbf{a}^i \mathbf{B}$, we can write matrix products in terms of what happens to the row vectors too (though this time, we’re expanding in terms of the row vectors of the first not second factor): $\mathbf{A} \mathbf{B} = \begin{pmatrix} \mathbf{a}^1 \mathbf{B} \\ \mathbf{a}^2 \mathbf{B} \\ \vdots \\ \mathbf{a}^m \mathbf{B} \end{pmatrix}$. Gluing it back together Above, the trick was to write the “slicing” step as a matrix product with one of the basis vectors – $\mathbf{A} \mathbf{e}_i$ and so forth. We’ll soon need to deal with the gluing step too, so let’s work out how to write that as a matrix expression too so we can manipulate it easily. Let’s say we have a m×2 matrix A that we’ve split into two column vectors $\mathbf{a}_1$, $\mathbf{a}_2$. What’s the matrix expression that puts A back together from those columns again? So far we’ve expressed that as a concatenation; but to be able to manipulate it nicely, we want it to be a linear expression (same as everything else we’re dealing with). So it’s gotta be a sum: one term for $\ mathbf{a}_1$ and one for $\mathbf{a}_2$. Well, the sum is supposed to be $\mathbf{A}$, which is a m×2 matrix, so the summands all have to be m×2 matrices too. The $\mathbf{a}_i$ are column vectors (m×1 matrices), so for the result to be a m×2 matrix, we have to multiply from the right with a 1×2 matrix. From there, it’s fairly easy to see that the expression that re-assembles $\mathbf{A}$ from its column vectors is simply: $\mathbf{A} = \mathbf{a}_1 \begin{pmatrix} 1 & 0 \end{pmatrix} + \mathbf{a}_2 \begin{pmatrix} 0 & 1 \end{pmatrix} = \mathbf{a}_1 \mathbf{e}^1 + \mathbf{a}_2 \mathbf{e}^2$. Note that the terms have the form column vector × row vector – this is the general form of the vector outer product (or dyadic product) $\mathbf{u} \mathbf{v}^T$ that we first saw in the previous part. 
If A has more than two columns, this generalizes in the obvious way. So what happens if we disassemble a matrix only to re-assemble it again? Really, this should be a complete no-op. Let's check: \begin{aligned} \mathbf{A} & = \mathbf{a}_1 \mathbf{e}^1 + \mathbf{a}_2 \mathbf{e}^2 + \cdots + \mathbf{a}_n \mathbf{e}^n \\ & = \mathbf{A} \mathbf{e}_1 \mathbf{e}^1 + \mathbf{A} \mathbf{e}_2 \mathbf {e}^2 + \cdots + \mathbf{A} \mathbf{e}_n \mathbf{e}^n \\ & = \mathbf{A} (\mathbf{e}_1 \mathbf{e}^1 + \mathbf{e}_2 \mathbf{e}^2 + \cdots + \mathbf{e}_n \mathbf{e}^n) \\ & = \mathbf{A} \mathbf{I}_{n \ times n} = \mathbf{A} \end{aligned} For the last step, note that the summands $\mathbf{e}_i \mathbf{e}^i$ are matrices that are all-zero, except for a single one in row i, column i. Adding all these together produces a matrix that's zero everywhere except on the diagonal, where it's all ones -- in short, the n×n identity matrix. So yes, disassembling and re-assembling a matrix is indeed a no-op. Who would've guessed.

Again, the same thing can be done for the rows; instead of multiplying by $\mathbf{e}^i$ from the right, you end up multiplying by $\mathbf{e}_i$ from the left, but same difference. So that covers slicing a matrix into its constituent vectors (of either the row or column kind) and putting it back together. Things get a bit more interesting (and a lot more useful) when we allow more general blocks.

Block matrices

In our first example above, we sliced A into n separate column vectors. But what if we just slice it into just two parts, a left "half" and a right "half" (the sizes need not be the same), both of which are general matrices? Let's try: $\mathbf{A} = \begin{pmatrix} \mathbf{A}_1 & \mathbf{A}_2 \end{pmatrix}$ For the same reasons as with column vectors, multiplying a second matrix B from the left just ends up acting on the halves separately: $\mathbf{B} \mathbf{A} = \begin{pmatrix} \mathbf{B} \mathbf{A}_1 & \mathbf{B} \mathbf{A}_2 \end{pmatrix}$ and the same also works with right multiplication on vertically stacked matrices. Easy, but not very interesting yet -- we're effectively just keeping some columns (rows) glued together through the whole process. It gets more interesting when you start slicing in the horizontal and vertical directions simultaneously, though: $\mathbf{A} = \begin{pmatrix} \mathbf{A}_{11} & \mathbf{A}_{12} \\ \mathbf{A}_{21} & \mathbf{A}_{22} \end{pmatrix}$

Note that for the stuff I'm describing here to work, the "cuts" between blocks need to be uniform across the whole matrix -- that is, all matrices in a block column need to have the same width, and all matrices in a block row need to have the same height. So in our case, let's say $\mathbf{A}_{11}$ is a p×q matrix. Then $\mathbf{A}_{12}$ must be a p×(n-q) matrix (the heights have to agree and $\mathbf{A}$ is n columns wide), $\mathbf{A}_{21}$ is (m-p)×q, and $\mathbf{A}_{22}$ is (m-p)×(n-q). Adding block matrices is totally straightforward -- it's all element-wise anyway. Multiplying block matrices is more interesting. For regular matrix multiplication $\mathbf{B} \mathbf{A}$, we require that B has as many columns as A has rows; for block matrix multiplication, we'll also require that B has as many block columns as A has block rows, and that all of the individual block sizes are compatible as well. Given all that, how does block matrix multiplication work?
Originally I meant to give a proof here, but frankly it's all notation and not very enlightening, so let's skip straight to the punchline: \begin{aligned} \mathbf{B} \mathbf{A} &= \begin{pmatrix} \mathbf{B}_1 & \mathbf{B}_2 \end{pmatrix} \begin{pmatrix} \mathbf{A}_{11} & \mathbf{A}_{12} \\ \mathbf{A}_{21} & \mathbf{A}_{22} \end{pmatrix} \\ &= \begin{pmatrix} \mathbf{B}_1 \mathbf{A}_{11} + \mathbf{B}_2 \mathbf{A}_{21} & \mathbf{B}_{1} \mathbf{A}_{12} + \mathbf{B}_2 \mathbf{A}_{22} \end{pmatrix} \end{aligned}

Block matrix multiplication works just like regular matrix multiplication: you compute the "dot product" between rows of B and columns of A. This is all independent of the sizes too -- I show it here with a matrix of 1×2 blocks and a matrix of 2×2 blocks because that's the smallest interesting example, but you can have arbitrarily many blocks involved.

So what does this mean? Two things: First, most big matrices that occur in practice have a natural block structure, and the above property means that for most matrix operations, we can treat the blocks as if they were scalars in a much smaller matrix. Even if you don't deal with big matrices, working at block granularity is often a lot more convenient. Second, it means that big matrix products can be naturally expressed in terms of several smaller ones. Even when dealing with big matrices, you can just chop them up into smaller blocks that nicely fit in your cache, or your main memory if the matrices are truly huge.

All that said, the main advantage of block matrices as I see it is just that they add a nice, in-between level of granularity: dealing with the individual scalars making up a matrix is unwieldy and error-prone, but sometimes (particularly when you're interested in some structure within the matrix) operating on the whole matrix at once is too coarse-grained.

Example: affine transforms in homogeneous coordinates

To show what I mean, let's end with a familiar example, at least for graphics/game programmers: matrices representing affine transforms when using homogeneous coordinates. The matrices in question look like this: $\mathbf{M}_1 = \begin{pmatrix} \mathbf{A}_1 & \mathbf{t}_1 \\ \mathbf{0} & 1 \end{pmatrix}$ where $\mathbf{A}_1$ is an arbitrary square matrix and $\mathbf{t}_1$ is a translation vector (note the 0 in the bottom row is printed in bold and means a 0 row vector, not a scalar 0!). So how does the product of two such matrices look? Well, this has obvious block structure, so we can use a block matrix product without breaking A up any further: $\mathbf{M}_2 \mathbf{M}_1 = \begin{pmatrix} \mathbf{A}_2 & \mathbf{t}_2 \\ \mathbf{0} & 1 \end{pmatrix} \begin{pmatrix} \mathbf{A}_1 & \mathbf{t}_1 \\ \mathbf{0} & 1 \end{pmatrix} = \begin{pmatrix} \mathbf{A}_2 \mathbf{A}_1 & \mathbf{A}_2 \mathbf{t}_1 + \mathbf{t}_2 \\ \mathbf{0} & 1 \end{pmatrix}$

Note this works in any dimension -- I just required that A was square, I never specified what the actual size was. This is a fairly simple example, but it's a common case and handy to know. And that should be enough for this post. Next time, I plan to first review some basic identities (and their block matrix analogues) then start talking about matrix decompositions. Until then!
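As a quick numerical sanity check of that last identity, here is a hedged NumPy sketch (not from the original post; random blocks stand in for a real transform):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 3
    A1, t1 = rng.standard_normal((n, n)), rng.standard_normal((n, 1))
    A2, t2 = rng.standard_normal((n, n)), rng.standard_normal((n, 1))

    def homogeneous(A, t):
        # assemble [[A, t], [0, 1]] from its blocks
        bottom = np.hstack([np.zeros((1, n)), np.ones((1, 1))])
        return np.vstack([np.hstack([A, t]), bottom])

    M1, M2 = homogeneous(A1, t1), homogeneous(A2, t2)
    predicted = homogeneous(A2 @ A1, A2 @ t1 + t2)   # block formula above
    assert np.allclose(M2 @ M1, predicted)           # holds for any n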
{"url":"http://fgiesen.wordpress.com/2012/06/25/linear-algebra-toolbox-3/","timestamp":"2014-04-17T18:52:55Z","content_type":null,"content_length":"81768","record_id":"<urn:uuid:beae9ce6-dc49-40e2-9a5e-59e8028242ff>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
An Example on Rolle's Theorem

Use Rolle's Theorem to determine the value of $c$ such that $f'(c) = 0$ for $f(x) = (x - a)^m (x - b)^n$, where $m$ and $n$ are positive integers, on the interval $[a, b]$.
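The page's interactive solution steps are not reproduced here, but the standard computation runs as follows (worked out independently, so verify against your own source). Differentiating with the product rule and factoring,

$$f'(x) = m(x-a)^{m-1}(x-b)^n + n(x-a)^m(x-b)^{n-1} = (x-a)^{m-1}(x-b)^{n-1}\left[ m(x-b) + n(x-a) \right].$$

Inside the open interval $(a, b)$ the first two factors are nonzero, so $f'(c) = 0$ forces $m(c-b) + n(c-a) = 0$, that is

$$c = \frac{mb + na}{m + n},$$

a weighted average of $a$ and $b$ and hence a point of $(a, b)$, exactly as Rolle's Theorem guarantees (note that $f(a) = f(b) = 0$, so the theorem applies).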
{"url":"http://www.icoachmath.com/solvedexample/sampleworksheet.aspx?process=/__cstlqvxbefxaxbgbmexkjfhhxgbhdkxb&.html","timestamp":"2014-04-19T22:05:49Z","content_type":null,"content_length":"48947","record_id":"<urn:uuid:a111859d-f1d6-4768-aca9-513ddbcaf9f5>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
[Haskell-cafe] Haskell.org GSoC Don Stewart dons at galois.com Wed Feb 11 12:51:26 EST 2009 > Hi, > I noticed last year Haskell.org was a mentoring organization for > Google's Summer of Code, and I barely noticed some discussion about it > applying again this year :) > I participated for GCC in 2008 and would like to try again this year; > while I'm still active for GCC and will surely stay so, I'd like to see > something new at least for GSoC. And Haskell.org would surely be a > very, very nice organization. > Since I discovered there's more than just a lot of imperative languages > that are nearly all the same, I love to do some programming in Prolog, > Scheme and of course Haskell. However, so far this was only some toy > programs and nothing "really useful"; I'd like to change this (as well > as learning more about Haskell during the projects). > Here are some ideas for developing Haskell packages (that would > hopefully be of general use to the community) as possible projects: > - Numerics, like basic linear algebra routines, numeric integration and > other basic algorithms of numeric mathematics. I think a lot of the numerics stuff is now covered by libraries (see e.g. haskell-blas, haskell-lapack, haskell-fftw) > - A basic symbolic maths package; I've no idea how far one could do this > as a single GSoC project, but it would surely be a very interesting > task. Alternatively or in combination, one could try to use an existing > free CAS package as engine. Interesting, but niche, imo. > - Graphs. > - Some simulation routines from physics, though I've not really an idea > what exactly one should implement here best. True graphs (the data structure) are still a weak point! There's no canonical graph library for Haskell. > - A logic programming framework. I know there's something like that for > Scheme; in my experience, there are some problems best expressed > logically with Prolog-style backtracking/predicates and unification. > This could help use such formulations from inside a Haskell program. > This is surely also a very interesting project. Interesting, lots of related work, hard to state the benefits to the community though. > What do you think about these ideas? I'm pretty sure there are already > some of those implemented, but I also hope some would be new and really > of some use to the community. Do you think something would be > especially nice to have and is currently missing? Think about how many people would benefit. For example, if all the haddocks on hackage.org were a wiki, and interlinked, every single package author would benefit, as would all -- Don More information about the Haskell-Cafe mailing list
{"url":"http://www.haskell.org/pipermail/haskell-cafe/2009-February/055536.html","timestamp":"2014-04-24T07:02:25Z","content_type":null,"content_length":"5500","record_id":"<urn:uuid:e030f134-9f48-46a3-a904-6b32afe6becb>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
1992
Cited by 360 (20 self)

"We introduce the framework of hybrid automata as a model and specification language for hybrid systems. Hybrid automata can be viewed as a generalization of timed automata, in which the behavior of variables is governed in each state by a set of differential equations. We show that many of the examples considered in the workshop can be defined by hybrid automata. While the reachability problem is undecidable even for very restricted classes of hybrid automata, we present two semidecision procedures for verifying safety properties of piecewise-linear hybrid automata, in which all variables change at constant rates. The two procedures are based, respectively, on minimizing and computing fixpoints on generally infinite state spaces. We show that if the procedures terminate, then they give correct answers. We then demonstrate that for many of the typical workshop examples, the procedures do terminate and thus provide an automatic way for verifying their properties. 1 Introduction More and..."
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=4258957","timestamp":"2014-04-18T01:48:49Z","content_type":null,"content_length":"12725","record_id":"<urn:uuid:2c2ca3a8-5083-497e-bb39-47d3afb79c92>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
ASTM D2990 Flexural Creep Testing of CIPP Liner Materials
Aug. 02, 2007
Author: Steve Ferry

In discussing material property testing, it is sometimes best to start with definitions of the various parameters involved. Over the course of this article, we will discuss stress, strain, flexural modulus, and creep (in particular, flexural creep). Roark's Formulas for Stress and Strain defines stress as the "internal force exerted by either of two adjacent parts of a body upon the other across an imagined plane of separation" and strain is defined as "any forced change in the dimensions of a body."

The equations for calculation of flexural stress and strain can be obtained directly from ASTM D790, Standard Test Methods for Flexural Properties of Unreinforced and Reinforced Plastics and Electrical Insulating Materials, with mathematical formulae as follows: Flexural Stress (Strength) = 3PL/(2bd^2) and Flexural Strain = 6Dd/L^2. Thus, with any modulus generally accepted to be the rate of change of unit stress with respect to unit strain, the mathematical formula for flexural modulus of elasticity is: EB = L^3 m/(4bd^3). For all of these equations, the calculated stress is defined to be at the outer fibers at midspan, P is the load, L is the support span, b and d are the width and depth of the beam specimen respectively, D is the midspan deflection, and m is the slope of the tangent to the initial straight-line portion of the load-deflection curve. The rate of straining as described within ASTM D790 Procedure A is 0.01 in/in/min, or 1% per minute. These tests are typically run to 5% strain, or approximately 5 minutes per test specimen.

Within the product specification ASTM D5813, Standard Specification for Cured-In-Place Thermosetting Resin Sewer Piping Systems, the test protocol for the determination of flexural strength and tangent flexural modulus and the product requirements are well defined. Specifically, ASTM D790 Test Method I-Procedure A is called out, with minimum flexural strength of 4,500 psi and minimum flexural modulus of 250,000 psi required.

Roark defines creep as "continuous increase in deformation under constant or decreasing stress." Additionally, creep is defined within ASTM D883, Standard Terminology Relating to Plastics, as "the time-dependent part of strain resulting from stress." Note that within ASTM D5813 there is no mention of creep, either from a product requirement or test method perspective. However, creep is mentioned within ASTM F1216, Standard Practice for Rehabilitation of Existing Pipelines and Conduits by the Inversion and Curing of a Resin-Impregnated Tube, in vague terms in Note A of Table 1 CIPP Initial Structural Properties as "long-term structural properties," and also in the appendix X1. Design Considerations as EL = long-term (time corrected) modulus of elasticity for CIPP, psi.

Flexural Creep Testing of CIPP Materials

At this stage in the product specification and installation practice, a breakdown occurs in the instructions to testing laboratories as to how to test and calculate creep resistance of CIPP materials. Currently, there are no details to this process contained within any ASTM document. [Microbac] adheres to the requirements of D2990 wherever applicable, but in general, the testing performed would be considered a limited D2990 protocol in that only one set of five specimens is tested through 10,000 hours duration at 23°C.
This agrees with the verbiage of ASTM D2990 Section 10, Selection of Test Conditions, specifically Section 10.1 Test Temperatures: "Selection of temperatures for creep and creep-rupture testing depends on the intended use of the test results and shall be made as follows: (sub-referencing) Section 10.1.2: "To obtain design data, the test temperatures and environment shall be the same as those of the intended end-use application".

However, one of the main items required for creep testing -- the imposed flexural stress at the start of the creep exposure -- is not defined in any of the relevant documents. There exists some guidance in the international literature indicating that an imposed stress equal to 0.25% of the short-term flexural modulus is to be used. Note that this would correspond to the stress required to impose 0.25% initial strain in the test specimens. This 0.25% test criterion is contained within a now-unavailable British Water Research Council fiberglass pipe rehabilitation product specification (with embedded test methods).

Creep is typically performed in accordance with ASTM D2990, Standard Test Methods for Tensile, Compressive, and Flexural Creep and Creep-Rupture of Plastics. The basic equipment is quite simple, consisting of a rack to hold the specimens in 3 point flexure, dial indicators to measure deflection at mid-span, and deadweights to load the specimens at mid-span. The testing is performed at constant stress (load), and must be maintained at the prescribed environmental conditions (temperature and humidity) throughout the 10,000 hour duration of the test. Deflections are measured periodically, with moduli calculated using the initial stress and the mid-span deflection at each time period. Per D2990, log strain in percent versus log time in hours is required to be reported, although Microbac typically also reports all raw data, modulus versus time for each individual specimen, and a log/log plot of the average of all specimens tested at identical conditions.

Flexural Creep Data Interpretation

While numerous data presentation methodologies are given in D2990 Appendix X4, since the creep testing performed is at a single stress and temperature, a simplified approach is sufficient. A single, linear extrapolation to 50-year service life, similar to what is portrayed within Appendix X7.1, using standard trend-line analysis such as that contained within Microsoft Excel®, is used. However, based upon experience gained in extrapolation of hydrostatic design basis data sets for pressure pipe, and since knees are sometimes encountered in the test data, only the most linear portion of the log/log plot is used for the extrapolation (See Figures 1 and 2 in .pdf download).

There are two ways for design engineers to use this data. The first is to take a creep reduction approach whereby the 50-year extrapolated modulus is divided by the short-term D790 modulus, resulting in a percentage reduction. The second is to evaluate the 50-year modulus in comparison to some minimum requirement, such as 125,000 psi. In my opinion, the second approach is more typical of a material science-based design approach, and does not unduly penalize a material which may have significantly higher starting modulus, but larger creep response. Microbac has performed creep tests with imposed stresses as low as 400 psi, and as high as approximately 2,000 psi in the past. It was previously reported to Microbac that this 400 psi value was based upon a maximum hydrostatic head (external pressure) design approach.
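To make the trend-line step concrete outside of Excel, here is a rough Python sketch of a log-log extrapolation to 50 years; the hour/modulus pairs are made-up illustrative numbers, not Microbac measurements:

    import numpy as np

    hours  = np.array([100.0, 300.0, 1000.0, 3000.0, 10000.0])
    moduli = np.array([420e3, 400e3, 375e3, 355e3, 330e3])   # psi, illustrative

    # straight-line fit in log-log space (in practice, fit only the
    # most linear portion of the data, as described above)
    slope, intercept = np.polyfit(np.log10(hours), np.log10(moduli), 1)

    fifty_years = 50 * 8766.0                     # hours, using 8766 h/yr
    E50 = 10 ** (slope * np.log10(fifty_years) + intercept)
    print(f"extrapolated 50-year creep modulus: {E50:,.0f} psi")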
The Current Research Assignment Microbac conducted creep testing on one material, an unsaturated polyester resin with felt laminate (approximately 6mm in thickness). The sample was tested at imposed initial flexural stress levels of 400 psi, 1250 psi, and 0.25% of the short-term modulus of the material. Upon completion of approximately 2,000 hours of testing, the retained creep modulus at the 50-year intercept was evaluated to determine if imposed stress had a significant factor on the retained modulus. For the material tested, which displayed short-term flexural properties of 6,873 psi maximum flexural strength and 662,700 psi flexural modulus, the initial imposed flexural stresses correspond to approximately 6% to 24% of the short term flexural strength (See Figure 3 in .pdf download). Simple inspection of these results would indicate that for a wide range of imposed stresses, the 50-year retained moduli as calculated using trendline analysis in these D2990 data sets are not significantly different. For more information, please contact: microbac_info@microbac.com.
{"url":"http://www.microbac.com/technical-articles/astm-d2990-flexural-creep-testing-of-cipp-liner-materials/","timestamp":"2014-04-20T05:42:51Z","content_type":null,"content_length":"17287","record_id":"<urn:uuid:3190a58a-c81e-447a-a8cb-7edd589248e7>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
Polynomial Exponents

Fractions are really just a division problem which is shown in a special form. Since we can just "distribute" in the exponents for an ordinary division problem, we can do the same for a fraction. Look over the example below:

(2/x)^3 = 2^3 / x^3 = 8 / x^3

We can just distribute in the 3, as in the other problems. As you can see, once the 3 was distributed, the parentheses could be removed. Then the 2^3 was simplified.

The next page contains various resources for this lesson.
{"url":"http://algebrahelp.com/lessons/simplifying/polyexp/pg3.htm","timestamp":"2014-04-16T07:49:04Z","content_type":null,"content_length":"5698","record_id":"<urn:uuid:ddfef371-7059-4e3c-b758-6cb75c795e54>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
Making The Connection Patterns and Relations Students often experience difficulty in Math 10 because they have not made connections between formulas and real life situations. The following experiments can be done with students in middle years or as an introduction to functions in Math 10. Some people might say the Math 10 course is too intense and there is not time for activities and games. These experiments can be conducted in three periods. The advantage is that the students better understand relations and make the connections with their world so that abstract math such as y = mx + b becomes more meaningful for them. This allows students to understand rather than memorize procedures. Lesson One (30 minutes) - Teacher directed (works best when the teacher walks students in unison step by step through this lesson). See Experiment 1. Have students practice taking their pulse for 15 seconds and multiplying by 4. Then have them run on the spot for three minutes. They immediately take their pulse and record. They continue to do this every minute. Students then make the t-chart and the graph to observe that there usually is a pattern as their pulse goes back to normal. If they compare with friends they will usually see a tendency towards a straight line; however the slope may vary from student to student. (This is dependent on their level of fitness). Lesson Two (60 minutes) 10 minutes - teacher directed, 50 minutes pairs at work stations. See Experiment 2. Have water boiling in kettles before class starts. Immediately pour 250 mL of water in Pyrex beakers (one per pair). Students record the temperature. Explain to the students that you will interrupt their activities every ten minutes so that one student in the pair can record the temperature of the water as it cools. Students then rotate from station to station to conduct the experiments. (Make sure that you have set up multiples of each station so that all students have a place to work. This is possible because the equipment is minimal). Provide each student with a copy of the experiments so that each one can record the data. Emphasize that they will simply collect data at this point. Once students have completed the data, ask them to answer the remaining questions. Lesson Three Using an overhead of each activity, discuss with students the results of their experiments. Identify those that tended to have a pattern. Change the variables into x and y. Students should begin to see what relations 'look like' in the real world as well as make connections between math and science.
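If you want to show students what a "tendency towards a straight line" means numerically, a short Python sketch (with made-up pulse readings) fits the line y = mx + b for you:

    import numpy as np

    minutes = np.array([0, 1, 2, 3, 4, 5])                # time after running
    pulse   = np.array([150, 138, 127, 114, 103, 91])     # made-up readings (bpm)

    m, b = np.polyfit(minutes, pulse, 1)                  # least-squares line
    print(f"pulse = {m:.1f} x minutes + {b:.1f}")         # slope of about -12 bpm/min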
{"url":"http://mathcentral.uregina.ca/RR/database/RR.09.97/gauthier46.html","timestamp":"2014-04-20T16:08:49Z","content_type":null,"content_length":"16104","record_id":"<urn:uuid:905b630f-c570-4b13-b82b-51fd05bb29b6>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
finding center of 2D triangle

Question: (Yes, unfortunately, this is a homework question.) I've been given a struct for a 2D triangle with x and y coordinates, a rotation variable, and so on. From the point created by those x and y coordinates, I am supposed to draw a triangle around the point and rotate it appropriately using the rotation variable. I'm familiar with drawing triangles in OpenGL with GL_TRIANGLES. My problem is somehow extracting the middle of a triangle and drawing the vertices around it. I hope I've worded this properly, as I'm a little confused.
edit: Yes, what I am looking for is the centroid.
Tags: c, opengl, rotation, triangle

Answer 1: There are different "types" of centers of a triangle. Details on: The Centers of a Triangle. A quick method for finding a center of a triangle is to average all your point's coordinates. For example:
GLfloat centerX = (tri[0].x + tri[1].x + tri[2].x) / 3;
GLfloat centerY = (tri[0].y + tri[1].y + tri[2].y) / 3;
When you find the center, you will need to rotate your triangle about the center. To do this, translate so that the center is now at (0, 0). Perform your rotation. Now reverse the translation you performed earlier.
Comment: The link is especially helpful. I somehow missed all this in my googling. Thanks. – ray Feb 8 '09 at 0:51

Answer 2: I guess you mean the centroid of the triangle!? This can be easily computed by 1/3(A + B + C) where A, B and C are the respective points of the triangle. If you have your points, you can simply multiply them by your rotation matrix as usual. Hope I got you right.

Answer 3: By "middle" do you mean "centroid", a.k.a. the center of gravity if it were a 3D object of constant thickness and density? If so, then pick two points, and find the midpoint between them. Then take this midpoint and the third point, and find the point 1/3 of the way between them (closer to the midpoint). That's your centroid. I'm not doing the math for you.

Answer 4: There are several points in a triangle that can be considered to be its center (orthocenter, centroid, etc.). This section of the Wikipedia article on triangles has more information. Just look at the pictures to get a quick overview.
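A short sketch of the translate, rotate, translate-back idea from Answer 1, written here in Python for illustration (the function name and the tuple representation are my own, not from the thread):

import math

def rotate_about_centroid(tri, angle):
    """Rotate a triangle, given as three (x, y) tuples, about its centroid."""
    # centroid: average of the vertex coordinates
    cx = sum(x for x, _ in tri) / 3.0
    cy = sum(y for _, y in tri) / 3.0
    c, s = math.cos(angle), math.sin(angle)
    # translate so the centroid sits at the origin, rotate, translate back
    return [(cx + c * (x - cx) - s * (y - cy),
             cy + s * (x - cx) + c * (y - cy)) for x, y in tri]

print(rotate_about_centroid([(0, 0), (4, 0), (0, 4)], math.pi / 2))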
{"url":"http://stackoverflow.com/questions/524755/finding-center-of-2d-triangle","timestamp":"2014-04-19T20:27:01Z","content_type":null,"content_length":"75980","record_id":"<urn:uuid:555c802f-f1b9-4caf-a898-5adf6f98e12b>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
How many socks make a pair - solution
How many socks make a pair?
Unfortunately I am lazy and disorganised. One symptom of this is that I can never be bothered to fold up my socks in pairs when they come out of the wash. I just chuck them in the drawer. Another symptom is that I always wake up late for work and end up having to rush. Given that I only have white and black socks, how many socks do I have to grab out of my drawer at random to make sure the collection I've grabbed contains a matching pair?
I need to grab three socks. Either they are all the same colour, or two of them are white and one black, or vice versa. In each case I have a matching pair.
This puzzle is part of the Hands-on risk and probability show, an interactive event culminating in Who Wants to be a Mathionaire? workshop sessions, which you can book to perform at your school. The puzzle appeared in the book How many socks make a pair? by Rob Eastaway.
Back to main puzzle page
For some challenging mathematical puzzles, see the NRICH puzzles from this month or last month.
{"url":"http://plus.maths.org/content/how-many-sock-make-pair-solution","timestamp":"2014-04-19T12:03:16Z","content_type":null,"content_length":"23151","record_id":"<urn:uuid:b035829c-e390-4e6a-925e-f6c557bc0e5f>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
Relative Rain
If someone goes from his car to his front door in a rainstorm, will he get more wet, less wet, or equally wet if he runs rather than walks? To develop a quantitative answer, consider a spherical man, and assume he moves to his car in a straight horizontal path with velocity u. The raindrops are falling at an angle such that their velocity is v[z] in the downward direction, v[x] in the horizontal direction (straight into the man's face), and v[y] in the sideways horizontal direction (the man's left to right). The intensity of the rain is such that each cubic foot of air contains r grams of water.
Relative to the rain's frame of reference the raindrops are stationary and the man and his car both have an upward velocity v[z] and a sideways velocity (right to left) of v[y]. In addition, the man has a forward horizontal velocity of u + v[x]. Clearly the amount of rain encountered by the man is equal to r times the volume of space he sweeps out as he moves relative to this stationary mist of raindrops. Since he is spherical with radius R, this swept volume is essentially equal to πR^2 L, where L is the distance traveled (relative to the rain's frame of reference). If D is the horizontal distance to the car (in the frame of the ground), and the man moves straight to his car with velocity u, the time it takes him is D/u. His total velocity relative to the falling rain is

V[t] = sqrt( (u + v[x])^2 + v[y]^2 + v[z]^2 )          (1)

so the distance he moves relative to the rain is L = (D/u)V[t]. Therefore, the amount of rain he encounters in the general case for arbitrary direction of rainfall is

W = r πR^2 (D/u) sqrt( (u + v[x])^2 + v[y]^2 + v[z]^2 )          (2)

We can easily incorporate other assumptions, such as the man having some non-spherical shape. It's just a matter of geometry to compute how much volume he sweeps out relative to the rain's frame of reference. This is done by replacing πR^2 with the horizontal facing cross-sectional area A of the man in terms of the rest frame of the rain. If v[x] = v[y] = 0 then the rain is falling vertically with a total velocity v = v[z]. In this case equation (2) reduces to

W = r A D sqrt( 1 + (v/u)^2 )

which shows that the key parameter is the ratio of the rain's vertical speed to the man's horizontal speed. Of course, if v was zero (which would mean the rain was motionless relative to the ground), then L would always equal D, and W would equal rAD, regardless of how fast the man runs. On the other hand, for any v greater than zero, the amount of rain he encounters will go down as his horizontal velocity u increases. Of course, in this case, if the man is not moving at all (i.e., u = 0) he will get infinitely wet.
Return to MathPages Main Menu
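To put rough numbers on the vertical-rain case (the values here are assumed purely for illustration): take rain falling straight down at v = 9 over a path of length D. Walking at u = 1.5 gives W = rAD sqrt(1 + 36) ≈ 6.1 rAD, while running at u = 6 gives W = rAD sqrt(1 + 2.25) ≈ 1.8 rAD, so running covers the same distance roughly 3.4 times less wet.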
{"url":"http://mathpages.com/home/kmath301/kmath301.htm","timestamp":"2014-04-21T04:42:12Z","content_type":null,"content_length":"8447","record_id":"<urn:uuid:8c54373e-f4ad-4b45-9367-6663e2573d5a>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 308.10019
Autor: Erdös, Paul
Title: Problems and results on Diophantine approximations. II. (In English)
Source: Repartition modulo 1, Actes Colloq. Marseille-Luminy 1974, Lect. Notes Math. 475, 89-99 (1975).
Review: [For the entire collection see Zbl 299.00015.] This paper reports on progress made on problems, mostly in the field of uniform distribution, mentioned in the author's ``Problems and results on diophantine approximations'' [Compositio math. 16, 52-65 (1964; Zbl 131.04803)], and proposes some new questions. As usual Professor Erdös is a fount of interesting and amusing pieces of information as well as the generator of manifold problems of varying levels of interest and difficulty.
Reviewer: A.J.van der Poorten
Classif.: * 11J71 Distribution modulo one
11K06 General theory of distribution modulo 1
11K38 Irregularities of distribution
11-02 Research monographs (number theory)
00A07 Problem books
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
{"url":"http://www.emis.de/classics/Erdos/cit/30810019.htm","timestamp":"2014-04-21T14:43:01Z","content_type":null,"content_length":"3708","record_id":"<urn:uuid:5a5524ee-a274-4354-8c11-692915789cb5>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
Flow of a Pyclaw Simulation
The basic idea of a pyclaw simulation is to construct a Solution object, hand it to a Solver object along with a requested new time, and the solver will take whatever steps are necessary to evolve the solution to the requested time. The bulk of the work in order to run a simulation then is the creation and setup of the appropriate Solution objects and the Solver needed to evolve the solution to the requested time. Here we will assume that you have run import numpy as np before we do any of the tutorial commands.
Creation of a Pyclaw Solution
A Pyclaw Solution is a container for a collection of Grid objects in order to support adaptive mesh refinement and multi-block simulations. The Solution object keeps track of a list of Grid objects and controls the overall input and output of the entire collection of Grid objects. Inside of a Grid object, a set of Dimension objects define the extents and basic grids of the Grid. The process needed to create a Solution object then follows from the bottom up.
>>> from pyclaw.solution import Solution, Grid, Dimension
>>> x = Dimension('x', -1.0, 1.0, 200)
>>> y = Dimension('y', 0.0, 1.0, 100)
>>> x.mthbc_lower = 2
This code creates two dimensions: a dimension x on the interval [-1.0, 1.0] with 200 grid points and a dimension y on the interval [0.0, 1.0] with 100 grid points. We then set the boundary conditions in the x direction to be periodic (note that if you set periodic boundary conditions, the corresponding lower or upper boundary condition method will be set as well). Many of the attributes of a Dimension object are set automatically, so make sure that the values you want are set by default. Please refer to the Dimension class's definition for what the default values are.
Next we have to create a Grid object that will contain our Dimension objects.
>>> grid = Grid([x,y])
>>> grid.meqn = 2
Here we create a grid with the dimensions we created earlier to make a single 2D Grid object and set the number of equations it will represent to 2. As before, many of the attributes of the Grid object are set automatically.
We now need to set the initial condition q and possibly aux to the correct values. There are multiple convenience functions to help with this; here we will use the method zeros_q() to set all the values of q to zero.
>>> sigma = 0.2
>>> omega = np.pi
>>> grid.zeros_q()
>>> grid.q[:,0] = np.cos(omega * grid.x.center)
>>> grid.q[:,1] = np.exp(-grid.x.center**2 / sigma**2)
We have now initialized the first entry of q to a cosine function evaluated at the cell centers and the second entry of q to a gaussian, again evaluated at the grid cell centers.
Many Riemann solvers also require information about the problem we are going to run, which happens to be stored in grid properties, such as the impedance Z and speed of sound c for linear acoustics. We can set these values in the aux_global dictionary in one of two ways. The first way is to set them directly, as in:
>>> grid.aux_global['c'] = 1.0
>>> grid.aux_global['Z'] = 0.25
We can also read in the values from a file, similar to how it was done in the previous version of Clawpack. The Grid class provides a convenience routine to do this called set_aux_global() which expects a path to an appropriately formatted data file. The method set_aux_global() will then open the file, parse its contents, and use the names of the data as dictionary keys.
>>> grid.set_aux_global('./setprob.data')
Last we have to put our Grid object into a Solution object to complete the process.
>>> sol = Solution(grid)
In this case, since we are not using adaptive mesh refinement or a multi-block algorithm, we do not have multiple grids.
We now have a solution ready to be evolved in a Solver object.
Creation of a Pyclaw Solver
A Pyclaw Solver can represent many different types of solvers, so here we will concentrate on a 1D, classic Clawpack type of solver. This solver is located in the clawpack module. First we import the particular solver we want and create it with the default configuration.
>>> from pyclaw.evolve.clawpack import ClawSolver1D
>>> solver = ClawSolver1D()
Next we need to tell the solver which Riemann solver to use from the Riemann solver package. We can always check what Riemann solvers are available to use via the list_riemann_solvers() method. Once we have picked one out, we let the solver set it up for us via:
>>> solver.set_riemann_solver('acoustics')
In this case we have decided to use the linear acoustics Riemann solver. You can also set your own solver by importing the module that contains it and setting the rp attribute directly to the particular function.
>>> import my_rp_module
>>> solver.rp = my_rp_module.my_acoustics_rp
Last we finish up by specifying the specific values for our solver to use.
>>> solver.mthlim = [3,3]
>>> solver.dt = 0.01
>>> solver.cfl_desired = 0.9
In this case, because we are using a Riemann solver that passes back two waves, we must choose two limiters. If we wanted to control the simulation ourselves, we could do so at this point by issuing the following commands:
>>> solver.evolve_to_time(sol,1.0)
This would evolve our solution sol to t = 1.0, but we would then be responsible for all output and other setup considerations.
Creating and Running a Simulation with Controller
The Controller coordinates the output and setup of a run with the same parameters as the classic Clawpack. In order to have it control a run, we need only to create the controller, assign it a solver and initial condition, and call the run() method.
>>> from pyclaw.controller import Controller
>>> claw = Controller()
>>> claw.solver = solver
>>> claw.solutions['n'] = sol
Here we have imported and created the Controller class, and assigned the Solver and Solution. These next commands set up the type of output the controller will produce. The parameters are similar to the ones found in the classic Clawpack claw.data format.
>>> claw.outstyle = 1
>>> claw.nout = 10
>>> claw.tfinal = 1.0
When we are ready to run the simulation, we can call the run() method. It will then run the simulation and produce output at the appropriate time points. If the keep_copy attribute is set to True, the controller will keep a copy of each solution output in the frames array. For instance, you can then immediately plot the solutions output into the frames array, as in the sketch below.
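A minimal sketch of that last step (the names keep_copy, run() and frames are taken from the text above; the rest, including capturing a return value from run(), is my own illustration and not guaranteed by this page):
>>> claw.keep_copy = True
>>> status = claw.run()
>>> last_frame = claw.frames[-1]
Here last_frame would be the Solution saved at the final output time, available in memory without reading any output files back in.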
{"url":"http://depts.washington.edu/clawpack/users-4.6/pyclaw/tutorial.html","timestamp":"2014-04-20T19:41:24Z","content_type":null,"content_length":"27839","record_id":"<urn:uuid:43bcca08-6131-4f71-83fa-49550c16e213>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
Multi-taper spectral derivative - point process times
function [dS,f]=mtdspectrumpt(data,phi,params,t)
Multi-taper spectral derivative - point process times
Note that all times can be in arbitrary units. But the units have to be consistent. So, if E is in secs, win, t have to be in secs, and Fs has to be Hz. If E is in samples, so are win and t, and Fs=1. In case of spike times, the units have to be consistent with the units of data as well.
data (structure array of spike times with dimension channels/trials; also accepts 1d array of spike times) -- required
phi (angle for evaluation of derivative) -- required. e.g. phi=[0,pi/2] giving the time and frequency derivatives
params: structure with fields tapers, pad, Fs, fpass, trialave
tapers : precalculated tapers from dpss or in one of the following forms:
(1) A numeric vector [TW K] where TW is the time-bandwidth product and K is the number of tapers to be used (less than or equal to 2TW-1).
(2) A numeric vector [W T p] where W is the bandwidth, T is the duration of the data and p is an integer such that 2TW-p tapers are used. In this form there is no default i.e. to specify the bandwidth, you have to specify T and p as well. Note that the units of W and T have to be consistent: if W is in Hz, T must be in seconds and vice versa. Note that these units must also be consistent with the units of params.Fs: W can be in Hz if and only if params.Fs is in Hz. The default is to use form 1 with TW=3 and K=5
pad (padding factor for the FFT) - optional (can take values -1,0,1,2...). -1 corresponds to no padding, 0 corresponds to padding to the next highest power of 2 etc. e.g. For N = 500, if PAD = -1, we do not pad; if PAD = 0, we pad the FFT to 512 points, if pad=1, we pad to 1024 points etc. Defaults to 0.
Fs (sampling frequency) - optional. Default 1.
fpass (frequency band to be used in the calculation in the form [fmin fmax]) - optional. Default all frequencies between 0 and Fs/2
trialave (average over trials when 1, don't average when 0) - optional. Default 0
t (time grid over which the tapers are to be calculated: this argument is useful when calling the spectrum calculation routine from a moving window spectrogram calculation routine). If left empty, the spike times are used to define the grid.
Output:
dS (spectral derivative in form phi x frequency x channels/trials if trialave=0; function of phi x frequency if trialave=1)
f (frequencies)
This function calls:
• minmaxsptimes Find the minimum and maximum of the spike times in each channel
• mtfftpt Multi-taper fourier transform for point process given as times
This function is called by:
• mtdspecgrampt Multi-taper derivative time-frequency spectrum - point process times
SOURCE CODE 0001 function [dS,f]=mtdspectrumpt(data,phi,params,t) 0002 % Multi-taper spectral derivative - point process times 0003 % 0004 % Usage: 0005 % 0006 % [dS,f]=mtdspectrumpt(data,phi,params,t) 0007 % Input: 0008 % Note that all times can be in arbitrary units. But the units have to be 0009 % consistent. So, if E is in secs, win, t have to be in secs, and Fs has to 0010 % be Hz. If E is in samples, so are win and t, and Fs=1. In case of spike 0011 % times, the units have to be consistent with the units of data as well. 0012 % data (structure array of spike times with dimension channels/trials; 0013 % also accepts 1d array of spike times) -- required 0014 % phi (angle for evaluation of derivative) -- required. 0015 % e.g.
phi=[0,pi/2] giving the time and frequency derivatives 0016 % params: structure with fields tapers, pad, Fs, fpass, trialave 0017 % -optional 0018 % tapers : precalculated tapers from dpss or in the one of the following 0019 % forms: 0020 % (1) A numeric vector [TW K] where TW is the 0021 % time-bandwidth product and K is the number of 0022 % tapers to be used (less than or equal to 0023 % 2TW-1). 0024 % (2) A numeric vector [W T p] where W is the 0025 % bandwidth, T is the duration of the data and p 0026 % is an integer such that 2TW-p tapers are used. In 0027 % this form there is no default i.e. to specify 0028 % the bandwidth, you have to specify T and p as 0029 % well. Note that the units of W and T have to be 0030 % consistent: if W is in Hz, T must be in seconds 0031 % and vice versa. Note that these units must also 0032 % be consistent with the units of params.Fs: W can 0033 % be in Hz if and only if params.Fs is in Hz. 0034 % The default is to use form 1 with TW=3 and K=5 0035 % 0036 % pad (padding factor for the FFT) - optional (can take values -1,0,1,2...). 0037 % -1 corresponds to no padding, 0 corresponds to padding 0038 % to the next highest power of 2 etc. 0039 % e.g. For N = 500, if PAD = -1, we do not pad; if PAD = 0, we pad the FFT 0040 % to 512 points, if pad=1, we pad to 1024 points etc. 0041 % Defaults to 0. 0042 % Fs (sampling frequency) - optional. Default 1. 0043 % fpass (frequency band to be used in the calculation in the form 0044 % [fmin fmax])- optional. 0045 % Default all frequencies between 0 and Fs/2 0046 % trialave (average over trials when 1, don't average when 0) - 0047 % optional. Default 0 0048 % t (time grid over which the tapers are to be calculated: 0049 % this argument is useful when calling the spectrum 0050 % calculation routine from a moving window spectrogram 0051 % calculation routine). If left empty, the spike times 0052 % are used to define the grid. 0053 % Output: 0054 % dS (spectral derivative in form phi x frequency x channels/trials if trialave=0; 0055 % function of phi x frequency if trialave=1) 0056 % f (frequencies) 0057 if nargin < 2; error('Need data and angle'); end; 0058 if nargin < 3; params=[]; end; 0059 [tapers,pad,Fs,fpass,err,trialave,params]=getparams(params); 0060 clear err params 0061 data=change_row_to_column(data); 0062 dt=1/Fs; % sampling time 0063 if nargin < 4; 0064 [mintime,maxtime]=minmaxsptimes(data); 0065 t=mintime:dt:maxtime+dt; % time grid for prolates 0066 end; 0067 N=length(t); % number of points in grid for dpss 0068 nfft=max(2^(nextpow2(N)+pad),N); % number of points in fft of prolates 0069 [f,findx]=getfgrid(Fs,nfft,fpass); % get frequency grid for evaluation 0070 tapers=dpsschk(tapers,N,Fs); % check tapers 0071 K=size(tapers,2); 0072 J=mtfftpt(data,tapers,nfft,t,f,findx); % mt fft for point process times 0073 A=sqrt(1:K-1); 0074 A=repmat(A,[size(J,1) 1]); 0075 A=repmat(A,[1 1 size(J,3)]); 0076 S=squeeze(mean(J(:,1:K-1,:).*A.*conj(J(:,2:K,:)),2)); 0077 if trialave; S=squeeze(mean(S,2));end; 0078 nphi=length(phi); 0079 for p=1:nphi; 0080 dS(p,:,:)=real(exp(i*phi(p))*S); 0081 end; 0082 dS=squeeze(dS); 0083 dS=change_row_to_column(dS); Generated on Fri 28-Sep-2012 12:34:30 by m2html © 2005
{"url":"http://chronux.org/Documentation/chronux/spectral_analysis/pointtimes/mtdspectrumpt.html","timestamp":"2014-04-16T10:48:18Z","content_type":null,"content_length":"13720","record_id":"<urn:uuid:a80f7f56-162c-49b1-b942-c8f2d7bd0839>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
Reply to comment Thursday, August 17, 2006 Conjecture to theorem to fame to fortune The buzz is building in the mathematical community. It looks more and more likely that Grigori Perelman's proof of the Poincaré conjecture is correct — and that he has solved a problem that has eluded the best mathematical minds for more than a century. When Perelman first posted his proof on the web in 2002 many thought this would be just another failed attempt, but since then it has survived intense mathematical scrutiny and appears to close to being accepted as correct. Now the rumour mill has gone into overdrive. Word on the mathematical street is that he will receive the Fields Medal (thought of as the maths equivalent to the Nobel prize) next week at the International Congress of Mathematics in Madrid. And not only mathematical glory awaits him. The Poincaré Conjecture is one of the seven Millennium Problems named by the Clay Institute, and if Perelman has proved it he is eligible for the $1 million prize. So if the rumours are right, Perelman's fame and fortune are just around the corner. That is all very exciting, but there is something that may get Perelman even more column inches in the press (he has already made the front pages): Perelman has a history of not accepting prizes. It seems that not only may he refuse the $1 million Clay prize, he may refuse the Fields medal too. This would make him the first to refuse the Field's Medal, and the first not only to win the Clay prize, but also the first to turn it down. Regardless of whether Perelman accepts the accolades that may come his way, the biggest news for most mathematicians is whether his work is finally accepted as correct — and whether we can start calling the Poincaré Conjecture, the Poincaré Theorem, after all this time. The world of mathematics waits with baited breath.... • You can read more about the Poincaré Conjecture, the Fields Medal and the Clay Millennium Problems on Plus • There is an excellent article about the history of Perelman's proof in the latest issue of the Notices of the American Mathematical Society • You can read more about the maths rumour mill in an article in the New York Times. posted by Plus @ 1:31 PM 0 Comments:
{"url":"http://plus.maths.org/content/comment/reply/4242","timestamp":"2014-04-17T15:38:53Z","content_type":null,"content_length":"24098","record_id":"<urn:uuid:b30cbaf4-2854-44fa-a248-36ea6b5f2308>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 49
- DISTRIBUTED COMPUTING, 2001
"... In this paper we present two protocols for asynchronous Byzantine Quorum Systems (BQS) built on top of reliable channels---one for self-verifying data and the other for any data. Our protocols tolerate Byzantine failures with fewer servers than existing solutions by eliminating nonessential work in ..."
Cited by 404 (49 self)
In this paper we present two protocols for asynchronous Byzantine Quorum Systems (BQS) built on top of reliable channels---one for self-verifying data and the other for any data. Our protocols tolerate Byzantine failures with fewer servers than existing solutions by eliminating nonessential work in the write protocol and by using read and write quorums of different sizes. Since engineering a reliable network layer on an unreliable network is difficult, two other possibilities must be explored. The first is to strengthen the model by allowing synchronous networks that use time-outs to identify failed links or machines. We consider running synchronous and asynchronous Byzantine Quorum protocols over synchronous networks and conclude that, surprisingly, "self-timing" asynchronous Byzantine protocols may offer significant advantages for many synchronous networks when network time-outs are long. We show how to extend an existing Byzantine Quorum protocol to eliminate its dependency on reliable networking and to handle message loss and retransmission explicitly.
- SIAM Journal on Computing, 1995
"... The paper considers two decision problems on hypergraphs, hypergraph saturation and recognition of the transversal hypergraph, and discusses their significance for several search problems in applied computer science. Hypergraph saturation, i.e., given a hypergraph H, decide if every subset of vertic ..."
Cited by 126 (7 self)
The paper considers two decision problems on hypergraphs, hypergraph saturation and recognition of the transversal hypergraph, and discusses their significance for several search problems in applied computer science. Hypergraph saturation, i.e., given a hypergraph H, decide if every subset of vertices is contained in or contains some edge of H, is shown to be co-NP-complete. A certain subproblem of hypergraph saturation, the saturation of simple hypergraphs, is shown to be computationally equivalent to transversal hypergraph recognition, i.e., given two hypergraphs H 1; H 2, decide if the sets in H 2 are all the minimal transversals of H 1. The complexity of the search problem related to the recognition of the transversal hypergraph, the computation of the transversal hypergraph, is an open problem. This task needs time exponential in the input size, but it is unknown whether an output-polynomial algorithm exists for this problem. For several important subcases, for instance if an upper or lower bound is imposed on the edge size or for acyclic hypergraphs, we present output-polynomial algorithms. Computing or recognizing the minimal transversals of a hypergraph is a frequent problem in practice, which is pointed out by identifying important applications in database theory, Boolean switching theory, logic, and AI, particularly in model-based diagnosis.
- 1998
"... A quorum system is a collection of sets (quorums) every two of which intersect. Quorum systems have been used for many applications in the area of distributed systems, including mutual exclusion, data replication and dissemination of information. Given a strategy to pick quorums, the load L(S) is th ..."
Cited by 89 (12 self)
A quorum system is a collection of sets (quorums) every two of which intersect. Quorum systems have been used for many applications in the area of distributed systems, including mutual exclusion, data replication and dissemination of information. Given a strategy to pick quorums, the load L(S) is the minimal access probability of the busiest element, minimizing over the strategies. The capacity Cap(S) is the highest quorum access rate that S can handle, so Cap(S) = 1/L(S).
- Artificial Intelligence, 1988
"... Terminological reasoning is a mode of reasoning all hybrid knowledge representation systems based on KL-ONE rely on. After a short introduction of what terminological reasoning amounts to, it is proven that a complete inference algorithm for the BACK system would be computationally intractable. Inte ..."
Cited by 61 (11 self)
Terminological reasoning is a mode of reasoning all hybrid knowledge representation systems based on KL-ONE rely on. After a short introduction of what terminological reasoning amounts to, it is proven that a complete inference algorithm for the BACK system would be computationally intractable. Interestingly, this result also applies to the KANDOR system, which had been conjectured to realize complete terminological inferences with a tractable algorithm. More generally, together with an earlier paper of Brachman and Levesque it shows that terminological reasoning is intractable for any system using a non-trivial description language. Finally, consequences of this distressing result are briefly discussed. 1 Introduction The BACK system 1 [13] belongs to the class of hybrid knowledge representation systems based on KL-ONE (cf. the article by Brachman and Schmolze [4]). As in any other system of this family, a frame-based description language (henceforth FDL), which can be viewed as a ...
- Discrete Applied Mathematics, 2002
"... In a clustering problem one has to partition a set of elements into homogeneous and well-separated subsets. From a graph theoretic point of view, a cluster graph is a vertex-disjoint union of cliques. The clustering problem is the task of making fewest changes to the edge set of an input graph so th ..."
Cited by 60 (5 self)
In a clustering problem one has to partition a set of elements into homogeneous and well-separated subsets. From a graph theoretic point of view, a cluster graph is a vertex-disjoint union of cliques. The clustering problem is the task of making fewest changes to the edge set of an input graph so that it becomes a cluster graph. We study the complexity of three variants of the problem. In the Cluster Completion variant edges can only be added. In Cluster Deletion, edges can only be deleted. In Cluster Editing, both edge additions and edge deletions are allowed. We also study these variants when the desired solution must contain a prespecified number of clusters.
- IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 1998
"... We suggest a method of controlling the access to a secure database via quorum systems. A quorum system is a collection of sets (quorums) every two of which have a nonempty intersection. Quorum systems have been used for a number of applications in the area of distributed systems. We propose a separ ..."
Cited by 34 (13 self)
We suggest a method of controlling the access to a secure database via quorum systems. A quorum system is a collection of sets (quorums) every two of which have a nonempty intersection.
Quorum systems have been used for a number of applications in the area of distributed systems. We propose a separation between access servers, which are protected and trustworthy, but may be outdated, and the data servers, which may all be compromised. The main paradigm is that only the servers in a complete quorum can collectively grant (or revoke) access permission. The method we suggest ensures that, after authorization is revoked, a cheating user Alice will not be able to access the data even if many access servers still consider her authorized and even if the complete raw database is available to her. The method has a low overhead in terms of communication and computation. It can also be converted into a distributed system for issuing secure signatures. An important building block in our method is the use of secret sharing schemes that realize the access structures of quorum systems. We provide several efficient constructions of such schemes which may be of interest in their own right.
- ACM TRANSACTIONS ON DATABASE SYSTEMS, 2003
"... ... this article, we analyze several quorum types in order to better understand their behavior in practice. The results obtained challenge many of the assumptions behind quorum based replication. Our evaluation indicates that the conventional read-one/write-all-available approach is the best choice ..."
Cited by 33 (10 self)
... this article, we analyze several quorum types in order to better understand their behavior in practice. The results obtained challenge many of the assumptions behind quorum based replication. Our evaluation indicates that the conventional read-one/write-all-available approach is the best choice for a large range of applications requiring data replication. We believe this is an important result for anybody developing code for computing clusters as the read-one/write-all-available strategy is much simpler to implement and more flexible than quorum-based approaches. In this article, we show that, in addition, it is also the best choice using a number of other selection criteria
- 1998
"... We show that for all large n, every n-uniform hypergraph with at most 0.7·sqrt(n/ln n)·2^n edges can be 2-colored. This makes progress on a problem of Erdős (1963), improving the previous-best bound of n^(1/3 - o(1))·2^n due to Beck (1978). We further generalize this to a ..."
Cited by 32 (0 self)
We show that for all large n, every n-uniform hypergraph with at most 0.7·sqrt(n/ln n)·2^n edges can be 2-colored. This makes progress on a problem of Erdős (1963), improving the previous-best bound of n^(1/3 - o(1))·2^n due to Beck (1978). We further generalize this to a
- 1996
"... A quorum system is a collection of sets (quorums) every two of which intersect. Quorum systems have been used for many applications in the area of distributed systems, including mutual exclusion, data replication and dissemination of information. In this paper we introduce a general class of quorum ..."
Cited by 32 (8 self)
A quorum system is a collection of sets (quorums) every two of which intersect. Quorum systems have been used for many applications in the area of distributed systems, including mutual exclusion, data replication and dissemination of information. In this paper we introduce a general class of quorum systems called Crumbling Walls and study its properties. The elements (processors) of a wall are logically arranged in rows of varying widths.
A quorum in a wall is the union of one full row and a representative from every row below the full row. This class considerably generalizes a number of known quorum system constructions. The best crumbling wall is the CWlog quorum system. It has small quorums, of size O(lg n), and structural simplicity. The CWlog has optimal availability and optimal load among systems with such small quorum size. It manifests its high quality for all universe sizes, so it is a good choice not only for systems with thousands or millions of processors but also for systems with as few as 3 or 5 processors. Moreover, our analysis shows that the availability will increase and the load will decrease at the optimal rates as the system increases in size.
- SICOMP: SIAM Journal on Computing
"... We introduce the notion of covering complexity of a verifier for probabilistically checkable proofs (PCP). Such a verifier is given an input, a claimed theorem, and an oracle, representing a purported proof of the theorem. The verifier is also given a random string and decides whether to accept the ..."
Cited by 31 (3 self)
We introduce the notion of covering complexity of a verifier for probabilistically checkable proofs (PCP). Such a verifier is given an input, a claimed theorem, and an oracle, representing a purported proof of the theorem. The verifier is also given a random string and decides whether to accept the proof or not, based on the given random string. We define the covering complexity of such a verifier, on a given input, to be the minimum number of proofs needed to “satisfy” the verifier on every random string, i.e., on every random string, at least one of the given proofs must be accepted by the verifier. The covering complexity of PCP verifiers offers a promising route to getting stronger inapproximability results for some minimization problems, and in particular, (hyper-) graph coloring problems. We present a PCP verifier for NP statements that queries only four bits and yet has a covering complexity of one for true statements and a super-constant covering complexity for statements not in the language. Moreover, the acceptance predicate of this verifier is a simple Not-all-Equal check on the four bits it reads. This enables us to prove that for any constant c, it is NP-hard to color a 2-colorable 4-uniform hypergraph using just c colors, and also yields a super-constant inapproximability result under a stronger hardness
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=149262","timestamp":"2014-04-16T05:37:57Z","content_type":null,"content_length":"39685","record_id":"<urn:uuid:3f8046c9-1b4a-44f2-9b3c-e06d047d331d>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
Monthly Archives: December 2011
4.3: Newton’s Second Newton’s second: Force is mass times acceleration. First, let’s get a basic understanding. I prefer to state the law this way: F = m(dv/dt). Verbally, that’s “Force equals mass times change in velocity.” Or, more succinctly “Force … Continue reading
Interesting Problems in Section 1.2, Part B NAND and NOR As you might guess, there are lots of POSSIBLE logical operators in Discrete, beyond human-friendly ones like AND, OR, and XOR. As it turns out, they’re also quite useful. First, … Continue reading
Derivatives of Inverse Trigonometric Functions I don’t play a lot of video games (does saying “video games” make you sound old yet?) anymore, but I like to think of learning new mathematical tools as “leveling up.” You’re not just gaining ability … Continue reading
4.2: Newton’s First Law Hooooo doggie! Now we’re getting to some good stuff. You’ve probably heard Newton’s First somewhere as “a body in motion tends to stay in motion, and a body at rest tends to stay at rest.” That … Continue reading
Interesting Problems in Section 1.2 One of the neat things in this book is that they put new information in the problems. This is nice because it really encourages you to go through the problems to find all the cool … Continue reading
Section 3.5: Implicit Differentiation WOOP! Implicit differentiation is actually one of the concepts that got me into the idea of textblogging. It’s something that completely baffled me the first time around because I didn’t really understand what a derivative was. … Continue reading
Chapter 4: Newton’s Laws of Motion We’re getting to more fun stuff. So far, we’ve learned how things behave once in motion. Now, we get to think about why they behave that way. The crucial concept here is called “force.” … Continue reading
I kinda liked the crazy eyes after I got this far, so I didn’t fill them in. Biggest issue is I made his chin way too large. So, it looks like a merger between Oppenheimer and Indiana Jones. That said, … Continue reading
Wooh! We’re getting toward more fun stuff. Constructing New Logical Equivalences If you know how the “basic 4″ arithmetic operations work, you can make pretty much any rule about numbers you like. For example, you can show that -(-(x+1)) = … Continue reading
So, now that you understand the chain rule, you can rederive some of the tricks you’ve already learned. The book gives a very cute proof of the fact that . But screw that, we’re onto bigger game. Let’s prove the … Continue reading
{"url":"http://www.theweinerworks.com/?m=201112","timestamp":"2014-04-17T01:07:11Z","content_type":null,"content_length":"35581","record_id":"<urn:uuid:4371b47d-1767-46e3-8e36-8601d4973a55>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: reverse prediction - confidence interval for x at given y in non [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: reverse prediction - confidence interval for x at given y in nonlinear model From "Joseph Coveney" <jcoveney@bigplanet.com> To "Statalist" <statalist@hsphsun2.harvard.edu> Subject Re: st: reverse prediction - confidence interval for x at given y in nonlinear model Date Mon, 29 Oct 2007 01:27:43 -0700 Maarten Buis wrote: --- Joseph Coveney <jcoveney@bigplanet.com> wrote: > If I'm not mistaken, the four-parameter logistic model Rosy used is > for the *logarithm* of dose and *logarithm* of ED50, and not the dose > and ED50, per se (cf. Maarten's y-axis values). I am studying long term trends in inequality of educational opportunity between children of different socioeconomic background, so I am not very up to date on common parameterizations in biological and medical I hope that you didn't misconstrue my reference to your graph as disparagement, because nothing could be further from the truth: I was referring Rosy to the graph because its y-axis scale values nicely illustrate what might not be obvious to a first-time user of -nl log4:-, namely, that the built-in model expects predictor values on the order of ln(100) and not on the order of 100. Rosy's first post described things in terms of "dose" and "ED50", and I just wanted to rule out this possibility at the outset as a cause of Rosy's problem. > Perhaps the parameterization is numerically stabler, too, in some > sense. When I estimated this model on my generated data I found that it wasn't very stable. However it is not very surprising as it uses 4 parameters to estimate a single curve and 2 of these parameters refer to asymptotes, i.e. parts of the curve at the extreme left and the extreme right where there are few observations. I agree that four-parameter nonlinear models like these aren't particularly easy to work with, and my comment wasn't intending claiming that they are. Rather, I was speculating that the preference given the four-parameter logistic parameterization is because it is stabler in some sense (more tractably fit) than the analogous four-parameter sigmoid Emax parameterization. A limited simulation (below) seems to point in this direction in that (i) with robotically chosen initial values, the sigmoid Emax parameterization settles on silly estimates for two of thirty datasets (Datasets 28 and 30) and the logistic in only one (Dataset 28), (ii) and when the fit is difficult (as in Datasets 5, 11, 14, 28 and 30), the sigmoid Emax parameterization requires twofold to fivefold the number of iterations that the logistic does. (It is interesting that, otherwise, the sigmoid Emax parameterization nearly always requires fewer iterations--and never As an aside, it's not uncommon for a designed dose-response study to have deliberately numerous observations to the extreme left (often, zero dose, itself), and at least up onto the Emax plateau. The point of your graph is well taken, however: ED10s and ED90s are going to be estimated with less confidence regardless of whether they're in a range where extrapolation is Joseph asks: > does the -nl log4:- four-parameter logistic model give rise to biased > estimates of ED50 (ED10, ED90, etc.) and confidence intervals in the > original measurement scale with nonasymptotic sample sizes? That is, > should a pharmacologist ever use -nl log4:- in lieu of the model > shown just above? The two models are equivalent. 
To see this notice first that they have [formulae redacted for brevity] I have no disagreement with your arithmetic, but there is an important difference in the parameterizations, namely: Step 2: Lets call ln(Dose) x, Hill b2, and ln(ED50) b3 I believe that the way numerical methods employed in nonlinear regression software handle this difference has consequences to someone who's interested in estimating ED50. You can see what I mean in the simulation below. Although the four-parameter logistic model gives more accurate estimation of the Hill coefficient, it is poorer than the sigmoid Emax model in estimation of the ED50 in the original measurement scale. The sigmoid Emax parameterization has a median bias (expressed in terms of the ratio) in ED50 of 0.99--near perfect--while the logistic's is only about two-thirds. The interquartile range is tighter with the sigmoid Emax parameterization, too, despite the two misfits, and so an omnibus assessment giving weight to efficiency as well as bias should favor that parameterization, as well, especially if a manual effort is made to choose starting values in misfits in order to assure a good fit. (In the simulation, neither parameterization very reliably estimates ED10 or ED90 with the datasets created in the So, although I don't contend your characterization of the arithmetic relation between the two parameterizations (the predictions are identical to four decimal places in the 28 adequately fit models), I'm not sure that I would really consider them equivalent in at least a couple of important matters (bias in ED50, ability to naturally accommodate zero dose), and that is what I was alluding to in my questions. Joseph Coveney set more off local mask = cond(c(stata_version) >= 10, "YMD", "ymd") set seed `=date("2007-10-26", "`mask'")' tempname memhold Hill log_ED50 tempname sEmx_Emin sEmx_Emax sEmx_ED50 sEmx_Hill sEmx_good tempname sEmx_iterations Good tempname log4_Emin log4_Emax log4_ED50 log4_Hill tempvar Y X tempfile results data save `data', emptyok set obs 13 generate byte dataset = . generate double log_dose = -3 + (_n - 1) * 0.5 generate double dose = 10^log_dose generate double response = . 
local Emin 1 local Emax 11 postfile `memhold' byte dataset /// double (Hill ED50) /// double (sEmx_Emin sEmx_Emax sEmx_ED50 sEmx_Hill) /// int sEmx_iterations byte sEmx_good /// double (log4_Emin log4_Emax log4_ED50 log4_Hill) /// int log4_iterations byte log4_good /// using `results' forvalues i = 1/30 { quietly replace dataset = `i' scalar define `Hill' = 1 + 4 * uniform() scalar define `log_ED50' = invnormal(uniform()) quietly replace response = `Emin' + `Emax' / (1 + /// exp(-`Hill' * (log_dose - `log_ED50'))) + /// invnormal(uniform()) / 2 * Initial values (From -nllog4.ado-) summarize response, meanonly local Emin_init = r(min) / 1.1 local Emax_init = r(max) * 1.1 quietly { generate double `Y' = ln((response - /// `Emin_init') / (`Emax_init' - response)) generate double `X' = ln(10) * log_dose regress `Y' `X' local Emax_init = `Emax_init' - `Emin_init' local Hill_init = _b[`X'] local ln_ED50_init = _b[_cons] / `Hill' local ED50_init = exp(_b[_cons] / `Hill') drop `Y' `X' quietly nl (response = {Emin} + {Emax} * dose^{Hill} / /// (dose^{Hill} + {ED50}^{Hill})), /// initial(Emin `Emin_init' Emax `Emax_init' /// Hill `Hill_init' ED50 `ED50_init') scalar define `sEmx_Emin' = _b[/Emin] scalar define `sEmx_Emax' = _b[/Emax] scalar define `sEmx_ED50' = _b[/ED50] scalar define `sEmx_Hill' = _b[/Hill] matrix define `Good' = !matmissing(e(V)) scalar define `sEmx_good' = `Good'[1,1] scalar define `sEmx_iterations' = e(ic) predict double response_sEmx, yhat quietly nl log4: response log_dose, /// initial(b0 `Emin_init' b1 `Emax_init' /// b2 `Hill_init' b3 `ln_ED50_init') matrix define `Good' = matmissing(e(V)) post `memhold' (`i') (`Hill') (10^(`log_ED50')) /// (`sEmx_Emin') (`sEmx_Emax') (`sEmx_ED50') /// (`sEmx_Hill') (`sEmx_iterations') (`sEmx_good') /// (_b[/b0]) (_b[/b1]) (exp(_b[/b3])) (_b[/b2]) /// (e(ic)) (!`Good'[1,1]) predict double response_log4, yhat append using `data' quietly { save `data', replace keep if dataset == `i' drop response_sEmx response_log4 display in smcl as result "`i'" postclose `memhold' use `results', clear erase `results' foreach centile in 10 90 { generate double ED`centile' = /// (`centile' / (100 - `centile'))^(1/Hill) * ED50 generate double sEmx_ED`centile' = /// (`centile' / (100 - `centile'))^(1/sEmx_Hill) * sEmx_ED50 generate double log4_ED`centile' = /// (`centile' / (100 - `centile'))^(1/log4_Hill) * log4_ED50 foreach parameter in ED10 ED50 ED90 { generate double sEmx_`parameter'_bias = /// sEmx_`parameter' / `parameter' generate double log4_`parameter'_bias = /// log4_`parameter' / `parameter' format *ED?0 %6.2f format *ED?0_bias %6.2f format *Hill %4.2f format sEmx_Emin sEmx_Emax log4_Emin log4_Emax %3.0f log using log.smcl, smcl list dataset ED50 sEmx_ED50 log4_ED50, /// noobs separator(0) abbreviate(15) list dataset Hill sEmx_Hill log4_Hill, /// noobs separator(0) abbreviate(15) list dataset sEmx_Emin log4_Emin sEmx_Emax log4_Emax, noobs /// separator(0) abbreviate(15) list dataset sEmx_iterations log4_iterations sEmx_good log4_good, /// noobs separator(0) abbreviate(15) list dataset ED10 sEmx_ED10 log4_ED10, /// noobs separator(0) abbreviate(15) list dataset sEmx_ED10_bias log4_ED10_bias, /// noobs separator(0) abbreviate(15) list dataset ED90 sEmx_ED90 log4_ED90, /// noobs separator(0) abbreviate(15) list dataset sEmx_ED90_bias log4_ED90_bias, /// noobs separator(0) abbreviate(15) centile *_bias, centile(25 50 75) log close save Results use `data' erase `data' save Data * For searches and help try: * 
http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2007-10/msg01089.html","timestamp":"2014-04-19T02:08:45Z","content_type":null,"content_length":"15173","record_id":"<urn:uuid:c0aab5f3-3e58-44f4-b85e-746fd2f913d6>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
Recursive descent and left recursion
14 Jan 1997 20:06:31 -0500
From comp.compilers
From: mfinney@lynchburg.net
Newsgroups: comp.compilers
Date: 14 Jan 1997 20:06:31 -0500
Organization: Compilers Central
Keywords: parse, LL(1)
I have noticed the occasional post here, as well as assertions in various texts, that left recursion is not usable with recursive descent (and LL parsers in general). However, I have been using recursive descent with left-recursive grammars for more than a decade. All it takes is the trivially obvious check to allow the left recursion. Take, for example...
(1) <exp> := <exp> + <term>
(2) <exp> := <term>
When expanding (1), at the first term I simply check to see if the current token in the input stream is the same as the last time that the rewriting rule was expanded. If it is, then the parse has not advanced and you have the infinite loop situation. I simply fail the expansion and select an alternate rewriting rule for expansion.
This approach only requires the storage of a token # per left-recursive rewriting rule and a single check of that token # against the current token # (which can just be the line and column of the first character in the token since that is usually maintained for error reporting). It is very fast and does not significantly slow down the parsing. It does require backtracking in the sense that a different rewriting rule has to be selected, but there is no work associated with the backtracking since it only occurs at the start of the left-recursive rewriting rule.
Has anyone else used this technique? Does anyone know if there are any "hidden" problems with it? Could it be applied to LR or LALR parsers?
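A minimal runnable illustration of the check being described, for the grammar above (this is my own sketch in Python; all names are invented, and it is not the poster's code):

def parse_term(tokens, i):
    # <term> := NUMBER
    if i < len(tokens) and isinstance(tokens[i], int):
        return tokens[i], i + 1
    return None

def parse_exp(tokens, i, last_entry=None):
    # the check: if we re-enter <exp> at the same token position as the
    # previous expansion, the parse has not advanced, so fail this rule
    if last_entry == i:
        return None
    # rule (1): <exp> "+" <term>
    left = parse_exp(tokens, i, last_entry=i)
    if left is not None:
        lval, j = left
        if j < len(tokens) and tokens[j] == "+":
            right = parse_term(tokens, j + 1)
            if right is not None:
                rval, k = right
                return lval + rval, k
    # rule (2): <term>
    return parse_term(tokens, i)

print(parse_exp([1, "+", 2], 0))   # prints (1, 1)

One possible "hidden" problem shows up in the printed result: the guard does stop the infinite descent, but on first entry the left-recursive alternative can never succeed (its leading <exp> always trips the check), so this naive version consumes only the leading term. Treatments of left recursion in top-down parsing typically deal with this by iterating or re-growing the result after the non-recursive alternative succeeds.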
{"url":"http://compilers.iecc.com/comparch/article/97-01-099","timestamp":"2014-04-19T12:15:53Z","content_type":null,"content_length":"6074","record_id":"<urn:uuid:b2ea14a3-c56a-4967-9f9e-1279dfed1562>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
Welcome to the Learning Resources Portal. This Portal is designed to provide you with access to quality learning, teaching and professional development resources. Find information on the quantity, quality and location of these resources. Search, download and use the resources on this site, and locate them in print and hard copy format stored at the Region, Division, District or Cluster Lead school. Create your own resources using any of the over 5000 photos, illustrations, video and audio files in the Media Gallery. Access Open Education Resources and online learning programs, including professional development and alternative delivery mode programs. Share ideas on learning and teaching resources and provide feedback.
{"url":"http://lrmds.deped.gov.ph/","timestamp":"2014-04-19T22:04:56Z","content_type":null,"content_length":"11573","record_id":"<urn:uuid:04b1ce8f-dd82-4340-ae83-0238135c31ec>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
Problem with a simple program
06-15-2006 #1
C noobie, Join Date: Jun 2006
Alright, I just started teaching myself how to program C a couple days ago, so don't make fun of me too much!

#include <stdio.h>
//purpous is to have 4 numbers input and the average displayed
int main(void)
{
    int a = 1; //counter
    double b; //numbers to be input
    double c = 0.0; //total of all b
    char d; //suffix to number
    while(a < 5)
    {
        printf("Input your %d%c number: ",a,d==1?'st':(a==2?'nd':(a==3?'rd':'th'))); //input numbers
        scanf("%fl",&b); //input numbers to be averaged
        c = c + b; //add all the numbers up
        ++a; //tell program to go to next number
        if (a<5) continue; //if not done inputting numbers, loops
    }
    printf("The average of the 4 numbers is %fl.",c/4); //displays average of the sum of all 4 numbers inputted
}

My problem is the program goes goofy when displaying the final averaged number. Does it have to do with the "Double"? I'm getting a result of 0.0000001. Also it displays "t" at the end of each number when displaying which number to input.
Last edited by Salgat; 06-15-2006 at 04:31 PM.

Since when are 'st', 'nd', 'rd' and 'th' single characters? = is an assignment. == is an equality test.
Hope is the first step on the road to disappointment.

I know about = and ==, not sure about how that applies here, but thanks about the char note. Also, any reason why it's not reading (or outputting?) the correct double variables? That's my main issue.

avarage.c:14:57: warning: multi-character character constant
avarage.c:14:67: warning: multi-character character constant
avarage.c:14:77: warning: multi-character character constant
avarage.c:14:82: warning: multi-character character constant
avarage.c: In function `main':
avarage.c:14: warning: overflow in implicit constant conversion
avarage.c:15: warning: float format, double arg (arg 2)
avarage.c:24:2: warning: no newline at end of file
I can't grok your line 14. What the heck do you want to do?

If you're talking about the first output, I'm trying to have it go 1st, then next time around 2nd, then next time 3rd, etc. To be honest, I'm not 100% sure you can put the ternary ?: inside another one. But like I said, my main focus is the output being incorrect for the average value.

I know about = and ==, not sure about how that applies here
printf("Input your %d%c number: ",a,d=1?'st':(a=2?'nd':(a=3?'rd':'th'))); //input numbers
Hope is the first step on the road to disappointment.

Oops, hehe thanks Quzah, hoorah! But about the double variable, anyone know?

For God's sake, take a book like K&R. You are guessing the language, and that is never gonna work. I will give you a basic hint of what you need.

#include <stdio.h>
#include <stdlib.h>
int main(void){
    double a,b,c,avarage;
    printf("Give me 3 numbers separated by spaces: ");
    scanf("%lf %lf %lf",&a,&b,&c);
    avarage = (a+b+c)/3;
    printf("The avarage is %.2lf\n",avarage);
    return 0;
}

Thanks! I was trying to apply a few things I was learning and reduce the number of variables used; unfortunately I got a few things wrong. I know I may not be doing it all perfectly by the book, and I know C is a precise language, but I was experimenting with some new operators, and although that didn't work, I'm still wondering why it's not outputting correctly.
Here's a revision for debugging purposes.

#include <stdio.h>
/* purpose is to have 4 numbers input and the average displayed */
int main(void)
{
    double b;        // numbers to be input
    double c = 0.0;  // total of all b
    int a = 1;       // counter
    while(a < 5)
    {
        printf("\nInput your %d number: ", a);      // input numbers
        scanf("%fl", &b);                           // input numbers to be averaged
        printf("The number you input is: %fl", b);  // show your input
        c = c + b;                                  // add all the numbers up
        ++a;                                        // tell program to go to next number
        if (a<5) continue;                          // if not done inputting numbers, loop
    }
    printf("The average of the 4 numbers is %.2fl.", c/4); // displays average of the sum of all 4 numbers input
}

I just want to know: why isn't it reading my double variable "b" correctly?
Last edited by Salgat; 06-15-2006 at 04:48 PM.

scanf("%fl",&b); //input numbers to be averaged
Try spelling %lf correctly. And for printf, just use %f.
7. It is easier to write an incorrect program than understand a correct one.
40. There are two ways to write error-free programs; only the third one works.*

Oh ya! Haha, thanks. (Ended up finishing it, thanks guys!)

#include <stdio.h>
// Title: Min, Max, and Average
// Purpose: to have a given amount of numbers input and the average displayed
// Creator: Austin Salgat
int main(void)
{
    int a = 1;           // counter
    int d;               // amount of #s averaged
    double b;            // current # input
    double c = 0.0;      // total of all b
    double e = 0.0;      // max #
    double f = 999.999;  // min #
    printf("**************************************\n*** Find Min, Max, and Average ***\n*** Created by Austin Salgat ***\n**************************************");
    printf("\n\nHow many numbers do you wish to put into this average?: ");
    scanf("%d",&d);      // input how many numbers to average
    while(a <= d)
    {
        printf("\nInput number %d: ",a);  // input numbers
        scanf("%lf",&b);                  // input numbers to be averaged
        e = (b>e) ? b : e;                // input new max
        f = (b<f) ? b : f;                // input new min
        c = c + b;                        // add all the numbers up
        printf("\nThe average of the %d number%c is %.2lf.",a,a==1?' ':'s',c/a); // displays running average
        printf("\nThe max is %.2lf and the min is %.2lf.\n",e,f);                // displays min and max
        ++a;                              // tell program to go to next number
    }
    printf("\n\n\nTo exit this program, simply type a letter and press enter!: ");
    return 0;
}
Last edited by Salgat; 06-15-2006 at 07:43 PM.
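A footnote on the ordinal suffixes that got dropped along the way: 'st' and friends are multi-character constants (hence the compiler warnings above), so the suffix needs to be a string printed with %s, not a char printed with %c. A minimal sketch, with a helper name of our own choosing:

#include <stdio.h>

/* Return the English ordinal suffix for a positive integer.
   11, 12, and 13 take "th" even though they end in 1, 2, 3. */
static const char *ordinal_suffix(int n)
{
    if (n % 100 >= 11 && n % 100 <= 13)
        return "th";
    switch (n % 10) {
        case 1:  return "st";
        case 2:  return "nd";
        case 3:  return "rd";
        default: return "th";
    }
}

int main(void)
{
    int a;
    for (a = 1; a <= 4; ++a)
        printf("Input your %d%s number: \n", a, ordinal_suffix(a));
    return 0;
}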
{"url":"http://cboard.cprogramming.com/c-programming/80129-problem-simple-program.html","timestamp":"2014-04-18T19:24:08Z","content_type":null,"content_length":"82230","record_id":"<urn:uuid:2d498363-1ab8-4fce-a223-e9154ee556f6>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
Write a shell script to find the factorial of a given number, Programming Languages

W.A.S.S to find the factorial of a given number.

# W.A.S.S to find the factorial of a given number.
echo -e "enter number:\c"   # \c suppresses the trailing newline; -e enables
                            # backslash-escaped characters
read number
i=1
f=1
while test $i -le $number
do
    f=`expr $f \* $i`       # the expression is written in backquotes because it
                            # is evaluated at run time
    i=`expr $i + 1`
done                        # 'done' closes the while loop
echo "factorial of number is:$f"

Sample run:
enter number:4
factorial of number is:24

Posted Date: 9/26/2012 4:24:40 AM | Location : United States
{"url":"http://www.expertsmind.com/questions/write-a-shell-script-to-find-the-factorial-of-a-given-number-30114176.aspx","timestamp":"2014-04-16T22:01:26Z","content_type":null,"content_length":"29648","record_id":"<urn:uuid:79c687eb-f576-4e21-941a-d1850c3c24f7>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
Mission Viejo Algebra 2 Tutor

...We can find math at work everywhere, from daily life to the stars. Everyone needs to know math and algebra to be able to solve everyday problems. Algebra 1 gives the skills needed to move on; it is the foundation even for Calculus.
11 Subjects: including algebra 2, calculus, statistics, geometry

I have a doctorate in Nuclear Physics and many years of engineering experience designing electronics for industry. For the last two years, I have been teaching at the community college level as an adjunct professor. I have volunteered as a math tutor at a local high school and have tutored privately.
8 Subjects: including algebra 2, calculus, physics, geometry

...I am an expert in programming-language concepts like object-oriented programming (OOP) and all high-level concepts and syntax. Syntax and languages change, but the concepts in every language are the same.
19 Subjects: including algebra 2, calculus, geometry, trigonometry

...The GED Mathematics Test covers students' knowledge of arithmetic, algebra, and geometry. Many questions are posed as word problems, so students need to understand the concepts behind these statements. The test is divided into halves, the first half allowing the use of a calculator and the other not.
18 Subjects: including algebra 2, geometry, GRE, ASVAB

Hello, my name is Yu Leo Lu, and I graduated from UC Berkeley with a Bachelor of Arts degree, majoring in Mathematics. My overall GPA is 3.75, and my major GPA is 3.80. Subjects that I am experienced in tutoring include Algebra, Geometry, and Calculus.
11 Subjects: including algebra 2, calculus, geometry, precalculus
{"url":"http://www.purplemath.com/Mission_Viejo_algebra_2_tutors.php","timestamp":"2014-04-16T19:27:44Z","content_type":null,"content_length":"24101","record_id":"<urn:uuid:cd15b8ff-60ec-4855-b461-2b47348e80fa>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
12.7.4 Quicksort or Samplesort Algorithm

The classic quicksort algorithm is a divide-and-conquer sorting method ([Hoare:62a], [Knuth:73a] pp. 118-23). As such, it would seem to be amenable to a concurrent implementation, and with a slight modification (actually an improvement of the standard algorithm) this turns out to be the case. The standard algorithm begins by picking some item from the list and using this as the splitting key. A loop is entered which takes the splitting key and finds the point in the list where this item will ultimately end up once the sort is completed. This is the first splitting point. While this is being done, all items in the list which are less than the splitting key are placed on the low side of the splitting point, and all higher items are placed on the high side. This completes the first divide. The list has now been broken into two independent lists, each of which still needs to be sorted.

The essential idea of the concurrent (hypercube) quicksort is the same. The first splitting key is chosen (a global step to be described below) and then the entire list is split, in parallel, between two halves of the hypercube. All items higher than the splitting key are sent in one direction in the hypercube, and all items less are sent the other way. The procedure is then called recursively, splitting each of the subcubes' lists further. As in Shellsort, the ring-based labelling of the hypercube is used to define global order. Once d splits occur, there remain no further interprocessor splits to do, and the algorithm continues by switching to the internal quicksort mentioned earlier. This is illustrated in Figure 12.33.

Figure 12.33: An Illustration of the Parallel Quicksort

So far, we have concentrated on standard quicksort. For quicksort to work well, even on sequential machines, it is essential that the splitting points land somewhere near the median of the list. If this isn't true, quicksort behaves poorly, the usual example being the quadratic time that standard quicksort takes on almost-sorted lists. To counteract this, it is a good idea to choose the splitting keys with some care so as to make evenhanded splits of the list.

Figure 12.34: Efficiency Data for the Parallel Quicksort described in the text. The curves are labelled as in Figure 12.30 and plotted against the logarithm of the number of items to be sorted.

This becomes much more important on the concurrent computer. In this case, if the splits are done haphazardly, not only will an excessive number of operations be necessary, but large load imbalances will also occur. Therefore, in the concurrent algorithm, the splitting keys are chosen with some care. One reasonable way to do this is to randomly sample a subset of the entire list (giving an estimate of the true distribution of the list) and then pick splitting keys based upon this sample. To save time, all of the splitting keys are chosen at once from a single sample; the resulting algorithm is known as samplesort and consists of the following steps:

• each processor picks a sample of l items at random;
• sort the combined sample;
• choose splitting keys from the sorted sample as if it were the entire list;
• perform the splits in the d directions of the hypercube;
• each processor quicksorts its sublist.

Times and efficiencies for the parallel quicksort algorithm are shown in Table 12.4. The efficiencies are also plotted in Figure 12.34. In some cases, the parallel quicksort outperforms the already high performance of the parallel shellsort discussed earlier.
There are two main sources of inefficiency in this algorithm. The first is a result of the time wasted sorting the sample. The second is due to remaining load imbalance in the splitting phases. By varying the sample size l, we achieve a trade-off between these two sources of inefficiency. Chapter 18 of [Fox:88a] contains more details regarding the choice of l and other ways to compute splitting points.

Before closing, it may be noted that there exists another way of thinking about the parallel quicksort/samplesort algorithm. It can be regarded as a bucketsort, in which each processor of the hypercube comprises one bucket. In the splitting phase, one attempts to determine reasonable limits for the buckets, so that each ends up with roughly the same number of items.

The sorting work began as a collaboration between Steve Otto and summer students Ed Felten and Scott Karlin. Ed Felten invented the parallel Shellsort; Felten and Otto developed the parallel quicksort/samplesort.
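To make the key-selection step concrete, here is a minimal sequential sketch in C; the function and variable names are ours, and the hypercube communication, the d splitting phases, and the final per-node quicksorts are deliberately omitted:

#include <stdlib.h>

/* qsort comparator for doubles */
static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Choose nkeys splitting keys for list[0..n-1] by sorting a random
 * sample of sample_sz items and taking evenly spaced elements from it.
 * This models only the key-selection step of samplesort. */
void choose_splitters(const double *list, size_t n,
                      double *keys, size_t nkeys, size_t sample_sz)
{
    double *sample = malloc(sample_sz * sizeof *sample);
    size_t i;
    if (sample == NULL)
        return;                         /* out of memory: no keys chosen */
    for (i = 0; i < sample_sz; ++i)
        sample[i] = list[rand() % n];   /* random sample of the list */
    qsort(sample, sample_sz, sizeof *sample, cmp_double);
    for (i = 0; i < nkeys; ++i)         /* evenly spaced order statistics */
        keys[i] = sample[(i + 1) * sample_sz / (nkeys + 1)];
    free(sample);
}

For a d-dimensional hypercube one would take nkeys = 2^d - 1 (one key for each cut of the recursion); each splitting phase then routes items above or below the relevant key to the partner node in that dimension.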
{"url":"http://www.netlib.org/utk/lsi/pcwLSI/text/node302.html","timestamp":"2014-04-18T13:09:54Z","content_type":null,"content_length":"8344","record_id":"<urn:uuid:cc7ff6b8-6796-4373-b5c2-e470c6fd6601>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
Bounded Operators

Suppose that T takes bounded sets to bounded sets. The set of all vectors of the form [tex]\frac{f}{\|f\|}[/tex] with [itex]f\neq 0[/itex] is bounded (every such vector has norm 1), so its image under T must be bounded: there exists a C such that

[tex]\left\|T\frac{f}{\|f\|}\right\|\leq C[/tex]

This implies [itex]\|Tf\|\leq C\|f\|[/itex] for all [itex]f\neq 0[/itex]; for f = 0 the inequality holds trivially, since linearity forces T0 = 0.

To prove the converse, suppose instead that there exists a C such that [itex]\|Tf\|\leq C\|f\|[/itex] for all f, and let B be a bounded set with bound M (i.e. [itex]\|g\|\leq M[/itex] for all g in B). We need to show that there exists an upper bound for the set of all [itex]\|Tg\|[/itex] with g in B. For any g in B,

[tex]\|Tg\|\leq C\|g\|\leq CM[/tex]

so CM is such a bound.

If you're also wondering why an operator is continuous if and only if it's bounded, the Wikipedia page titled "bounded operator" has a very nice proof of that.
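For the record, the bounded-implies-continuous direction of that equivalence is one line: by linearity,

[tex]\|Tf - Tg\| = \|T(f-g)\| \leq C\|f-g\|[/tex]

so T is Lipschitz and hence (uniformly) continuous; in particular it is continuous at 0, which is all one needs for the converse direction as well.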
{"url":"http://www.physicsforums.com/showthread.php?t=377054","timestamp":"2014-04-20T16:01:15Z","content_type":null,"content_length":"29985","record_id":"<urn:uuid:3840b349-8967-49ec-a801-0db5fc6b76a3>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Prayer 7
Dear Lord,
We thank you a hundredfold for the love and care that you have given us. May we, in return for your good works, multiply it with love and respect. Add more faith. Subtract the unworldly behavior and evil works, but divide your given talents among others. And to sum it all, may we be united as one in your family. In this we pray, Amen.

A mathematical prayer
Dear God,
We thank you for the blessings you have given to us. May we, through your Holy Spirit, add love to the world, subtract evils from our lives, multiply the good news of your son, and divide your gifts and share them with others, so we can serve and please you, Lord. Most Blessed Jesus, have mercy on us.

Mathematical Prayer 5
Our heavenly father, we thank you 4 everything that you have given into our lives. May you watch us 24/7. Forgive our sins more than 70×7 and subtract our wrong deeds. Please help us solve the problems we encounter each day. These things we pray, in your sweet name and in your infinite love for us... Amen.

Mathematical Prayer 4
Dear God,
Thank you for making me understand the multiplication table; I know you want me to learn something in math. May you help me in studying sets of operations. We know that you are the genius in our lives. May you help us with the divisibility rules...

Mathematical Prayer 3
Dear God,
Thank you for increasing good attitudes here in the world... may you subtract the bad things we have done to you... and equal us like the good saints you give... Amen.

Mathematical Prayer 2
Dear God,
May we, through your blessings, add purity to the world, subtract evil from our lives, multiply the good news of your son, and divide your gifts and share them with others. Amen.

Math Prayer
Lord, I pray for this person as well as others who want to do well in their math by praying to You. Help me, Lord, as I help others with their prayers for math.

Pray something like this, but from your heart. Remember, you are really simply asking Jesus for help. You are talking to Him when you pray.

Lord, I know that in myself I am not anything good. It is only when I am forgiven and covered with Your mercy that I am set free and able to do well. In You I ask that You help me with my math. I need Your grace to do well. I am not worthy of Your help, but I want to do well, and I know that if You help, I can do very well in my math. I thank You that You hear my prayers even though I am a sinner. I ask that You take away my sins and forgive me for my faults, and I will give You all the glory, Lord, for enabling me to do well in math. I will tell other people that You have helped me. To You be all the glory, honor, and praise.

Lord, I know that Your name is the revealed name of God. I know that You are God, and that is why I pray to You. For You alone are able to save me from my sins and to help me do well in my math and every other area of my life. In Your perfect name I pray, Lord. Amen.
{"url":"http://www.mathematicalprayer.blogspot.com/","timestamp":"2014-04-18T23:17:05Z","content_type":null,"content_length":"121544","record_id":"<urn:uuid:1abc11c5-a4da-479e-aba8-34acd7478dd3>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
I can't get time, with formulas, help please.
September 27th 2010, 04:54 PM #1
Jan 2009
Ontario, Canada

I can't get time, with formulas, help please.

Hey there,
A golf ball rolls off a horizontal cliff with an initial speed Vo of 11.4 m/s. The ball falls a vertical distance of 15.5 m into a lake below. It asks: a) how much time does the ball spend in the air? b) what is the speed v of the ball just before it strikes the water?

I solved (b): v = 20.82 m/s. I used Vo = -11.4, d = -15.5 m, a = -9.8 m/s^2. I tried every way to get the time of 1.78 s using all the formulas I know, and for some reason I keep coming up short on the time. What am I doing wrong? Just plugging in the numbers gets me nothing.

In the vertical direction (taking downwards as positive):
a = 9.8 m/s^2
u = 0 m/s
d = 15.5 m
t = ?
Use the appropriate uniform straight-line motion formula.
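Carrying the hint through: vertically, $d = \frac{1}{2}at^2$, so

$t = \sqrt{\frac{2d}{a}} = \sqrt{\frac{2(15.5)}{9.8}} \approx 1.78 \text{ s}$

The vertical speed at impact is $v_y = at \approx 17.4$ m/s; the horizontal component stays at 11.4 m/s, so

$v = \sqrt{11.4^2 + 17.4^2} \approx 20.8 \text{ m/s}$

which matches the value quoted for part (b). The original attempt stalled because $v_0 = -11.4$ m/s was treated as a vertical initial velocity; it is horizontal, so the vertical initial velocity $u$ is 0.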
{"url":"http://mathhelpforum.com/math-topics/157636-i-can-t-get-time-formulas-help-please.html","timestamp":"2014-04-18T07:34:26Z","content_type":null,"content_length":"34788","record_id":"<urn:uuid:ec4bdccf-6b52-4bc6-bc8c-e109b8c7d77d>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

<Simultaneous equation> y = 2 - x, x(x + y) = 5 - 3y^2. A solution would be helpful please; no idea how these things are solved.
• one year ago
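One route: from the first equation, $x + y = 2$, so the second becomes $2x = 5 - 3(2-x)^2$, which simplifies to $3x^2 - 10x + 7 = 0$, i.e. $(3x - 7)(x - 1) = 0$. Hence $x = 1,\ y = 1$ or $x = \frac{7}{3},\ y = -\frac{1}{3}$; both pairs check in $x(x+y) = 5 - 3y^2$.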
{"url":"http://openstudy.com/updates/50ed804ce4b07cd2b64937fb","timestamp":"2014-04-19T10:09:12Z","content_type":null,"content_length":"84414","record_id":"<urn:uuid:030f8383-dd17-49c4-8fc1-cc549fcc0ebd>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
Millennium Prize Problems

On April 25, 2000, the Clay Mathematics Institute announced that it would give $1,000,000 to anyone who could solve any of these seven outstanding problems in mathematics:

• the Birch and Swinnerton-Dyer Conjecture
• the Hodge Conjecture
• the Navier-Stokes Equations (existence and smoothness)
• P versus NP
• the Poincaré Conjecture
• the Riemann Hypothesis
• Yang-Mills Theory (existence and mass gap)

All of these problems are classic questions that have resisted solution for many years, and the Clay Institute wishes to stimulate interest in them. There are strict rules on what constitutes a valid solution.

David Hilbert did a similar thing a century earlier in 1900, when he proposed his twenty-three so-called 'Hilbert Problems' that required solution. To date twenty have been resolved. It is interesting to note that one of the three left is the Riemann Hypothesis, which features above and is now considered to be the most important outstanding problem in pure mathematics.

[Editor's note, 8/31/2006: Fixed link to Poincaré Conjecture and added <p> tags.]
{"url":"http://everything2.com/title/Millennium+Prize+Problems","timestamp":"2014-04-18T22:02:15Z","content_type":null,"content_length":"21451","record_id":"<urn:uuid:143e1a6c-5c8c-4fd4-b51c-9cd1ec8c7018>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
Alpine, NJ Calculus Tutor
Find an Alpine, NJ Calculus Tutor

...In math, everything you learn builds on top of what you learned in previous years, and without that strong foundation, students can fall behind. When teachers explain something in class, they assume that the students have a certain knowledge of math based on what they learned in previous years...
21 Subjects: including calculus, geometry, statistics, accounting

...ISEE stands for Independent School Entrance Examination. It is offered by ERB, the Educational Records Bureau. The ISEE exam covers Verbal (vocabulary), Reading Comprehension, Quantitative Reasoning (the ability to think with and use math in unique situations), and finally Mathematics.
15 Subjects: including calculus, reading, algebra 2, algebra 1

...I have students work out solutions by hand, as well as graphically using a graphing calculator, for a complete understanding. I have over five years of experience in tutoring calculus at both the high school and college levels. I have tutored students for entire semesters at NVCC, Western CT, Quinnipiac, and Fordham.
22 Subjects: including calculus, physics, geometry, statistics

...Francis College and Berkeley College; overall I have been teaching for 15 years. For the past 5 years I have also been tutoring Elementary Math, Algebra, Precalculus, and Calculus students, amongst others, at Hunter College's Dolciani Math Learning tutoring center. I have a Master of Arts and a Bachelor of Science in Pure Mathematics from City College of CUNY, where I also taught for 2 years.
21 Subjects: including calculus, physics, statistics, geometry

...I have been playing softball for the last 12 years. I played varsity softball in high school as a first baseman and left fielder. I led the league in doubles my senior year and ranked among the top in batting average.
19 Subjects: including calculus, geometry, biology, algebra 1
{"url":"http://www.purplemath.com/Alpine_NJ_Calculus_tutors.php","timestamp":"2014-04-21T12:52:45Z","content_type":null,"content_length":"23996","record_id":"<urn:uuid:e5f08afa-f5eb-422e-9994-993b00dbdb8d>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
7 Comparison with Other Studies

CONCORDANCE WITH THE UTAH STUDY OF THYROID DISEASE FROM NEVADA TEST SITE FALLOUT

Section IX of the Draft Final Report describes in some detail whether the HTDS study confirms the results of the Utah study of exposure to 131-I from the NTS (Kerber and others, 1993). With a generally negative dose-response relationship, especially for thyroid carcinoma, the HTDS cannot be regarded as confirming the Utah findings of increased risk of thyroid neoplasia. However, another aspect of the comparison of the two studies is not fully treated: the degree to which the results of the HTDS directly contradict those of the Utah study. One approach to answering that question is to assess the degree to which confidence intervals of risk estimates from the two studies overlap. Even if one study's results are positive and another is negative, it does not mean that they necessarily are irreconcilable. If the positive study is barely positive (p ≈ 0.05) and the negative study has wide confidence intervals, there might be no fundamental disagreement. For the HTDS analysis of thyroid carcinoma, the estimate and the confidence interval for the linear slope term for thyroid carcinoma are not reported, because the maximum-likelihood estimates failed to converge. We can, however, work backward from other information in the report to estimate the standard error of the slope term. The attained power to detect a
CONCORDANCE WITH STUDIES OF EXTERNAL RAI)IATION TO THE THYROID It may be an oversimplification to say that the HTDS, because it found no significant dose-response relationship for any disease end point, is in direct contradiction with the cohort studies of external radiation exposure and risk of thyroid cancer. Of the five cohort studies of external radiation and thyroid cancer that were analyzed by Ron and others (1995), one yielded estimated dose-response relationships considerably stronger than the others. OCR for page 111 Comparison with Other Studies ~3 To combine the results of the five cohort studies, Ron and others used a random-effects model. The dose-response relationship was allowed to vary from study to study, and the average dose-response relationship for a hypothetical population of studies was estimated. The average estimate was equivalent to an ERR of 7.7 times the age-specific baseline per Gy with a confidence interval of 2. I-28.7 times the baseline. As described above, the HTDS is probably consistent with an upper ERR of about T.4 per Gy, which is not statistically compatible with the estimate for external radiation. It is not known whether the two estimates could be statistically compatible if uncertainties in dosimetry were factored into the confidence interval for the HTDS. COMPARISON WITH CHERNOBYL STUDIES The effectiveness of 13~} in causing thyroid cancer has been shown by the Chernobyl experience. The first increases, reported in ~ 992, of thyroid cancer attributed to the accident were challenged as possibly the result of intensive screening. More recently (Astakhova and others, 1998), however, a case-control study in Belarus has found highly significant differences between cases and controls in estimated ]3~} dose to the thyroid, even when controls with similar presenting complaints or screening circumstances were selected. But the durations of exposures were shorter for Chernobyl than for the Hanford downwinders, and the doses were higher, so the dose-rate issue still is unresolved with respect to the epidemiologic data. Furthermore, the dose reconstruction for the study in Belarus was based on actual measurements of ground deposition of ]3~{ and cesium-137, a data bank of 1986 thyroid-radiation measurements, and interviews and questionnaires. Therefore, doses were probably better estimated for individuals in the Belarus study than in the HTDS.
{"url":"http://www.nap.edu/openbook.php?record_id=9738&page=111","timestamp":"2014-04-23T08:12:17Z","content_type":null,"content_length":"41548","record_id":"<urn:uuid:08cbbada-f39c-40fc-8cdf-984d5f0cdafe>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
Dependent and independent data Often, when reading a statistics book, you will see some variation on the phrase “independent data“. Many models assume that the data are independent. Sometimes this is abbreviated as part of the acronym iid which means independent and identically distributed. You may get confused between this and the case of independent and dependent variables, which I discussed here. But the two ideas are quite different. When we say data are independent, we mean that the data for different subjects do not depend on each other. When we say a variable is independent we mean that it does not depend on another variable for the same subject. For instance, if we are trying to predict the weight of adult humans, we might gather a sample of adults, and collect various bits of information – height, weight, sex, age, and perhaps many others. Weight is a dependent variable because it depends on the other variables – taller people tend to be heavier; men tend to be heavier than women, and so on. But the data are independent if the weight and other variables for one person aren’t related to those for another. Sometimes, though, the data are dependent . One example is if we measured some variables on a bunch of children, but chose kids who were in particular classes in particular schools: Kids in a class are likely to be more similar to each other than kids in different classes. Another example is when we measure the same person (or other subject) more than once. If I give a bunch of students a midterm and a final, their final grade is likely to depend on their midterm grade, not just because of a general relationship between the two grades, but because it is the same person. Author Bio I specialize in helping graduate students and researchers in psychology, education, economics and the social sciences with all aspects of statistical analysis. Many new and relatively uncommon statistical techniques are available, and these may widen the field of hypotheses you can investigate. Graphical techniques are often misapplied, but, done correctly, they can summarize a great deal of information in a single figure. I can help with writing papers, writing grant applications, and doing analysis for grants and research. Specialties: Regression, logistic regression, cluster analysis, statistical graphics, quantile regression. You can click here to email or reach me via phone at 917-488-7176. Or if you want you can follow me on Facebook, Twitter, or LinkedIn. Comments: 5 Posted by rich 10 Nov 2011 at 7:54 PM So for example, I am doing a study at school and I am looking at information from a leadership survey and an employee engagement survey. I want to know how the employee ranks their supervisor on the leadership survey and separately how they rank themselves on the employee engagement survey. Then I want to see if there is a relationship (correlation). I wanted to use Kendall’s Tau because I understand it can report the strength of the relationship and handle independent ordinal data. Am I thinking about this correctly???? Posted by Peter Flom 11 Nov 2011 at 7:39 AM Sounds right to me! Posted by Jirphan 27 Dec 2012 at 1:24 AM Many thanks ^_^ Posted by Gayane Kira 27 Oct 2013 at 8:52 AM Hello, can you explain independent data in example with plants. For examlple you have 2 fields, each field devided into 3 sections 2 control and 1 gmotreetment. from each section we ll take random samples from different spots in different time. 
In my opinion the data points within section is dependent because they are living in the same conditions, but if i will take 10 spots in 1 section, and from each spot ll take 1 individual and measure it. If these data will be independent between these 10 spots or not??? and i understand that all these 3 sections have depenent data, because they are growing it the same conditions, but if i ll separate them can these data be independent? Posted by Peter Flom 27 Oct 2013 at 9:07 AM I am not an expert on plants, by any means, but it seems to me that data from a single section will be independent if considered alone – then it would be as if the other sections and fields did not exist. But if you are considering multiple sections, then data within one section is dependent. Leave a Comment! Cancel reply
{"url":"http://www.statisticalanalysisconsulting.com/dependent-and-independent-data/","timestamp":"2014-04-18T13:07:39Z","content_type":null,"content_length":"94323","record_id":"<urn:uuid:bd5bff87-00b1-4bcd-a5e9-b234f1fcbeba>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by statistics
Total # Posts: 15

Your sample is normally distributed with a mean age of 57. The standard deviation in this sample is 5 years. The 2000 census shows your sample is older than a randomly selected national sample. The median age in your sample is?

Clanton Company is financed 75 percent by equity and 25 percent by debt. If the firm expects to earn $30 million in net income next year and retain 40% of it, how large can the capital budget be before common stock must be sold?

Recall that very satisfied customers give the XYZ-Box video game system a rating that is at least 42. Suppose that the manufacturer of the XYZ-Box wishes to use the random sample of 65 satisfaction ratings to provide evidence supporting the claim that the mean compo...

I have no idea how to do this: http://tinyurl.com/yfe98sd

r^2, the correlation squared, measures how well using y-hat does compared with just using y-bar, or the other way around... I don't remember what r^2 measures. I have what it means on my calculator, but I still don't understand it. My teacher told me it's basically something like how m...

A density curve shaped like an inverted letter "V": the first segment goes from the point (0, 0.6) to the point (0.5, 1.4); the second segment goes from (0.5, 1.4) to (1, 0.6). Find the percent of the observations that lie below 0.3. I have no idea how to do this. If you could tel...

Is the standard deviation a measure of the center or of the distribution? My teacher told me center. I read the article and I've been led to believe it's a measure of the distribution, not the center... if it is a measure of the center, how is it? I think it's a measure of t...

How is finding the standard deviation a measurement of a data set's center? My teacher said it measured the center...

I'm not following this problem at all. I understand the terms such as median and quartile, but I just don't understand stocks, so can you lend me a hand? The rate of return on a stock is its change in price plus any dividends paid, usually measured in percent of the s...

Using only the numbers 0 through 10 (whole numbers, with repeats allowed): pick four numbers that would have the smallest standard deviation (I said four zeros); pick four numbers that would have the largest standard deviation (I said 0 0 0 10). Is there more than one correct answe...

OK, I was given a set of data and asked to create a graph of my choice to represent the data. Then I'm asked: does the shape of the distribution allow the use of the mean and the standard deviation to describe it? How do the mean and the standard deviation affect the shape o...

When the mean is slightly larger than the median, does that mean the data is slightly skewed to the left, to the right, or neither?

Basically I'm at the beginning of the textbook doing summer work. It's talking about mean and stuff. I was asked to find the mean of a set of data and given 14 different numbers. The question then goes on and tells me a fifteenth number is added to the set of data with the valu...
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=statistics","timestamp":"2014-04-18T22:55:59Z","content_type":null,"content_length":"10372","record_id":"<urn:uuid:ffbd626d-0ebf-4ab1-af63-8484c4bef5fa>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00128-ip-10-147-4-33.ec2.internal.warc.gz"}