content
stringlengths
86
994k
meta
stringlengths
288
619
Science-8th Grade Posted by Joy on Monday, September 29, 2008 at 11:16pm. 5. Volume is an object is how much ________ it takes up. 6. When we measure the temperature of an object in the metric system, we use _______ 7. Scientist say that measuring mass of an object means measuring how much matter something has. What is matter? 8. You are a space traveler and you go visit another galaxy, where the force of gravity is 10 times the force of gravity on Earth. -If you weigh 150 lbs, how much will you weigh in this other galaxy? -How will your mass be affected? 9. The base unit for measuring mass in the metric sytem is ________ 10. What does the prefix "kilo" mean ? ________ 11. What does the prefic "milli" mean? ________ I have touble with the Metric System and measuring. • Science-8th Grade - DrBob222, Monday, September 29, 2008 at 11:40pm You have posted multiple questions and received help. Your root problem you already know by acknowledging that you have trouble with the metric system. Providing more answers for you will not help you understand it. Here is the basic metric system. unit (grams, liter, second, meter, etc) To convert from one unit to another, just move the decimal point to the left or to the right. Move to the left if going UP the table and move the decimal to the right if going DOWN the table. For example, convert 22.0 mm to cm. Start with your pencil on mm, we go up the table so move the decimal one place to the left to change mm to cm. The answer is 2.2 cm. Change 22 kg to grams. Place your pencil on kg, we go down to table to get to grams so move the decimal to the right 1 place to change to hg, 2 places to change to dkg, and a third place to change to grams; therefore, 22 kg = 22000 grams. I hope this helps get you started on the right path. Memorize the prefixes or practice so much that you dont need to memorize them. Here is a web site that lists all of the prefixes as approved by the IUPAC. Related Questions science - what is the difference between mass and volume? Mass is how much an ... Physical Science - One more worksheet with questions that I need to have checked... Physical Science - One more worksheet with questions that I need to have checked... Physical Science [repost] - One more worksheet with questions that I need to ... Science-8th Grade - In metric, we use_______to measure volume. is the answer ... science - Name a tool that is used to measure the circumference of an object and... calc - The rate that an object cools is directly proportional to the difference ... Physics Fluids - When a crown of mass 14.7 kg is submerged in water, an accurate... Physics - Recently there has been technological advances that allow for 3-D ... science - Calculate Density of Object. Mass of object 34g (picture of 2 G.C's ...
{"url":"http://www.jiskha.com/display.cgi?id=1222744599","timestamp":"2014-04-17T20:18:51Z","content_type":null,"content_length":"10257","record_id":"<urn:uuid:e60a3f23-bf82-4e88-a07f-a24e65ed4732>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
Parameter Recovery 1. Introduction Providing students with good study recommendations begins with understanding what they already know. At Knewton, we’re combining existing work on psychometrics and large scale machine learning to perform proficiency estimation more accurately and at a larger scale than ever before. As Alejandro Companioni [discussed in a previous post], Item Response Theory (IRT) is the core model around which our proficiency estimation is built. IRT proposes that there are latent features of students (proficiency) and questions (difficult, discrimination, and guessability) which explain the accuracy of responses we see on assessments. However, most IRT analyses assume that students bring the same level of proficiency to every question asked of them. This might be fine for a 2-hour standardized test, but we are interested in helping students over an entire course. If we’re doing our job, students should be improving their proficiency as time goes on! This summer, I made our IRT models more sensitive to gains (and losses) in student proficiencies over time. I’ll leave the details of that model to another post, but as a teaser, I’ll show you our first figure. In this figure, the In this post, I will discuss three methods which we used to evalute the performance of our algorithms and discuss their relative strengths and weaknesses. We’ll focus on the mean-squared error, the log-likelihood, and the Kullback-Liebler divergence. A visual representation of our temporal IRT model 2. Evaluating results To tackle the problems I faced, I explored statistical inference algorithms on probabilistic graphical models. To judge my results, I simulated classes of students answering sets of questions, and saw how accurately my algorithms recovered the parameters of the students and questions. One method of quantifying the accuracy of estimates is the mean-squared error (MSE). It takes the mean of the square of the differences between the estimated parameters and their actual values. In symbols, if While the MSE is a good indicator for accuracy in many places, it has problems when models have multiple solutions at different scales. Let’s see why this is through an example. Suppose we have a class of students answering a set of questions. Suppose the questions are actually very hard and that the students happen to be very proficient at the material. Just looking at this set of data, however, we have no way of knowing that these students are actually very proficient. We can only assume the most likely scenario–the students have average proficiences and are answering the questions competently. From the data alone, we can only hope to discern the values of item parameters relative to the proficiency of students and vice-versa. We cannot hope to know their values absolutely. So there are many equally valid interpretations at different scales of the same data. Because the MSE looks at the difference between the two parameters, it will penalize parameters that are scaled differently, even though those parameters might be an equally valid solution given the data! Let’s look at Figure 2. We see that the ordering is basically preserved, which we could measure with Pearson’s Recovered student proficiencies plotted against actual student proficiencies from a simulation. The log-likelihood gives us a more meaningful method of measuring accuracy. The log-likelihood tells us the log-probability of our recovered parameters given our data. At instantiation, our algorithms should have low log-likelihoods–the parameters that we guess first are random and don’t fit our data well. Our algorithms should iterate toward higher log-likelihoods, hopefully converging at the set of parameters with the highest log-likelihood. This is the philosophy behind the Expectation-Maximization algorithm. But the log-likelihood is susceptible to tail events. For instance, if a question is in reality extremely hard but, through random chance, a few students with average proficiency answer the question correctly, then maximizing the log-likelihood will lead to marking these very hard questions as easier than they actually are. This, of course, could be solved with more data from large numbers of extremely proficient students, but this data is often hard to come by. Instead, we introduce another way of measuring model fit: the Kullback-Leibler (KL) divergence. Suppose we have probability distributions In our case, a datum is one student’s response to one question and the If ratio of the likelihoods, it is less susceptible to the influence of tail events than the log-likelihood alone. The log-likelihood and KL divergence both use likelihoods to measure fit, which means that they only care about fit of the parameters to the data, and not the exact convergence to the original. So they often prove to be reliable measures of fit to judge our algorithms on. For instance, even though the MSE of our recovered parameters is large, Figure 2 shows us that our algorithm has likely converged (since the log-likelihood and KL divergences are not changing much) and give us a reliable measure of model fit. The log-likelihood and KL divergence of our recovered parameters through a run of our algorithm. What's this? You're reading N choose K, the Knewton tech blog. We're crafting the Knewton Adaptive Learning Platform that uses data from millions of students to continuously personalize the presentation of educational content according to learners' needs. Sound interesting? We're hiring. Hi Andersen, great post. The images in this blog post are not showing up though. For example, the image at http://www.knewton.com/tech/blog/2012/11/parameter-recovery/student_1-3/, specifically http://s.knewton.com/tech/files/2012/11/student_12.png, is giving a 404.
{"url":"http://www.knewton.com/tech/blog/2012/11/parameter-recovery/","timestamp":"2014-04-21T07:06:28Z","content_type":null,"content_length":"34197","record_id":"<urn:uuid:5f35f92b-7a81-4ebb-90b9-904485cbd7c4>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
Discrete Convex Analysis Kazuo Murota Discrete Convex Analysis is a novel paradigm for discrete optimization that combines the ideas in continuous optimization (convex analysis) and combinatorial optimization (matroid/submodular function theory) to establish a unified theoretical framework for nonlinear discrete optimization. The study of this theory is expanding with the development of efficient algorithms and applications to a number of diverse disciplines like matrix theory, operations research, and economics. This self-contained book is designed to provide a novel insight into optimization on discrete structures and should reveal unexpected links among different disciplines. It is the first and only English-language monograph on the theory and applications of discrete convex analysis. Discrete Convex Analysis provides the information that professionals in optimization will need to "catch up" with this new theoretical development. It also presents an unexpected connection between matroid theory and mathematical economics and expounds a deeper connection between matrices and matroids than most standard textbooks. We haven't found any reviews in the usual places. Convex Functions with Combinatorial Structures 39 Convex Analysis Linear Programming and Integrality 77 Conjugacy and Duality 205 Network Flows 245 Algorithms 281 Application to Mathematical Economics 323 Application to Systems Analysis by Mixed Matrices 347 Bibliography 363 Index 379 Popular passages DD Siljak, Large-Scale Dynamic Systems: Stability and Structure (North-Holland, New York, 1978). References from web pages Algorithms in Discrete Convex Analysis (researchindex) This is a survey of algorithmic results in the theory of discrete convex analysis for integer valued functions defined on integer lattice points. citeseer.ist.psu.edu/ 209185.html Science Links Japan | Discrete Optimization Algorithms based on ... Abstract;The leader of this research group proposed a theoretical system called "discrete convex analysis" in recent years as an attempt for viewing ... sciencelinks.jp/ j-east/ article/ 200402/ 000020040203A0734672.php Discrete convex analysis A theory of "discrete convex analysis" is developed for integer-valued functions ... "discrete convex analysis". To be specific, we give a Lagrange duality ... www.springerlink.com/ index/ G1Q1U3571145151X.pdf Introduction to the Central Concepts “Discrete Convex Analysis” aims at establishing a new theoretical ... The motive for “Discrete Convex Analysis” is explained in general terms of opti- ... www.misojiro.t.u-tokyo.ac.jp/ ~murota/ mybooks/ DCAsiamaimhistory.pdf DROPS - Document This talk describes fundamental properties of M-convex and L-convex functions that play the central roles in discrete convex analysis. ... drops.dagstuhl.de/ opus/ frontdoor.php?source_opus=216 Discrete convex analysis Satoru Fujishige , Akihisa Tamura, A Two-Sided Discrete-Concave Market with Possibly Bounded Side Payments: An Approach by Discrete Convex Analysis, ... portal.acm.org/ citation.cfm?id=303269& dl=GUIDE& coll=GUIDE& CFID=15151515& CFTOKEN=6184618 Discrete Convex Analysis - Cambridge University Press Discrete Convex Analysis, Kazuo Murota, 9780898715408, Cambridge University Press. www.cambridge.org/ us/ catalogue/ catalogue.asp?isbn=0898715407 Satoru Iwata Discrete Convex Analysis. A Capacity Scaling Algorithm for M-Convex Submodular Flow (with S. Moriguchi, K. Murota), Math. Programming, 103 (2005), 181-202. ... www.kurims.kyoto-u.ac.jp/ ~iwata/ Selected publications of K. Murota K. Murota (2003): Discrete Convex Analysis. SIAM Monographs on Discrete ... K. Murota (1998): Discrete convex analysis, Mathematical Programming, 83, ... www.misojiro.t.u-tokyo.ac.jp/ ~murota/ publist.html Discrete convex analysis , by Kazuo Murota, SIAM Monographs on ... The author writes in the preface: “Discrete Convex Analysis is aimed at estab- ... (the name “discrete convex analysis” was, apparently, coined by the ... www.ams.org/ bull/ 2004-41-03/ S0273-0979-04-01015-8/ S0273-0979-04-01015-8.pdf Bibliographic information
{"url":"http://books.google.co.uk/books?id=RjSEs-6dkMoC","timestamp":"2014-04-21T12:33:56Z","content_type":null,"content_length":"130790","record_id":"<urn:uuid:6f87052f-2ef7-4fd2-bb6b-dba84eec7ae7>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
Institute for Mathematics and its Applications (IMA) September 1, 2012 - June 30, 2013 The theory of infinite dimensional dynamical systems is a vibrant field of mathematical development and has become central to the study of complex physical, biological, and societal processes. The most immediate examples of a theoretical nature are found in the interplay between invariant structures and the qualitative behavior of solutions to evolutionary partial differential equations (PDEs) of parabolic or hyperbolic types. Insight has also been gained from the theory of infinite dimensional dynamics into the solution structure for nonlinear elliptic equations, including those arising in geometry. Other important and general topics, besides PDEs and dynamics in abstract spaces, addressed by the theory of infinite dimensional dynamical systems, include delay differential equations, lattice dynamics, and evolutionary systems with spatially nonlocal interaction. Please explore the tabs below to get a fuller description of the program -- the organizing committee, their vision for the year, and the workshops being planned. The IMA will select up to 8 postdoctoral fellows to participate in the program. Annual Program Workshops and Tutorials 9/17-21/12 Tutorial: Infinite-Dimensional Dynamical Systems and Random Dynamical Systems 9/24-28/12 Workshop: Dynamical Systems in Studies of Partial Differential Equations 10/22-26/12 Workshop: Random Dynamical Systems 12/3-7/12 Workshop: Lattice and Nonlocal Dynamical Systems and Applications 1/14-18/13 Workshop: Theory and Applications of Stochastic PDEs 3/11-15/13 Workshop: Stochastic Modeling of the Oceans and Atmosphere 5/13-17/13 Workshop: Stochastic Modeling of Biological Processes 6/3-7/13 Special Thematic Workshop: Joint US-Japan Conference for Young Researchers on Interactions among Localized Patterns in Dissipative Systems
{"url":"http://www.ima.umn.edu/2012-2013/","timestamp":"2014-04-17T12:33:00Z","content_type":null,"content_length":"50720","record_id":"<urn:uuid:97cea9a6-8de8-407a-a340-5d4480dae45d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability for estimate/poisson distribution May 6th 2008, 10:46 AM Probability for estimate/poisson distribution I've searched the forum for topic but I can't seem find one with the same problem, so here goes: I have Xi given as the number of an occurrence pr quarter. X1...Xn is independent and Xi~Poisson(a) I have maximized a for the likelihoodfunction to be [LaTeX ERROR: Convert failed] as an estimation of a. Now to my question: For a=1.25 and n=10, what is the probability that a* equals respectively 1.1 and 1.25? My paper is to be done in a couple of hours, so help would be very much appreciated.. (Smile) Edit: Even though the couple of hours has gone by, I actually still need som help, so if anyone have a hint or two..? May 6th 2008, 01:34 PM I've searched the forum for topic but I can't seem find one with the same problem, so here goes: I have Xi given as the number of an occurrence pr quarter. X1...Xn is independent and Xi~Poisson(a) I have maximized a for the likelihoodfunction to be $a*=\frac{1}{n}\sum{Xi}$ as an estimation of a. Now to my question: For a=1.25 and n=10, what is the probability that a* equals respectively 1.1 and 1.25? My paper is to be done in a couple of hours, so help would be very much appreciated.. (Smile) Edit: Even though the couple of hours has gone by, I actually still need som help, so if anyone have a hint or two..? The sum of n Poisson iid RV's with parameter a, is a Poission RV with parameter na. May 6th 2008, 02:09 PM Thanks for replying Correct me if I'm wrong, but does that mean that I should do it like this: May 6th 2008, 07:51 PM
{"url":"http://mathhelpforum.com/advanced-statistics/37405-probability-estimate-poisson-distribution-print.html","timestamp":"2014-04-17T02:28:29Z","content_type":null,"content_length":"7272","record_id":"<urn:uuid:aa094398-7cdd-435a-b77c-e2d5baaeaa8b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
Maple Shade Science Tutor ...I am flexible with meeting times. The environment we choose to study should be conducive to learning. I look forward to tutoring you.I am currently a nursing instructor for three schools. 9 Subjects: including sociology, grammar, writing, English ...My teaching experience includes two college courses: freshman-level biology; and senior-level advanced nutrition. My tutoring approach varies widely depending on the situation...in other words, I don't take a 'one-size fits all' approach. I have a Bachelor of Science degree in Nutritional Sciences. 8 Subjects: including nutrition, physiology, biology, grammar ...I am hard-working, patient, and able to connect well with students of all abilities and ages. I truly enjoy helping students achieve their goals. Thanks for visiting my page, and best of luck! Scored 780/800 on SAT Math in high school and 800/800 on January 26, 2013 test. 19 Subjects: including ACT Science, calculus, statistics, geometry ...With a physics and engineering background, I encounter math at and above this level every day. With my experience, I walk the student through what a concept in math is about, how to execute it, and how to tackle a problem when it comes time for a test. I am a tutor with a primary focus in math and science who has worked with students at this level of math for multiple years. 9 Subjects: including physics, calculus, geometry, algebra 1 ...I hold a master's degree in special education. I am an adjunct professor of special education at Rutger's University. I have done extensive research into autism and how to work with students with this condition. 43 Subjects: including sociology, psychology, ACT Science, English
{"url":"http://www.purplemath.com/Maple_Shade_Science_tutors.php","timestamp":"2014-04-20T06:27:34Z","content_type":null,"content_length":"23964","record_id":"<urn:uuid:489cf2d6-da72-47d1-9a40-25556ebe5016>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
volume: figuring out integration values. March 21st 2009, 08:39 AM #1 Feb 2009 volume: figuring out integration values. If i have the graph of the following functions y=ln7x,y=0 and x=5. If I want to find the volume for x=-1, how would I figure out the values [a,b] of the integral. I know that this is a washer formula, but that is irrelevant. What if I wanted to find the volume for y=5? What are the integration values now? Ok well I checked my notes and for x=-1 the integral goes from 0 to ln35. I just do not see how that is. March 21st 2009, 08:57 AM #2 Feb 2009
{"url":"http://mathhelpforum.com/calculus/79774-volume-figuring-out-integration-values.html","timestamp":"2014-04-20T02:18:57Z","content_type":null,"content_length":"31479","record_id":"<urn:uuid:87de9516-e30a-483d-b095-7181e3d515a3>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: SF: Progress, double factorization proven Replies: 58 Last Post: Jun 28, 2006 3:42 PM Messages: [ Previous | Next ] Ben Young Re: SF: Progress, double factorization proven Posted: Jun 27, 2006 10:44 AM Posts: 12 From: Carnegie you are a professor, right? I remember looking you up on the internet. would you call one of your students flotsam to their face? what do you think you are on this field? you are Mellon University nothing. Your method is crap, get over it. Things I am working on that are so minor math will not change because of it are more important than your inefficient factoring method. Registered: 6/15/ Date Subject Author 6/24/06 SF: Progress, double factorization proven JAMES HARRIS 6/24/06 Re: SF: Progress, double factorization proven Proginoskes 6/24/06 Re: Progress, double factorization proven Bob Marlow 6/24/06 Re: SF: Progress, double factorization proven JAMES HARRIS 6/24/06 Re: SF: Progress, double factorization proven Nat Silver 6/24/06 Re: SF: Progress, double factorization proven Bob Marlow 6/25/06 Re: SF: Progress, double factorization proven dudalb 6/25/06 Re: SF: Progress, double factorization proven Proginoskes 6/25/06 Re: JSH: SF: Progress, double factorization proven Tim Peters 6/25/06 Re: SF: Progress, double factorization proven rossum 6/25/06 Re: SF: Progress, double factorization proven rossum 6/25/06 Re: SF: Progress, double factorization proven JAMES HARRIS 6/25/06 Re: SF: Progress, double factorization proven The Last Danish Pastry 6/25/06 Re: SF: Progress, double factorization proven Ben Young 6/25/06 Re: SF: Progress, double factorization proven JAMES HARRIS 6/25/06 Re: SF: Progress, double factorization proven mensanator 6/27/06 Re: SF: Progress, double factorization proven Ben Young 6/27/06 Re: SF: Progress, double factorization proven Rick Decker 6/25/06 Re: SF: Progress, double factorization proven Proginoskes 6/25/06 Re: SF: Progress, double factorization proven Tim Peters 6/26/06 Re: SF: Progress, double factorization proven JAMES HARRIS 6/26/06 Re: SF: Progress, double factorization proven mensanator 6/26/06 Re: SF: Progress, double factorization proven Tim Peters 6/26/06 Re: SF: Progress, double factorization proven JAMES HARRIS 6/26/06 Re: SF: Progress, double factorization proven Tim Peters 6/26/06 Re: SF: Progress, double factorization proven JAMES HARRIS 6/26/06 Re: SF: Progress, double factorization proven Tim Peters 6/26/06 Re: SF: Progress, double factorization proven JAMES HARRIS 6/27/06 Re: SF: Progress, double factorization proven Tim Peters 6/27/06 Re: SF: Progress, double factorization proven JAMES HARRIS 6/27/06 Re: SF: Progress, double factorization proven David Moran 6/27/06 Re: JSH: SF: Progress, double factorization proven Tim Peters 6/27/06 Re: SF: Progress, double factorization proven Proginoskes 6/26/06 Re: SF: Progress, double factorization proven JAMES HARRIS 6/27/06 Re: SF: Progress, double factorization proven Proginoskes 6/26/06 Re: SF: Progress, double factorization proven Andrew Poelstra 6/26/06 Re: SF: Progress, double factorization proven David Moran 6/27/06 Re: SF: Progress, double factorization proven Richard Tobin 6/26/06 Re: SF: Progress, double factorization proven David Moran 6/26/06 Re: SF: Progress, double factorization proven ink 6/26/06 Re: SF: Progress, double factorization proven Tim Peters 6/26/06 Re: SF: Progress, double factorization proven ink 6/26/06 Re: SF: Progress, double factorization proven rossum 6/26/06 Re: SF: Progress, double factorization proven Proginoskes 6/26/06 Re: SF: Progress, double factorization proven rossum 6/26/06 Re: SF: Progress, double factorization proven Tim Peters 6/26/06 Re: SF: Progress, double factorization proven JAMES HARRIS 6/26/06 Re: SF: Progress, double factorization proven Justin 6/27/06 Re: SF: Progress, double factorization proven JAMES HARRIS 6/27/06 Re: SF: Progress, double factorization proven Rick Decker 6/27/06 Re: SF: Progress, double factorization proven Tim Peters 6/28/06 Re: SF: Progress, double factorization proven Proginoskes 6/26/06 Re: SF: Progress, double factorization proven avigadl 6/25/06 Re: SF: Progress, double factorization proven Gib Bogle 6/25/06 Re: SF: Progress, double factorization proven Euler 6/25/06 Re: SF: Progress, double factorization proven Gib Bogle 6/26/06 Re: SF: Progress, double factorization proven rossum 6/26/06 Re: Progress, double factorization proven The Last Danish Pastry 6/26/06 Re: Progress, double factorization proven Ryugyong Hotel
{"url":"http://mathforum.org/kb/thread.jspa?threadID=1403532&messageID=4853858","timestamp":"2014-04-21T10:12:35Z","content_type":null,"content_length":"85661","record_id":"<urn:uuid:d9f53b5c-f9b4-4525-8285-c4bfcc826672>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
Transparencies from talks by Dave Witte Morris What is a Coxeter Group? October 2013 at University of Lethbridge PDF file Introduction to Bruhat-Tits buildings October 2013 at University of Chicago PDF file Arithmetic subgroups of SL(n,R) August 2013 in Jeonju, South Korea (KAIST Geometric Topology Fair) May 2013 at University of Chicago PDF file Strictly convex norms on amenable groups November 2013 at University of Utah March 2013 at South Padre Island, Texas (Geometric Groups on the Gulf Coast) April 2012 at University of Lethbridge March 2012 at University of Chicago March 2012 at Indiana University March 2012 at Purdue University Lecture 1: Strictly convex norms on abstract groups (PDF file) Lecture 2: Strictly convex norms on amenable groups (PDF file) Condensed version (PDF file) SL(n,Q) has no volume-preserving actions on (n-1)-dimensional compact manifolds August 2013 at Seoul National University May 2013 at the University of Chicago March 2013 at Rice University July 2012 at Oberwolfach PDF file Introduction to vertex-transitive graphs of prime-power order March 2013 at the University of Lethbridge PDF file Hamiltonian paths in solvable Cayley digraphs October 2012 at the University of Lethbridge July 2011 in Regina June 2011 in Bled (7th Slovenian International Conference on Graph Theory) PDF file Some arithmetic groups that do not act on the circle July 2012 at Park City Mathematics Institute Lecture 1: Introduction (PDF file) Lecture 2: Proof using bounded generation (PDF file) Lecture 3: What is an amenable group? (PDF file) Lecture 4: Introduction to bounded cohomology (PDF file) On interactions of amenability with left orderings February 2012 at workshop in Banff PDF file Does every Cayley graph have a hamiltonian cycle? March 2011 at University of Western Australia PDF file another talk on a similar subject (and others can be found below): Survey of hamiltonian cycles in Cayley graphs May 2010 at AMS meeting in Newark PDF file When do subsets of {0,1}^G ^x ^G contain recurrent points? December 2010 in Vancouver (Canadian Mathematical Society) PDF file Why arithmetic groups are lattices June 2010 at the University of Chicago PDF file Survey of invariant orders on arithmetic groups September 2011 in AMS Sectional meeting at Cornell June 2011 at Oberwolfach June 2010 at the University of Chicago October 2009 in AMS Sectional meeting at Penn State PDF file Introduction to Ratner's Theorems on Unipotent Flows June 2010 at Ohio State University March 2010 at the University of British Columbia April 2009 at the University of Virgina PDF file other talks on a similar subject: Ratner's Theorems June 2010 at Ohio State U (two 1.5-hour lectures) PDF file Introduction to arithmetic groups January 2010 at KAIST in Daejeon, Korea (three 1.5-hour lectures) PDF file What is the Congruence Subgroup Property? September 2009 at Carleton-Ottawa Algebra Day PDF file other talks on a similar subject: The Congruence Subgroup Property and bounded generation May 2008 at the University of Chicago A lattice with no torsion-free subgroup of finite index (after P.Deligne) June 2009 informal discussion at the University of Chicago PDF file Two lectures on bounded cohomology June 2009 at the University of Chicago PDF file Locally symmetric subspaces of locally symmetric spaces (joint work with Vladimir Chernousov and Lucy Lifschitz) January 2009 at AMS meeting in Washington, DC September 2008 at Oberwolfach April 2008 at Indiana University February 2008 at the University of Chicago PDF file of slides from recent talk PDF file of lecture notes from older talk other talks on the same subject: Minimal isotropic simple Q-groups of higher real rank February 2008 at the University of Virginia PDF file Almost-minimal lattices of higher rank May 2006 at the University of Chicago PDF file Using left-invariant orders to study actions on 1-manifolds June 2008 at CIRM, Luminy, France Proof of the Margulis Normal Subgroups Theorem February 2008 at the University of Chicago Amenable groups that act on the line April 2008 at University of Illinois, Chicago April 2008 at Northwestern University April 2008 at AMS meeting in Bloomington, Indiana February 2008 at Ohio State University March 2007 at the University of British Columbia November 2006 at the University of Alberta October 2006 at the University of Karlsruhe PDF file a more elementary talk on the same subject: Using recurrence to study symmetries of the real line March 2012 at the University of Virginia PDF file Dani's contributions to ergodic theory on homogeneous spaces December 2007 at Tata Institute in Mumbai, India PDF file Application of Bost's Theorem to subgroups of algebraic groups (a lecture in a course given by B.Farb and M.Kisin) October 2007 at the University of Chicago PDF file Some discrete groups that cannot act on 1-dimensional manifolds July 2007 in workshop on amenability at Schrodinger Institute in Vienna Part 1: Actions of amenable groups ( PDF file Part 2: Actions of arithmetic groups ( PDF file Part 3: 3 Major Theorems of Margulis ( PDF file (Theorems in Part 3 are not part of the announced topic, but are important and use methods similar to Part 2) Which circulant digraphs are hamiltonian? June 2007 in Koper, Slovenia PDF file Bounded generation of special linear groups (after Carter, Keller, and Paige) April 2008 at Vanderbilt University June 2005 at workshop in Banff PDF file Actions of arithmetic groups on the circle April 2005 at the University of Illinois, Chicago PDF file other talks on the same or a related subject: Some arithmetic groups that cannot act on the line (joint work with Lucy Lifschitz and Vladimir Chernousov) Alternate titles: Some arithmetic groups that cannot act on the circle, Some arithmetic groups that cannot act on 1-manifolds, Some arithmetic groups that cannot be right ordered March 2012 at Purdue University April 2010 at the University of Virginia December 2009 at the University of Lethbridge April 2008 at Vanderbilt University February 2008 at the University of Virginia January 2008 at Mississippi State University January 2008 at the University of Texas July 2007 at the University of Minnesota, Duluth March 2007 at the University of British Columbia July 2006 at Oberwolfach, Germany March 2006 at the University of Hawaii November 2005 at Texas A&M University March 2005 at AMS meeting in Lubbock, Texas March 2005 at Princeton University March 2005 at AMS meeting in Newark, Delaware March 2005 at Rice University February 2005 in Auckland, New Zealand November 2004 at the University of Alberta November 2004 at Lorentz Dynamics Workshop in Banff August 2004 at Alberta Topology Seminar in Banff June 2004 at Caltech April 2004 at the University of Regina PDF file Some arithmetic groups that cannot act on the circle (alternate title: Arithmetic groups that cannot be right-ordered ) December 2001 at Tata Institute (Mumbai, India) February 2002 at Case Western Reserve University and Virginia Tech PDF file Some arithmetic groups that cannot act on the circle March 2002 at Les Diablerets, Switzerland PDF file SL(3,Z) cannot act continuously on the circle October 1998 at the Ecole Normale Superieure - Lyon (France) PDF file Actions of semisimple Lie groups on circle bundles (joint work with Robert J. Zimmer) May 2000 at the Newton Institute (Cambridge, UK) March 2000 at the University of Manchester, England PDF file Cocompact Lattices January 2006 in Workshop on Property RD at the American Institute of Mathematics, Palo Alto PDF file Hamiltonian checkerboards November 2011 at the University of Lethbridge May 2010 at the University of Manitoba (Prairie Discrete Mathematics Conference) PDF file other talks on related subjects: Hamiltonian cycles in Cayley graphs August 2005 at the University of Winnipeg (Prairie Discrete Mathematics Conference) PDF file Open problems on hamiltonian cycles in Cayley graphs June 2006 in Minisymposium in SIAM conference at University of Victoria PDF file Hamiltonian cycles in circulant graphs and digraphs May 2004 at the University of Lethbridge (Combinatorics Day) PDF file Hamiltonian paths and cycles in vertex-transitive graphs and digraphs May 2003 at Simon Fraser University (conference for Brian Alspach's 65th birthday) PDF file Which flows are sums of hamiltonian cycles in abelian Cayley graphs? (joint work with Joy Morris and David Petrie Moulton ) May 2003 in Koper, Slovenia (Algebraic Combinatorics on the Adriatic Coast) PDF file Flows that are sums of hamiltonian cycles in abelian Cayley graphs (joint work with Joy Morris David Petrie Moulton March 2002 at Southeastern Combinatorics Conference in Boca Raton, Florida PDF file Hamiltonian paths in cartesian powers of directed cycles (joint work with David Austin and Heather Gavlas) November 2002 at the University of Lethbridge PDF file Touring a torus (alternate title: Hamiltonian Checkerboards March 2008 at the University of Michigan April 2002 at the University of Lethbridge, Canada November 2001 at Oklahoma State University This is an undergraduate-level talk. PDF file Geometric interpretation of the Q-rank of a locally symmetric space May 2004 at AMS/SMM Meeting in Houston (joint work with Pralay Chatterjee) PDF file another talk on a related subject: Orbits of Cartan subgroups on homogeneous spaces (after George Tomanov and Barak Weiss) December 2001 at Tata Institute (Mumbai, India) PDF file Real representations of sp(n) have Q-forms April 2003 at the University of North Carolina PDF file another talk on a related subject: Q-forms of real representations of compact semisimple Lie groups (after Raghunathan and Eberlein) October 2001 at OSU PDF file Gromov and Piatestski-Shapiro's Nonarithmetic Lattices in SO(1,n) September 2002 at Oklahoma State University PDF file Some ideas in the proof of Ratner's Theorem (alternate title: An Introduction to Unipotent Flows) September 2007 at Pennsylvania State University January 2007 at University of Calgary July 2002 at ETH, Zurich June 2002 at the University of Chicago February 2000 at the Newton Institute (Cambridge, UK) previously at a few other universities PDF file Ergodic actions of semisimple Lie groups on compact principal bundles (joint work with Robert J. Zimmer) April 2001 at the University of Illinois, Chicago PDF file Rigidity of some characteristic-p nillattices (joint work with Lucy Lifschitz) June 2000 at the Newton Institute (Cambridge, UK) PDF file What is a superrigid subgroup? (alternate title: Superrigid subgroups of solvable Lie groups) November 2013 at U of Utah and Idaho State U April 2002 at the University of Regina, Canada February 2002 at Virgina Tech May 2000 at the University of Birmingham, England PDF file Other talks on the same subject: More elementary: What is a superrigid subgroup? (August 1997 at the MAA Mathfest, Atlanta) More advanced: Superrigid subgroups of solvable Lie groups (April 1999 at the U of Chicago) Tessellations of homogeneous spaces of SU(2,n) (joint work with Alessandra Iozzi and Hee Oh ) March 2000 at the Newton Institute (Cambridge, UK) PDF file Cartan-decomposition subgroups (joint work with Hee Oh and Alessandra Iozzi ) September 1999 at the University of Michigan PDF file Transitive permutation groups of prime-squared degree (joint work with Edward Dobson) May 1999 Group Theory Junior Seminar at the University of Chicago PDF file Foliation-preserving maps between solvmanifolds (joint work with Holly Bernstein) March 1998 at Kansas State University AMS meeting PDF file Simple groups of real rank at least two have Kazhdan's property T January 28, 1998 Lie Groups Seminar at Oklahoma State University PDF file Introduction to Kazhdan's property T January 21, 1998 Lie Groups Seminar at Oklahoma State University PDF file
{"url":"http://people.uleth.ca/~dave.morris/talks.shtml","timestamp":"2014-04-19T23:34:26Z","content_type":null,"content_length":"31286","record_id":"<urn:uuid:bfda59dd-6fa0-49bd-9e69-d3ce658a41fa>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
Method and arrangement for generating program clock reference values (PCRS) in MPEG bitstreams A method of generating program clock reference values for a digital data stream is provided. The program clock reference value will preferably include a 33-bit base and a 9-bit extension. The method comprises the steps of receiving an input digital data stream having a pixel clock frequency, dividing the input frequency of the pixel clock and producing a counter clock that increments at a rate proportional to the pixel clock, multiplying the counter clock by a rational number to produce a number that indicates time expressed in 27 MHz periods and inputting the resulting value into a divider which divides every input by 300 to produce a quotient representing the Program Clock Reference value base and the remainder representing the Program Clock Reference value extension. A PCR generator for use in, for example, an MPEG encoder is also provided. Inventors: O'Grady; William J. (Yonkers, NY) Assignee: U.S. Philips Corporation (New York, NY) Appl. No.: 09/107,528 Filed: June 30, 1998
{"url":"http://patents.com/us-6195392.html","timestamp":"2014-04-19T14:39:23Z","content_type":null,"content_length":"41458","record_id":"<urn:uuid:d28087a7-0a05-4f91-85be-a7e37f1439ee>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
Expected Degree of a vertex in Delaunay Triangulations up vote 13 down vote favorite Assume you have a Poisson point process of constant intensity $\lambda$ in the Euclidean plane. From this point process we construct the Delaunay triangulation (or the Voronoi tessellation for that It is known [Stoyan et all] that the expected degree of a typical vertex has degree 6. Moreover, there are several results for the average area of a typical triangle, edge lengths and so on. However, all the results seem to be for constant intensity. My question is: Is it known how the expected degree changes as we change the intensity? More specifically, assume we have a rotationally invariant intensity $\rho(r)$ where $r$ is the distance to the origin and assume that $\rho(r)\to\infty$ as $r$ increases. How, does the expected degree of a node depends on its distance to the origin. The intuition is that the degree is going to increase as $r$ increases but is there any known result or reference in this direction? This is not my research area so I will appreciate any help or comment! pr.probability mg.metric-geometry stochastic-processes 1 Two naive comments. 1) The proof that the expected degree of a typical vertex has degree 6 is not difficult (if I remember correctly, apart from boundary terms, this is just a combinatorial result). So maybe it would not be that difficult to adapt the proof if this is possible. 2) Could it be related to Delaunay triangulation on hyperbolic spaces ? There may be a few results on it – camomille Mar 25 '11 at 23:49 2 @Camomille: Thanks for your comment. You raised a very good question. I don't know what is the expected degree for the Delaunay triangulation on the hyperbolic space but I guess there should be some results there...anyone knows? – ght Mar 26 '11 at 0:56 add comment 3 Answers active oldest votes The expected degree for the Delaunay triangulation on hyperbolic space will depend on the density of the points. With low enough density points, you should get arbitrarily high degree. You should be able to get the expected degree as high as you want for a point distribution in the plane by taking the conformal representation of the hyperbolic plane as a varying metric in a unit disc (the Poincare disc), and placing the points with density proportional to this metric. Then the density goes to infinity at the boundary of the circle. Note that since the Poincare disc model takes circles in the hyperbolic plane to circles in the disc, the Delaunay triangulation of a set of points (which is characterized by the fact up vote 7 that the circumcircle of every triangle does not contain another point) is the same in the disc and the hyperbolic plane. Thus, the expected average degree of the Delaunay triangulation down vote should be the same in both of these scenarios. The above construction satisfies the conditions stated in your question, but I don't know if you'll be satisfied. Did you want the points to be distributed on the entire plane? If so, I expect it's impossible because there's no way to put a conformal metric on all of $\mathbb{R}^2$ that gives the hyperbolic plane (although note that this isn't quite a proof). @Shor: Thank you for your comment. I didn't follow why did you say that for low enough density points, I should get arbitrary large degree. It seems to me that the bigger the density 1 the higher the average degree. More precisely, if the density is rotationally invariant and increases as $r\to 1$ (the boundary of the Poincare disk) then the average degree of nodes in the annulus $A(r,r+dr):=\{x\in\mathbb{D}:r\leq |x|\leq r+dr\}$ should also increase. Isn't this right? Your are absolutely right that the problem in $\mathbb{R}^2$ and the hyperbolic disk are quiet different. – ght Mar 26 '11 at 13:07 I mean the density in the hyperbolic plane. If you look at a small region of the hyperbolic plane, it looks a lot like the Euclidean plane, and the average degree will only be slightly more than 6. If you look at a large region of the hyperbolic plane, it looks very different from the Euclidean plane, and the average degree will be a lot more than 6. In the Euclidean plane, the average degree is determined not by the density, but by something like the rate of change of the density. – Peter Shor Mar 26 '11 at 13:54 I'm confused by your remark that "there's no way to put a metric on all of $\mathbb R^2$ that gives the hyperbolic plane." Did you mean for the word "conformal" to be in there somewhere? – Kevin Walker Mar 26 '11 at 15:16 @Kevin: Yes, "conformal" should have been in there. And for this question, we could add "rotationally invariant" as well, which I suspect leaves the Poincare metric as the only example. Thanks – Peter Shor Mar 26 '11 at 16:58 add comment Let's speak momentarily about the space average, rather than the expected degree. That is, consider the (expectation of the) average degree over all vertices in the disc of radius $R$ around the origin and take $R$ to infinity. I claim that unless the intensity increases really fast (exponentially?) this average will stay 6. The reason is that any simple finite planar graph has average degree less than 6. Hence, if the ball (in the graph metric) of radius $n$ in a planar graph has average degree, say, 7 then the size of the boundary of that ball is some constant fraction of the size of the ball and this holds for any $n$, thus requiring exponential growth (in the graph metric). I have not checked up vote it, but it seems that this implies exponential growth of the density function, and perhaps much more than this. Actually, right now I'm not sure whether you can get average degree 7 with any 9 down $\rho$ that does not blow to infinity in a finite radius (but maybe it's trivial in one way or another - it's late). As a side note, $\rho$ being monotone is not enough to guarantee that the expected degree is monotone. Since the expected degree is 6 for any constant density, if the density is almost constant in some large region then the expected degree there is roughly 6 in that region. 1 @ Ori: did you mean to say that "...any simple finite planar graph has average degree less than 6"? This is clearly not true! – ght Mar 26 '11 at 12:44 2 I did mean that - it's a classic. The key word here is "simple" - no loops or multiple edges. Assume the graph is a triangulation (otherwise add edges). Then use Eular's formula + 2E=3F to get a bound on 2E/V which is the average degree. – Ori Gurel-Gurevich Mar 26 '11 at 15:04 add comment There are also articles available online, that are related to that topic: up vote 0 down vote http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=879ADD299F6B2BF663BAA98F334D30A1?doi=10.1.1.40.1419&rep=rep1&type=pdf and a lot more are reported when feeding "distribution of vertex degrees in triangulations" into a popular search engine. add comment Not the answer you're looking for? Browse other questions tagged pr.probability mg.metric-geometry stochastic-processes or ask your own question.
{"url":"http://mathoverflow.net/questions/59580/expected-degree-of-a-vertex-in-delaunay-triangulations/59652","timestamp":"2014-04-20T11:18:50Z","content_type":null,"content_length":"71926","record_id":"<urn:uuid:e8f88091-7595-4a67-90ac-4bfe60ed1e04>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
IGEM:IMPERIAL/2007/Experimental Design/Phase1/Results 3.1 From OpenWetWare ← Previous diff m Line 18: Line 18: ==Results== ==Results== [[ Image:IC2007 Experimental Design Phase1 Protocol31Window-experiment.PNG|thumb|800px|left| [[ Image:IC2007 Experimental Design Phase1 Protocol31Window-experiment.PNG|thumb|800px|left| Fig.1: Variability of Fluorescence Measurement Using Different Counting Times]] Fig.1: Variability of Fluorescence Measurement Using Different Counting Times]] - [DEL:<br><br><br>:DEL] + From Fig.1, the most optimum window length to be used is found to be the 0.60 seconds counting From Fig.1, the most optimum window length to be used is found to be the 0.60 seconds counting time. The 0.15 secs counting time is, as expected, very random but as the counting times are time. The 0.15 secs counting time is, as expected, very random but as the counting times are increased by 0.15 secs at a time, the variability between the repeats decreased. When 0.75 secs increased by 0.15 secs at a time, the variability between the repeats decreased. When 0.75 secs counting time was reached, this variability starts to rise again, indicating that the window counting time was reached, this variability starts to rise again, indicating that the window length had surpassed optimum amount. length had surpassed optimum amount. ==Discussion== ==Discussion== Current revision Optimum Counting Time for the 'Twinkle' Fluorometer To determine the optimum counting time for the fluorometer while avoid fluorescent bleaching. The counting time is the time the fluorometer detector stays on top of each well. The fluorometer we are using is a Twinkle LB970 from Berthold Technologies. A small window time will only account for very discrete levels of fluorescence. These might include sudden spikes of radiation since fluorescence is not a uniform process. Hence we will get variation between samples of equal expression rates. A larger counting time results in a larger window size and hence a more average reading is taken from each sample smoothing out the variation due to the randomness of fluorescence emission. Care must be taken however because larger counting times will lead to faster fluorescence bleaching. A compromise between the two must therefore be found. Materials and Methods Refer to protocols page. From Fig.1, the most optimum window length to be used is found to be the 0.60 seconds counting time. The 0.15 secs counting time is, as expected, very random but as the counting times are increased by 0.15 secs at a time, the variability between the repeats decreased. When 0.75 secs counting time was reached, this variability starts to rise again, indicating that the window length had surpassed optimum amount. It was noticed that by changing the counting time(window) for which the detector remains on top of each well, the variation between repeated measurements varied. It is thus ideal to have as little variation as possible between repeats while maintaining a relatively low counting time to prevent excessive bleaching of the samples. Therefore a range of the smallest counting times possible (0.15 - 0.75 sec) was examined only. For each time point examined, different samples of the same stock solution where measured repeatedly(4 times) at different windows(counting time). The percentage(%) variability between the repeated measurements(4) of the same sample was then calculated for each window. Ideally, since the same sample is measured repeatedly, it is expected that the variability between the repeats to be zero(0). This however does not take into consideration the inherent randomness within fluorescence. From the results, a 0.60 window would allow the least variability across measurements over a 90 minute period while minimizing photobleaching of fluorescent proteins. • A counting time of 0.60sec would provide the least variability across measurements while minimizing photobleaching of the fluorescent proteins expressed.
{"url":"http://www.openwetware.org/index.php?title=IGEM:IMPERIAL/2007/Experimental_Design/Phase1/Results_3.1&diff=prev&oldid=158361","timestamp":"2014-04-25T04:10:34Z","content_type":null,"content_length":"23211","record_id":"<urn:uuid:79960438-1a93-4efb-9fdc-7c7084266225>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
he EPR experiment Locality in the EPR experiment I. The von Neumann collapse postulate In this section, we show that the postulate of von Neumann, that on measurement the wave function collapses to an eigenstate of the observable being measured, follows from Bayes's rule for conditioning probabilities in classical probability. Suppose a system is described by the unit rays of the Hilbert space H and the self-adjoint operators on it. In some works, it is said that a measurement of an observable, say X, in quantum mechanics causes the state to jump to one of the eigenstates of X. To understand this, it is better to divide the measurement, and the state reduction, into two stages. Suppose for the time-being that the initial state of the system is given by the pure vector state (phi>, and that the eigenstates of X are (psi(n)>, with no multiplicity. Suppose that n runs over the set J of labels. The first stage is the interaction of the system with the measuring device, and the second stage is the seeing of the reading on the device by the observer. After just the first stage, the state of the system is the mixture of all the eigenstates of X, with weight given by the quantum mechanical transition probability Thus, the system is clearly in a mixed state. A good measuring device is a classical system in which the pointer of the device is 100% correlated with the eigenstate into which the system is projected. More, the details of the device do not affect the reading. Thus, a complete description of the device is given by the label n, an element of J. Although a classical concept, the observables of the measuring device (the readings) have a quantum description as multiplication operators chi(n) on the Hilbert space L^2(J). Here, chi(n) is 1 at n, and zero elsewhere. The result of first stage of the measurement is thus described by the density operator sum[n]p(n) chi(n) tensor |psi(n))(psi(n)| on the tensor product of L^2(J) and H. The classical nature of the measuring device means that the natural basis vectors labelled by j in J of L^2(J) are separated by a superselection rule: no phase between these subspaces is observable. The second stage of the measurement process is the reading of the instrument by the observer. We can imagine the case when the system itself has left the neighbourhood of the instrument. Since the instrument is classical, it can be observed without affecting the system further. Only classical probability is involved in this stage: the label n is observed with the probability p(n). After the observation of say the label m, the observer, say Alice, A, knows that the system must be in the state psi(m), since there was a 100% correlation between states and readings. By reading the instrument, A replaces (by Bayes's formula) the probability p(n) by the conditioned probability p(. |given m), and the state of the system is reduced to psi(m). Thus, von Neumann's projection postulate follows from the usual Bayes rule for classical probability. More generally, if A had made an incomplete observation of the reading, and could only be sure that n lies in some subset K of J , the classical probability of the reading would change from p to p(.|K). Then Bayes's theorem shows that the density matrix of the system, as observed by A, is reduced to sum_{n\in K} p(n)|psi(n))(psi(n)|/[sum_{n\in K}p(n)] as given by von Neumann. A similar argument can be given if some eigenvalues of X are not simple. If X has a continuous part to its spectrum, the classical theory (of conditional probability) is harder, but the argument can be given; it leads to von Neumann's projection postulate as well. Thus, the first stage of the measurement process is caused by the physical interaction of the system with the device; the second stage, the reading, involves the conciousness of the observer, as remarked by Wigner. This does not mean that physics is subjective, any more than it does in classical probability. Indeed, once X has been singled out for measurement, and the measurement has been done, the theory reduces to classical probability in its techniques and interpretation. II. Completely positive unital maps Any self-adjoint operator X possesses a spectral measure P(lambda), which defines a projection-valued measure on the line R. This means that to any Borel subset E of the real line, is given a P(E)=int[E] dP(lambda) In the case of discrete spectrum, this becomes a family of projections, P(lambda), one for each point of the spectrum of X. Then we can summarise the von Neumann collapse postulate, derived above from Bayes's rule, by saying that the first stage of the measurement causes the state |phi), with density matrix |phi)(phi|, to change to sum P(n)|phi)(phi|P(n) More generally, by linearity, if the density matrix of a state before measurement is rho, then that after the first stage of measuring A is sum[n] P(n) rho P(n) By observing the reading, to be m \in J say, the observer picks out the projection onto the eigenstate |psi(m)). If instead of measuring one observable, the device is designed to simultaneously measure commuting observables X(1), X(2), ..., X(r). then we make use of the joint spectral measure P(lambda(1),..., lambda(r)) of this abelian set, and the first stage of the process is given by the map rho mapsto sum_{lambda(1),...,lambda(r)}P{lambda(1),...) rho P(lambda(1),...} This defines a completely positive unital map on the space of operators. A unital map is simply one that maps the unit operator to itself; that this is so here follows at once by putting rho = I, and using the fact that P^2=P for any projection. Then we complete the proof by noting that for any spectral family, the sum adds up to one: sum_{lambda(1),..., lambda(r)}P{lambda(1),...,lambda(r)}=1. A positive map is a linear map that takes a positive operator to a positive operator (positive means positive semidefinite). It is said to be completely positive if its tensor with the identity I[n] is positive for all natural numbers n (here, I[n] denotes the unit operator on the n-dimensional complex Hilbert space C[n]). Thus, experts in measurement theory postulate that the first stage of any measurement of a compatible set of observables is, generally, a completely positive unital map on the set of density operators. To illustrate the EPR experiment (I do not call it a paradox), we do not need the generalisation proposed by Davies and Lewis, which involves the introduction of positive-operator-valued III. The observation of entangled states. States consisting of two parts, that are spatially distant but nevertheless are correlated, can occur in classical probability. Bell calls this the paradox of Dr. Bertelmann's socks. If Dr. Bertelmann shows us the colour of one of his socks, say it is red, by raising his trouserleg, we can with some certainty guess that the colour of his other sock is red. Our guess will be more reliable after he has shown us the info than before. We would not try to argue that our seeing the red of one sock had any physical influence on the other. No; our seeing it simply reveals what was there; it is our assessment of the probability that is changed. See my experience of a lecture on this by J. S. Bell. Similarly, in a game of cards, when I see that my hand contains an ace of spades, I change the probabilities, from what they were before I saw the hand. This information does not change the objective fact of my opponent's hand as viewed by him. My seeing the ace of spades does NOT change the assessment that my opponent makes of my chances of holding this card (unless I send him some signal concerning my hand). In the same way, a measurement of the spin of one of a pair of EPR -electrons does not alter the state of the other, as seen by the other observer. We can use this classical argument because the measurement of a complete commuting set of observables (A's spin S(1) and B's spin S(2) ) has set up a classical probability model in the sense of Kolmogorov. In his book, Mind, Matter and Quantum Mechanics [Springer-Verlag, 1993, 2004], p. 29, Stapp says that this result in classical probability is entirely non-problematic, and does not require that there be any communication between the players to ensure that the hands are correlated. He says that the situation is quite different in quantum mechanics, but does not explain why. Penrose also gives a similar example, and also says that the classical situation does not require any instantaneous signal to travel between the observers. Again, he claims that the situation in quantum measurement is different. Our view is that the interpretation of quantum mechanics is precisely that of the classical probability set up by ther measurement using the chosen complete set of commuting observables. As remarked by Peierls, quantum mechanics is the Copenhagen interpretation. We now prove in detail that B's assessment of his state is unchanged by A's measurement. The reader will see that an attitude similar to that necessary in the theory of games has been adopted. Indeed, the analysis below is a simple example of a quantum game. This point of view is outlined in my review of Mielnik's article `The paradox of two bottles in quantum mechanics', Found. Phys. 20, 745-755, 1990, and has been advocated in my book, Statistical Dynamics. Suppose that an atom emits two electrons in a singlet spin state in opposite directions, which are observed by Alice (A) and Bob (B) at two far-separated sites. The state of the spin is pure, and is given by the vector psi = 2^(-1/2){ |+)|-) - |-)|+) } The spin measurement, in the third direction, by A is achieved by the completely positive unital map which, on a state rho, is given by M(3) rho = P(+) rho P(+) + P(-) rho P(-) where P(+) and P(-) are the projections onto the two eigenstates of J(3), the operator representing the spin of A's particle in the third direction. From the point of view of the total system, A's measurement uses the completely positive stochastic map M(3) tensor I. Because this operator consists of the identity on B's Hilbert space, the measurement by A does not alter the partial state seen by B, namely, the restriction, rho| alg(B), of rho to alg(B), the observable algebra of B: [(M(3) tensor I) rho]| alg(B) = rho| alg(B) As for the total state, the density operator rho(psi) of the pure state psi, on measurement it changes as follows: (M(3)tensor I)rho(psi) = 1/2[ P(+) tensor P(-) + P(-) tensor P(+) ]........................(1) Thus the measurement by Alice, of J(3) of her particle, without observing the result, leads to the classical mixture with equal weights, of the two possible results, spin up and spin down, 100% anti-correlated with the spin at B. The pointer of her instrument is a classical random variable, not in the microsystem; it is 100% correlated with Alice's spin. By looking at the reading of her pointer, Alice will condition the microstate by knowledge, and will produce a pure state, and will also find out what the the spin of B's particle would be, if measured in the 3-direction. If A has measured S(3) and has seen her result, then the state she assigns to the whole algebra, the conditioned state, is the pure state, and B's result, if he measures it, is sure for Alice (though not for Bob). Some people regard this as a state-preparation by Alice for Bob. This makes sense only if he is informed of her result by Alice. B can then measure his spin, confirming the conservation law. His measurement brings no new info from the point of view of the total system; this is why his measurement alters his state but not that assigned by both of them if they share info. But as long as there is no signal from A, B's initial state is the completely mixed state of the unpolarised electron, and his spin measurement (with no looking at the result) does not alter his partial state. If A makes two measurements, of J(3) and then J(2) say, in that order, then the first, J(3), will tell her the result that B would get if he measured his J(3), but the second would not tell her the result that B would get if he measured his J(2): he would not necessarily get the opposite result from hers. Roughly, this is because the first measurement interfered with the system, spoiling the conservation of total spin in the 2-direction. Don't take my word for this; just look at the calculation given now. After A's measurement of J(3), the state is as above: [M(3) tensor I] rho = 1/2[P(-) tensor P(+) + P(+) tensor P(-)]........................(1) When we apply [M(2) tensor I] to this, we do not alter the partial state of B, because of the unit factor in the operation. Thus we do not alter the fact that the value of J(3) that B would find if he measured it is anti-correlated 100% with the quantum record held by A; but if Bob as well as Alice measure J(2), there will be no correlation between the results for J(2). If A's and B's second measurement is not J(1) or J(2), but contains a small component along J(3), then there will be a small but not 100% anticorrelation between the second measurements. The theory tells us exactly what to expect. Let us do the case when both measure J(2). Because the state is now given by eq (1) above, we need to find [M(2) tensor I] [P(+) tensor P(-)] and [M(2) tensor I] [P(-) tensor P(+)] to find the effect of A's second measurement. On A's Hilbert space, this reduces to finding M(2)P(+), since M(2)P(-) = 1 - M(2)P(+). This is easy: M(2)P(+) = P{J(2) = +}P{J(3) = +}P{J(2) = +} + P{J(2) = -}P{J(3) = +}P{J(2) = -} = 1/2[P{J(2) = +} + P{J(2) = -}]. Here, we have denoted by P{J(2) = +} the spectral projection onto its eigenvalue +1/2, and so on, and for clarity have used the same notation for what we called P(+) and P(-), namely P{J(3) = +} for P(+) etc. Put this in the formula, and we see that the state after both A and B have made their second measurement is the equal mixture of the four possible pure states, the projections onto the four eigenstates of J(2) tensor J(2): there is now no correlation. With probability 1/4, both A and B could find that J(2) for their sample is +1/2, and the total is not zero. This violation of the law of conservation of angular momentum is caused by the macroscopic intervention of the measuring device used by A in her first measurement, of J(3). This device did not interfere with the conservation of J(3), but did interfere with the law for J(2). We see that the idea of Einstein, Podolski and Rosen, that one can measure a property of B's particle in this set-up by measuring the same property of A's particle, and then using a conservation law, works only for the first measurement. In the relativistic case, in place of the tensor product we have a local C*-algebra with local structure, along the lines of a Haag field. A and B will now be space-like separated, and the measurement of a local property by A or B will be done using a CP unital map M(A) or M(B) which acts on the whole algebra, but which is the identity map when restricted to algebras space-like to the region of space-time in which the measurement takes place. The two maps must commute if Alice and Bob are space-like separated. The calculation can then go exactly as for the non-relativistic case above. See my review article for more details. The recent article in the NEW SCIENTIST entitled "Quantum Entanglement: How the future can influence the past", by Michael Brooks, 27 March 2004, is completely wrong. The future cannot influence the past. Nor is there such a thing as "remote control", as claimed in the article on page 32. It is not true that "if something affects the quantum state of one particle, it will inevitably affect the quantum state of the other [entangled particle], no matter how far apart they are" [page 32]. As we proved above, the state of the second particle (as viewed by Bob) is unchanged by Alice's The claim that the future can change the past is based on a slip, common in the interpretation of statistical correlations. Brooks talks about measuring the spin of a photon, and then measuring it again later, and getting a different result [page 35]: "...the very act of measuring the photon a second time can affect how it was polarised earlier on". This is of course impossible if the first measurement was recorded on a classical instrument. There will be correlations between the two results, but this merely allows us to conclude that there is an association between the spins, not that the later spin-value was the cause of the earlier one. Indeed, the criterion of priority, one of three needed to come to this conclusion, is not satisfied. The other criterion, that of direction, also fails. See my article EPR, cot deaths and the dangers of cannabis. Brooks's article speculates that this property, entanglement, might be behind life. This idea is not new, as it can be found in "The Emperor's New Mind", a book by Roger Penrose, and is also in the article on the mind by Stapp, reviewed by me here. This article is a summary of Stapp's book, "Mind, Matter and Quantum Mechanics", Springer-Verlag, 1993, 2004. The arguments for the idea were wrong when Penrose wrote, and wrong when Stapp wrote; they remain wrong when the New Scientist writes them. D'Ariano argues that the collapse of the wave-function takes place simultaneously over all space. Taken literally, this would imply the instantaneous transfer of information. We see that the solution to this problem is to assign information algebras to each observer, as in the theory of games, so that different observers assign different states to the same physical system. This is done only after a measurement. The argument that physics becomes subjective instead of objective has no more force here than in classical probability. Indeed, after a measurement, the quantum record (the classical pointers of the measuring instruments) is an objective fact; the state at this stage merely describes Alice's or Bob's knowledge of the event, and this depends on which of them can see the pointers. The description of the quantum results by classical events depends of which complete set of commuting observables was chosen by Alice and Bob. Thus, a different classical model is needed for each context. Note that it is the classical description that is contextual: the assignment of random variables to observables depends on the context; the mapping between observables and self-adjoint operators is the same whatever complete commuting set is contemplated. So I claim that quantum theory is non-contextual. If an incomplete set of commuting observables is measured, then the description of the resulting state is only partially classical. For example, if Alice measures J(3), and sees the result, it would be wrong for her to claim that Bob's particle has the opposite value of J(3) from hers. She does NOT know that there is a classical pointer showing the opposite result from her pointer; the experiment might not have been done. Indeed, for all she knows, Bob has already measured his J(2), and has a pointer to prove it, before she did her measurement. Thus, the only safe claim for Alice is that Bob would find, or would have found, the opposite value of J(3) if he were to make, or has made, the measurement of his J(3). In most first courses in quantum mechanics, and of probability, only one observer is discussed, and so D'Ariano's point of view is correct in that case. The EPR experiment pin-points the need for subjectivity in quantum probability; the same need in classical probability has been known and used since Bayes. A similar stand is taken by E. B. Davies in his book "Science in the Looking Glass", OUP, 2004.
{"url":"http://www.mth.kcl.ac.uk/~streater/EPR.html","timestamp":"2014-04-20T01:35:45Z","content_type":null,"content_length":"22308","record_id":"<urn:uuid:be0ce6eb-aac8-4a91-9a7a-bdde338e8f46>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Induced subgraphs with distinct sizes Noga Alon A.V. Kostochka April 1, 2008 We show that for every 0 < < 1/2, there is an n0 = n0( ) such that if n > n0 then every n-vertex graph G of size at least n 2 and at most (1 - ) n 2 contains induced k-vertex subgraphs with at least 10-7k different sizes, for every k n 3 . This is best possible, up to a constant factor. This is also a step towards a conjecture by Erdos, Faudree and S´os on the number of distinct pairs (|V (H)|, |E(H)|) of induced subgraphs of Ramsey graphs. AMS Subject Classification: 05C35, 05D40 Keywords: Induced subgraphs, size of subgraphs 1 Introduction For a graph G = (V, E), let hom(G) denote the maximum number of vertices in a clique or an independent set in G. An n-vertex graph is c-Ramsey, if hom(G) c log n. Erdos, Faudree and S´os (see [6], [7]) raised the following conjecture.
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/178/2435674.html","timestamp":"2014-04-16T17:59:33Z","content_type":null,"content_length":"7944","record_id":"<urn:uuid:5fbc0e91-78b8-43c8-ac36-e90d0efb2a6e>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
Lecture on Knots and the Four-Color Theorem Lecture on Knots and the Four-Color Theorem Can you (re)color this map using only four colors? (A famous theorem says you [DEL:can:DEL] should be able to.) Are these two knots the same? (What does that even mean?) Monday, October 22nd In the Undergraduate Colloquium Series in the Mathematical Sciences, speakers connect mathematics and statistics to a variety of disciplines within the college and beyond. Last year, the series saw lectures on tsunamis, virus outbreaks, music, satellite communication, and more. So far this year, we’ve learned about two new trends in statistical analysis: “swarming” and the emerging field of “algebraic statistics.” This week, the series welcomes Emily Peters (Northwestern), who will explain what “planar algebras” have to say about knot theory and the four-color theorem. Lecture: 4:30 p.m., Cuneo Hall 312 Meet the Speaker: 4:00 p.m., Cuneo Hall 312 w/ tea & cookies Details: http://gauss.math.luc.edu/ucms/ Hosted by the Department of Mathematics and Statistics, with support from the CAS special events fund.
{"url":"http://blogs.luc.edu/mathstats/2012/10/20/lecture-on-knots-and-the-four-color-theorem/","timestamp":"2014-04-18T20:44:34Z","content_type":null,"content_length":"19672","record_id":"<urn:uuid:108a6578-2545-4fc0-b3a7-6631e3b9e03d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from August 2008 on Luke Palmer Here are some recordings from my birthday session and a recent session. Both feature Devon DeJohn on the guitar. The 8/16 session was sans Evan, who was busy with his anniversary. All these tracks are good. Well, the last one is good in its own way… Enjoy! Slicing open the belly of the IO monad in an alternate universe I’ve been looking for a way to do the pieces of I/O that are well-defined in the framework of FRP. For example, fileContents "someFile" is a perfectly good function of time, why should we be forced to drop into the semantic fuzziness of the IO monad to get it? Well, after a long talk with the Anygma folks about it, getting not very far in the practical world, I decided to do something else to quench my thirst for the time being since software needs to get written. I’m going to have the FRP program request actions by sending sinks to the top level, and then have the results of those requests come back via futures. So: type Sink a = a -> IO () type Pipe = forall a. (Future a, Sink a) data Stream a = a :> Stream a liftFRP :: Sink a -> IO a -> IO () liftFRP sink a = a >>= sink Okay, that’s nice, but how do we prevent uses of these things from becoming a tangled mess. Input always needs to communicate with output, and it seems like it would just be a pain to coordinate that. It turns out we can stick it in a monad: newtype StreamReaderT v m a = SRT { runSRT :: StateT (Stream v) m a) } deriving (Functor, Monad, MonadTrans) readNext = SRT $ do (x :> xs) <- get put xs return x type IO' = StreamReaderT Pipe (WriterT (IO ()) Future) -- using the IO () monoid with mappend = (>>) liftFRP :: IO a -> IO' a liftFRP io = do (fut,sink) <- readNext tell (io >>= sink) (lift.lift) fut unliftFRP :: IO' a -> IO a unliftFRP m = do stream <- makePipeStream -- a bit of magic here ((fut,_),action) <- runWriterT (runStateT (runSRT m) stream) return $! futVal fut It looks like IO’ has the very same (lack of) semantics as IO. liftFRP and unliftFRP form an isomorphism, so we really can treat them as pulling apart the IO monad into something more versatile, and then putting it back together. Also we get nice fun parallel version of liftFRP. I can’t decide if this should be the common one or not. liftParFRP :: IO a -> IO' a liftParFRP io = do (fut,sink) <- readNext tell (forkIO (io >>= sink)) (lift.lift) fut So using the liftFRP and unliftFRP, we are no longer “trapped in IO” as some folks like to say. We can weave in and out of using IO as we’re used to and using requests and futures when convenient. For example, it’s trivial to have a short dialog with the user via the console, and have the real time program ticking away all the while. Fun stuff! All functions are continuous, always Dan Piponi and Andrej Bauer have written about computable reals and their relationship to continuity. Those articles enlightened me, but only by way of example. Each of them constructed a representation for real numbers, and then showed that all computable functions are continuous on that representation. Today, I will show that all functions are continuous on every representation of real numbers. This article assumes some background in domain theory. I’m going to use the following definition of analytic continuity: f is continuous if for any chain of open sets x[1] ⊇ x[2] ⊇ …, ∩[i] f[x[i]] = f[∩[i] x[i]] Where ∩ denotes the intersection of a set of sets, and f[x] denotes the image of f under x (the set {f(z) | z in x}). This means that for a continuous function, the intersection of a bunch of images of that function is the same as the image of the intersection of the sets used to produce those images (whew!). It might take you a little while to convince yourself that this really means continuous in the same way as it is normally presented. The proof is left as an exercise to the reader (yeah, cop-out, I I chose this definition because it is awfully similar to the definition of Scott continuity from domain theory, which all computable functions must have. A monotone function f is scott-continuous if for any chain of values x[1] ⊑ x[2] ⊑ …, sup[i] f(x[i]) = f(sup[i] x[i]). Where ⊑ is the “information” partial ordering, and sup is the supremum, or least upper bound, of a chain. Analogous to the last definition, this means that the supremum of the outputs of a function is the same as the function applied to the supremum of the inputs used to create those outputs. What I will do to show that every computable function is continuous, no matter the representation of reals, is to show that there is a homomorphism (a straightforward mapping) from Scott continuity to analytic continuity. But first I have to say what it means to be a real number. It turns out this is all we need: fromConvergents :: [Rational] -> Real toConvergents :: Real -> [Rational] These functions convert to and from infinite convergent streams of rationals. They don’t need to be inverses. The requirement we make is that the rational at position n needs to be within 2^-n of the actual number represented (the same as Dan Piponi’s). But if a representation cannot do this, then I would say its approximation abilities are inadequate. To make sure it is well-behaved, toConvergents(fromConvergents(x)) must converge to the same thing as x. Now we will make a homomorphism H from these Reals (lifted, so they can have bottoms in them) to sets (precisely, open intervals) of actual real numbers. Range will be a function that maps lists to a center point and an error radius. Range(⊥) = 〈0,∞〉 Range(r:⊥) = 〈r,1〉 Range(r:rs) = 〈r’,e/2〉 where 〈r’,e〉 = Range(rs) And now H: H(x) = (r-e,r+e) where 〈r,e〉 = Range(toConvergents(x)) H(⊑) = ⊇ H(sup) = ∩ H(f) = H ° f ° fromConvergents ° S, where S(x) gives a cofinal sequence of rational numbers less than x that satisfies the error bound requirement above. The hard part of the proof is H ° f ° fromConvergents = H(f) ° H ° fromConvergents for fully defined inputs; in other words that the homomorphism does actually preserve the meaning of the function. It boils down to the fact that H ° fromConvergents ° S is the identity for fully defined inputs, since Range(x) is just {lim x} when x is fully defined. I expect some of the details to get a bit nasty though. Left as an exercise for a reader less lazy than the author. And that’s it. It means when you interpret f as a function on real numbers (namely, H), it will always be continuous, so long as the computable real type you’re using has well-behaved toConvergents and fromConvergents functions. Intuitively, H maps the set of convergents to the set of all possible numbers it could represent. So ⊥ gets mapped to the whole real line, 0:⊥ gets mapped to the interval (-1,1) (all the points within 1 of 0), etc. The analytical notion of continuity above can be generalized to any function on sets, rather than just f[] (the image function of f). This means we can define continuity (which is equivalent to computability) on, for example, functions from Real to Bool. This was a fairly technical explanation, where I substituted mathematical reasoning for intuition. This is partially because I’m still trying to truly understand this idea myself. Soon I may post a more intuitive / visual explanation of the idea. If you want to experiment more, answer: what does it mean for a function from Real -> Bool to be continuous? Mindfuck: The Reverse State Monad Someone in the #haskell IRC channel mentioned the “reverse state monad” explaining that it used the state from the next computation and passed it to the previous one. Well, I just had to try this! First, a demonstration: we will compute the fibonacci numbers by starting with them and mapping them back to the empty list. -- cumulativeSums [1,2,3,4,5] = [0,1,3,6,10,15] cumulativeSums = scanl (+) 0 computeFibs = evalRState [] $ do -- here the state is what we want: the fibonacci numbers fibs <- get modify cumulativeSums -- now the state is the difference sequence of -- fibs, [1,0,1,1,2,3,5,8,13,...], because the -- cumulativeSums of that sequence is fibs. Notice -- that this sequence is the same as 1:fibs, so -- just put that to get here. put (1:fibs) -- And here the state is empty (or whatever else -- we want it to be, because we just overwrite it on -- the previous line -- but we defined it to be -- empty on the evalRState line) return fibs And sure enough: >>> take 15 computeFibs And now the implementation: newtype RState s a = RState { runRState :: s -> (a,s) } evalRState s f = fst (runRState f s) instance Monad (RState s) where return x = RState $ (,) x RState sf >>= f = RState $ \s -> let (a,s'') = sf s' (b,s') = runRState (f a) s in (b,s'') get = RState $ \s -> (s,s) modify f = RState $ \s -> ((),f s) put = modify . const The important part is the definition of (>>=). Notice how the data (a,b) flows forward, but the state (s,s’,s”) flows backward. I just had my mind blown by the trial of Braid, by Jonathan Blow, which just came out on XBox Live Arcade. This is the most interesting puzzle game I have played in many years. It’s a platformer about playing with time, and in incorporates this very effectively to allow clever solutions to puzzles which seem impossible. Not very many games get my money these days, but this one does! Composable Input for Fruit The other day I had an idea for a game which required a traditionalish user interface (text boxes, a grid of checkboxes, …). But I’m addicted to Haskell at the moment, so I was not okay with doing it in C#, my usual GUI fallback. Upon scouring hackage for a GUI widget library I could use, I realized that they all suck—either they are too imperative or too inflexible. So I set out to write yet another one, in hopes that it wouldn’t suck. The plan is to write it on top of graphics-drawingcombinators, my lightweight declarative OpenGL wrapper, and reactive, the latest (in progress) implementation of FRP. In researching the design, Conal pointed me to a paper on “Fruit”, which is a very simple design for GUIs in FRP. It’s nice (because it is little more than FRP itself), and it’s approximately what I’m going to do. But before I do, I had to address a big grotesque wart in the design: data Mouse = Mouse { mpos :: Point, lbDown :: Bool, rbDown :: Bool } data Kbd = Kbd { keyDown :: [Char] } type GUIInput = (Maybe Kbd, Maybe Mouse) type GUI a b = SF (GUIInput,a) (Picture,b) So every GUI transformer takes a GUIInput as a parameter. The first thing that caught my eye was the Maybes in the type of GUIInput, which are meant to encode the idea of focus. This is an example of the inflexibility I noticed in the existing libraries: it is a very limited, not extensible, not customizable idea of focus. But there is something yet more pressing: this input type is not composable. The type of input is always the same, and there is no way to build complex input handling from simple input handling. I took a walk, and came up with the following: Scrap GUI. Our interface will be nothing more than pure FRP. But that doesn’t solve the input problem, it just gives it to the users to solve. So to solve that, we build up composable input types, and then access them using normal FRP methods. We will start with Kbd and Mouse as above. The problem to solve is that when we pass input to a subwidget, its local coordinate system needs to be transformed. So the only cabability input types need to have is that they need to be transformable. -- A class for invertable transformations. We restrict to affine transformations -- because we have to work with OpenGL, which does not support arbitrary -- transformation. class Transformable a where translate :: Point -> a -> a rotate :: Double -> a -> a scale :: Double -> Double -> a -> a instance Transformable Point where -- .. typical affine transformations on points -- Keyboard input does not transform at all instance Transformable Kbd where translate _ = id rotate _ = id scale _ _ = id -- The mouse position transforms instance Transformable Mouse where translate p m = m { mpos = translate p (mpos m) } rotate theta m = m { mpos = rotate r (mpos m) } scale sx sy m = m { mpos = scale sx sy (mpos m) } -- Behaviors transform pointwise. In fact, this is the instance -- for Transformable on any Functor, but we have no way of telling -- Haskell that. instance (Transformable a) => Transformable (Behavior a) where translate = fmap . translate rotate = fmap . rotate scale sx sy = fmap (scale sx sy) Widgets that accept input will have types like: Behavior i -> Behavior o where both i and o are transformable (o is usually a Drawing, or a Drawing paired with some other output). So we can transform a whole widget at once by defining a transformable instance for functions. instance (Transformable i, Transformable o) => Transformable (i -> o) where translate p f = translate p . f . translate (-p) rotate r f = rotate r . f . rotate (-r) scale sx sy = scale sx sy . f . scale (recip sx) (recip sy) The way we transform a function is to inversely transform the input, do the function, then transform the output. This is called the conjugate of the transformation. And that’s it for composable input: just a class for affine transformations. A typical GUI might look like: -- Takes a mouse position, returns the "pressed" state and its picture. button :: Behavior Mouse -> Behavior (Bool,Drawing) And if we want two buttons: twoButtons = (liftA2.liftA2) (second over) button (translate (1,0) button) That is, just transform each subGUI as a whole (rather than separating input and output) and combine appropriately. That’s the theory, at least. For this to actualy work correctly, we would need one of the following: instance Transformable b => Transformable (a,b) instance Transformable Bool -- do nothing Neither of these rubs me the right way. That seems like the wrong instance of transformable (a,b) to me (however, (a,) is a functor, so it’s consistent with what I said earlier). I don’t like having transformable instances for things that don’t actually transform. I’m thinking about maybe a type like this: newtype WithDrawing a = WithDrawing (a,Drawing) instance Transformable (WithDrawing a) (Or the appropriate WithTransformable generalization)
{"url":"http://lukepalmer.wordpress.com/2008/08/","timestamp":"2014-04-20T05:44:36Z","content_type":null,"content_length":"67337","record_id":"<urn:uuid:b2d1c07c-12c0-462d-a6f8-5b7611328e7e>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptology ePrint Archive: Report 2011/639 Towards a Probabilistic Complexity-theoretic Modeling of Biological Cyanide Poisoning as Service Attack in Self-organizing NetworksJiejun Kong, Dapeng Wu, Xiaoyan Hong, Mario GerlaAbstract: We draw an analogy of \emph{biological cyanide poisoning} to security attacks in self-organizing mobile ad hoc networks. When a circulatory system is treated as an enclosed network space, a hemoglobin is treated as a mobile node, and a hemoglobin binding with cyanide ion is treated as a compromised node (which cannot bind with oxygen to furnish its oxygen-transport function), we show how cyanide poisoning can reduce the probability of oxygen/message delivery to a rigorously defined ``negligible'' quantity. Like formal cryptography, security problem in our network-centric model is defined on the complexity-theoretic concept of ``negligible'', which is asymptotically sub-polynomial with respect to a pre-defined system parameter $x$. Intuitively, the parameter $x$ is the key length $n$ in formal cryptography, but is changed to the network scale, or the number of network nodes $N$, in our model. We use the $\RP$ ($n$-runs) complexity class with a virtual oracle to formally model the cyanide poisoning phenomenon and similar network threats. This new analytic approach leads to a new view of biological threats from the perspective of network security and complexity theoretic study. Category / Keywords: foundations / biochemical science based on complexity theoryDate: received 25 Nov 2011Contact author: jiejunkong at yahoo comAvailable format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | PDF | BibTeX Citation Version: 20111129:220737 (All versions of this report) Discussion forum: Show discussion | Start new discussion[ Cryptology ePrint archive ]
{"url":"http://eprint.iacr.org/2011/639","timestamp":"2014-04-21T02:04:39Z","content_type":null,"content_length":"3098","record_id":"<urn:uuid:338aee17-13ca-45a9-8ce3-f5adbbe5aeb7>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
Chevy Chase Village, MD SAT Math Tutor Find a Chevy Chase Village, MD SAT Math Tutor ...III. Integrals A. Interpretations and Properties of Integrals. 21 Subjects: including SAT math, calculus, statistics, geometry ...I value a student's desire to learn and commitment to having a good educational relationship. Being open about your needs, concerns, and things that are going well is very helpful to improving your math skills. My promise is to be supportive and to ensure that you have the best chance to do well. 15 Subjects: including SAT math, chemistry, calculus, geometry ...In many math classrooms today, teachers show their students one way to solve a problem, and then the students simply mimic a series of steps. This approach does not promote conceptual understanding! Students need to be able to think critically and creatively when they face a new problem. 16 Subjects: including SAT math, English, writing, calculus ...I tell you how to perform tasks on your own to minimize tutoring costs. I know how to speak in plain language and know how to listen. I myself learn from every student, and I have never stopped learning. 25 Subjects: including SAT math, chemistry, reading, writing ...I was very successful as their tutor. I enjoy math and I am very patient. As an engineering professor, I use calculus often and provide one to one tutoring to my engineering students. 15 Subjects: including SAT math, chemistry, calculus, algebra 2 Related Chevy Chase Village, MD Tutors Chevy Chase Village, MD Accounting Tutors Chevy Chase Village, MD ACT Tutors Chevy Chase Village, MD Algebra Tutors Chevy Chase Village, MD Algebra 2 Tutors Chevy Chase Village, MD Calculus Tutors Chevy Chase Village, MD Geometry Tutors Chevy Chase Village, MD Math Tutors Chevy Chase Village, MD Prealgebra Tutors Chevy Chase Village, MD Precalculus Tutors Chevy Chase Village, MD SAT Tutors Chevy Chase Village, MD SAT Math Tutors Chevy Chase Village, MD Science Tutors Chevy Chase Village, MD Statistics Tutors Chevy Chase Village, MD Trigonometry Tutors Nearby Cities With SAT math Tutor Bethesda, MD SAT math Tutors Brentwood, MD SAT math Tutors Cabin John SAT math Tutors Chevy Chase SAT math Tutors Chevy Chs Vlg, MD SAT math Tutors Colmar Manor, MD SAT math Tutors Cottage City, MD SAT math Tutors Fort Myer, VA SAT math Tutors Garrett Park SAT math Tutors Kensington, MD SAT math Tutors Martins Add, MD SAT math Tutors Martins Additions, MD SAT math Tutors N Chevy Chase, MD SAT math Tutors Somerset, MD SAT math Tutors West Mclean SAT math Tutors
{"url":"http://www.purplemath.com/Chevy_Chase_Village_MD_SAT_Math_tutors.php","timestamp":"2014-04-20T08:45:17Z","content_type":null,"content_length":"24289","record_id":"<urn:uuid:c2ed7470-5914-43f2-a52b-f79b8438c92f>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
Is a certain A-infinity algebra (homologically) smooth ? up vote 3 down vote favorite An A-infinity algebra is smooth a'la Kontsevich if it is perfect as an A-A bimodule. I am wondering about the standard tricks to show smoothness of given algebras. A relatively basic example should be the following. I have a guess that the following Z/2Z graded A-infinity algebras over C should be smooth even though their "underlying associative algebra" isn't. The algebra A_n is a vector space 1,e, where e is in even degree. The "classical" multiplications are 1 acts as a unit and e*e=0. There is only one higher multiplication, e^(tensor power)2n-->1, n>1. If I did my math right, this defines an A-infinity algebra. The basis for the guess of smoothness is that the Hochschild homology is finite dimensional, which would be a corollary of perfection, but that doesn't quite prove it without some other statement about the perfection of finite-dimensional modules over A "tensor" A-op(which would also imply the statement directly of course). I have struggled quite a bit unsuccessfully to brute force this. I have tried to find a suitable dg-algebra equivalent to A and then to compute explicitly A-"tensor"-A-op and then write down a resolution of A. This was too gritty for me, though maybe a more insightful person could make it work. Is the fact true? Can anyone give an explanation/proof? homological-algebra at.algebraic-topology Can you be more precise about the waymultiplications work in the algebra? – Mariano Suárez-Alvarez♦ Jul 3 '10 at 14:49 Hopefully my edits made it clearer... – Daniel Pomerleano Jul 3 '10 at 15:18 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged homological-algebra at.algebraic-topology or ask your own question.
{"url":"http://mathoverflow.net/questions/30413/is-a-certain-a-infinity-algebra-homologically-smooth","timestamp":"2014-04-17T10:23:25Z","content_type":null,"content_length":"48458","record_id":"<urn:uuid:d0ea3542-0d4f-497a-9132-059a15b6e71f>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00383-ip-10-147-4-33.ec2.internal.warc.gz"}
Just some jokes I found! A math student is pestered by a classmate who wants to copy his homework assignment. The student hesitates, not only because he thinks it's wrong, but also because he doesn't want to be sanctioned for aiding and abetting. His classmate calms him down: "Nobody will be able to trace my homework to you: I'll be changing the names of all the constants and variables: a to b, x to y, and so on." Not quite convinced, but eager to be left alone, the student hands his completed assignment to the classmate for copying. After the deadline, the student asks: "Did you really change the names of all the variables?" "Sure!" the classmate replies. "When you called a function f, I called it g; when you called a variable x, I renamed it to y; and when you were writing about the log of x+1, I called it the timber of Life is complex: it has both real and imaginary components. Q: How does a mathematician induce good behavior in her children? A: `I've told you n times, I've told you n+1 times...' A mathematician and his best friend, an engineer, attend a public lecture on geometry in thirteen-dimensional space. "How did you like it?" the mathematician wants to know after the talk. "My head's spinning", the engineer confesses. "How can you develop any intuition for thirteen-dimensional space?" "Well, it's not even difficult. All I do is visualize the situation in arbitrary N-dimensional space and then set N = 13." Math problems? Call 1-800-[(10x)(13i)2]-[sin(xy)/2.362x]. A math professor is talking to her little brother who just started his first year of graduate school in mathematics. "What's your favorite thing about mathematics?" the brother wants to know. "Knot theory." "Yeah, me neither." Just some funny jokes I found...! MATH......that is all.
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=14958","timestamp":"2014-04-21T05:42:15Z","content_type":null,"content_length":"18670","record_id":"<urn:uuid:fdb08348-5017-4241-8ea0-21ee04fb45f2>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
Resonant filters So far, the filter designs we've concentrated on have employed either capacitors or inductors, but never both at the same time. We should know by now that combinations of L and C will tend to resonate, and this property can be exploited in designing band-pass and band-stop filter circuits. Series LC circuits give minimum impedance at resonance, while parallel LC (“tank”) circuits give maximum impedance at their resonant frequency. Knowing this, we have two basic strategies for designing either band-pass or band-stop filters. For band-pass filters, the two basic resonant strategies are this: series LC to pass a signal (Figure below), or parallel LC (Figure below) to short a signal. The two schemes will be contrasted and simulated here: Series resonant LC band-pass filter. Series LC components pass signal at resonance, and block signals of any other frequencies from getting to the load. (Figure below) series resonant bandpass filter v1 1 0 ac 1 sin l1 1 2 1 c1 2 3 1u rload 3 0 1k .ac lin 20 50 250 .plot ac v(3) Series resonant band-pass filter: voltage peaks at resonant frequency of 159.15 Hz. A couple of points to note: see how there is virtually no signal attenuation within the “pass band” (the range of frequencies near the load voltage peak), unlike the band-pass filters made from capacitors or inductors alone. Also, since this filter works on the principle of series LC resonance, the resonant frequency of which is unaffected by circuit resistance, the value of the load resistor will not skew the peak frequency. However, different values for the load resistor will change the “steepness” of the Bode plot (the “selectivity” of the filter). The other basic style of resonant band-pass filters employs a tank circuit (parallel LC combination) to short out signals too high or too low in frequency from getting to the load: (Figure below) Parallel resonant band-pass filter. The tank circuit will have a lot of impedance at resonance, allowing the signal to get to the load with minimal attenuation. Under or over resonant frequency, however, the tank circuit will have a low impedance, shorting out the signal and dropping most of it across series resistor R[1]. (Figure below) parallel resonant bandpass filter v1 1 0 ac 1 sin r1 1 2 500 l1 2 0 100m c1 2 0 10u rload 2 0 1k .ac lin 20 50 250 .plot ac v(2) Parallel resonant filter: voltage peaks a resonant frequency of 159.15 Hz. Just like the low-pass and high-pass filter designs relying on a series resistance and a parallel “shorting” component to attenuate unwanted frequencies, this resonant circuit can never provide full input (source) voltage to the load. That series resistance will always be dropping some amount of voltage so long as there is a load resistance connected to the output of the filter. It should be noted that this form of band-pass filter circuit is very popular in analog radio tuning circuitry, for selecting a particular radio frequency from the multitudes of frequencies available from the antenna. In most analog radio tuner circuits, the rotating dial for station selection moves a variable capacitor in a tank circuit. Variable capacitor tunes radio receiver tank circuit to select one out of many broadcast stations. The variable capacitor and air-core inductor shown in Figure above photograph of a simple radio comprise the main elements in the tank circuit filter used to discriminate one radio station's signal from another. Just as we can use series and parallel LC resonant circuits to pass only those frequencies within a certain range, we can also use them to block frequencies within a certain range, creating a band-stop filter. Again, we have two major strategies to follow in doing this, to use either series or parallel resonance. First, we'll look at the series variety: (Figure below) Series resonant band-stop filter. When the series LC combination reaches resonance, its very low impedance shorts out the signal, dropping it across resistor R[1] and preventing its passage on to the load. (Figure below) series resonant bandstop filter v1 1 0 ac 1 sin r1 1 2 500 l1 2 3 100m c1 3 0 10u rload 2 0 1k .ac lin 20 70 230 .plot ac v(2) Series resonant band-stop filter: Notch frequency = LC resonant frequency (159.15 Hz). Next, we will examine the parallel resonant band-stop filter: (Figure below) Parallel resonant band-stop filter. The parallel LC components present a high impedance at resonant frequency, thereby blocking the signal from the load at that frequency. Conversely, it passes signals to the load at any other frequencies. (Figure below) parallel resonant bandstop filter v1 1 0 ac 1 sin l1 1 2 100m c1 1 2 10u rload 2 0 1k .ac lin 20 100 200 .plot ac v(2) Parallel resonant band-stop filter: Notch frequency = LC resonant frequency (159.15 Hz). Once again, notice how the absence of a series resistor makes for minimum attenuation for all the desired (passed) signals. The amplitude at the notch frequency, on the other hand, is very low. In other words, this is a very “selective” filter. In all these resonant filter designs, the selectivity depends greatly upon the “purity” of the inductance and capacitance used. If there is any stray resistance (especially likely in the inductor), this will diminish the filter's ability to finely discriminate frequencies, as well as introduce antiresonant effects that will skew the peak/notch frequency. A word of caution to those designing low-pass and high-pass filters is in order at this point. After assessing the standard RC and LR low-pass and high-pass filter designs, it might occur to a student that a better, more effective design of low-pass or high-pass filter might be realized by combining capacitive and inductive elements together like Figure below. Capacitive Inductive low-pass filter. The inductors should block any high frequencies, while the capacitor should short out any high frequencies as well, both working together to allow only low frequency signals to reach the load. At first, this seems to be a good strategy, and eliminates the need for a series resistance. However, the more insightful student will recognize that any combination of capacitors and inductors together in a circuit is likely to cause resonant effects to happen at a certain frequency. Resonance, as we have seen before, can cause strange things to happen. Let's plot a SPICE analysis and see what happens over a wide frequency range: (Figure below) lc lowpass filter v1 1 0 ac 1 sin l1 1 2 100m c1 2 0 1u l2 2 3 100m rload 3 0 1k .ac lin 20 100 1k .plot ac v(3) Unexpected response of L-C low-pass filter. What was supposed to be a low-pass filter turns out to be a band-pass filter with a peak somewhere around 526 Hz! The capacitance and inductance in this filter circuit are attaining resonance at that point, creating a large voltage drop around C[1], which is seen at the load, regardless of L[2]'s attenuating influence. The output voltage to the load at this point actually exceeds the input (source) voltage! A little more reflection reveals that if L[1] and C[2] are at resonance, they will impose a very heavy (very low impedance) load on the AC source, which might not be good either. We'll run the same analysis again, only this time plotting C[1]'s voltage, vm(2) in Figure below, and the source current, I(v1), along with load voltage, vm(3): Current inceases at the unwanted resonance of the L-C low-pass filter. Sure enough, we see the voltage across C[1] and the source current spiking to a high point at the same frequency where the load voltage is maximum. If we were expecting this filter to provide a simple low-pass function, we might be disappointed by the results. The problem is that an L-C filter has a input impedance and an output impedance which must be matched. The voltage source impedance must match the input impedance of the filter, and the filter output impedance must be matched by “rload” for a flat response. The input and output impedance is given by the square root of (L/C). Z = (L/C)^1/2 Taking the component values from (Figure below), we can find the impedance of the filter, and the required , R[g] and R[load] to match it. For L= 100 mH, C= 1µF Z = (L/C)^1/2=((100 mH)/(1 µF))^1/2 = 316 Ω In Figure below we have added R[g] = 316 Ω to the generator, and changed the load R[load] from 1000 Ω to 316 Ω. Note that if we needed to drive a 1000 Ω load, the L/C ratio could have been adjusted to match that resistance. Circuit of source and load matched L-C low-pass filter. LC matched lowpass filter V1 1 0 ac 1 SIN Rg 1 4 316 L1 4 2 100m C1 2 0 1.0u L2 2 3 100m Rload 3 0 316 .ac lin 20 100 1k .plot ac v(3) Figure below shows the “flat” response of the L-C low pass filter when the source and load impedance match the filter input and output impedances. The response of impedance matched L-C low-pass filter is nearly flat up to the cut-off frequency. The point to make in comparing the response of the unmatched filter (Figure above) to the matched filter (Figure above) is that variable load on the filter produces a considerable change in voltage. This property is directly applicable to L-C filtered power supplies– the regulation is poor. The power supply voltage changes with a change in load. This is undesirable. This poor load regulation can be mitigated by a swinging choke. This is a choke, inductor, designed to saturate when a large DC current passes through it. By saturate, we mean that the DC current creates a “too” high level of flux in the magnetic core, so that the AC component of current cannot vary the flux. Since induction is proportional to dΦ/dt, the inductance is decreased by the heavy DC current. The decrease in inductance decreases reactance X[L]. Decreasing reactance, reduces the voltage drop across the inductor; thus, increasing the voltage at the filter output. This improves the voltage regulation with respect to variable loads. Despite the unintended resonance, low-pass filters made up of capacitors and inductors are frequently used as final stages in AC/DC power supplies to filter the unwanted AC “ripple” voltage out of the DC converted from AC. Why is this, if this particular filter design possesses a potentially troublesome resonant point? The answer lies in the selection of filter component sizes and the frequencies encountered from an AC/DC converter (rectifier). What we're trying to do in an AC/DC power supply filter is separate DC voltage from a small amount of relatively high-frequency AC voltage. The filter inductors and capacitors are generally quite large (several Henrys for the inductors and thousands of µF for the capacitors is typical), making the filter's resonant frequency very, very low. DC of course, has a “frequency” of zero, so there's no way it can make an LC circuit resonate. The ripple voltage, on the other hand, is a non-sinusoidal AC voltage consisting of a fundamental frequency at least twice the frequency of the converted AC voltage, with harmonics many times that in addition. For plug-in-the-wall power supplies running on 60 Hz AC power (60 Hz United States; 50 Hz in Europe), the lowest frequency the filter will ever see is 120 Hz (100 Hz in Europe), which is well above its resonant point. Therefore, the potentially troublesome resonant point in a such a filter is completely avoided. The following SPICE analysis calculates the voltage output (AC and DC) for such a filter, with series DC and AC (120 Hz) voltage sources providing a rough approximation of the mixed-frequency output of an AC/DC converter. AC/DC power suppply filter provides “ripple free” DC power. ac/dc power supply filter v1 1 0 ac 1 sin v2 2 1 dc l1 2 3 3 c1 3 0 9500u l2 3 4 2 rload 4 0 1k .dc v2 12 12 1 .ac lin 1 120 120 .print dc v(4) .print ac v(4) v2 v(4) 1.200E+01 1.200E+01 DC voltage at load = 12 volts freq v(4) 1.200E+02 3.412E-05 AC voltage at load = 34.12 microvolts With a full 12 volts DC at the load and only 34.12 µV of AC left from the 1 volt AC source imposed across the load, this circuit design proves itself to be a very effective power supply filter. The lesson learned here about resonant effects also applies to the design of high-pass filters using both capacitors and inductors. So long as the desired and undesired frequencies are well to either side of the resonant point, the filter will work OK. But if any signal of significant magnitude close to the resonant frequency is applied to the input of the filter, strange things will happen! • REVIEW: • Resonant combinations of capacitance and inductance can be employed to create very effective band-pass and band-stop filters without the need for added resistance in a circuit that would diminish the passage of desired frequencies. Related Links
{"url":"http://www.allaboutcircuits.com/vol_2/chpt_8/6.html","timestamp":"2014-04-21T12:50:09Z","content_type":null,"content_length":"26645","record_id":"<urn:uuid:223acbac-eff5-48ec-98d8-8a11927228af>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
ACC 560 WeeK 9 Quiz 12 1. Capital budgeting decisions usually involve large investments and often have a significant impact on a company's future profitability. 2. The capital budgeting committee ultimately approves the capital expenditure budget for the year. 3. For purposes of capital budgeting, estimated cash inflows and outflows are preferred for inputs into the capital budgeting decision tools. 4. The cash payback technique is a quick way to calculate a project's net present value. 5. The cash payback period is computed by dividing the cost of the capital investment by the net annual cash inflow. 6. The cash payback method is frequently used as a screening tool but it does not take into consideration the profitability of a project. 7. The cost of capital is a weighted average of the rates paid on borrowed funds, as well as on funds provided by investors in the company's stock. 8. Using the net present value method, a net present value of zero indicates that the project would not be acceptable. 9. The net present value method can only be used in capital budgeting if the expected cash flows from a project are an equal amount each year. 10. By ignoring intangible benefits, capital budgeting techniques might incorrectly eliminate projects that could be financially beneficial to the company. 11. To avoid accepting projects that actually should be rejected, a company should ignore intangible benefits in calculating net present value. 12. One way of incorporating intangible benefits into the capital budgeting decision is to project conservative estimates of the value of the intangible benefits and include them in the NPV 13. The profitability index is calculated by dividing the total cash flows by the initial investment. 14. The profitability index allows comparison of the relative desirability of projects that require differing initial investments. 15. Sensitivity analysis uses a number of outcome estimates to get a sense of the variability among potential returns. 16. A well-run organization should perform an evaluation, called a post-audit, of its investment projects before their completion. 17. Post-audits create an incentive for managers to make accurate estimates, since managers know that their results will be evaluated. 18. A post-audit is an evaluation of how well a project's actual performance matches the projections made when the project was proposed. 19. The internal rate of return method is, like the NPV method, a discounted cash flow technique. 20. The interest yield of a project is a rate that will cause the present value of the proposed capital expenditure to equal the present value of the expected annual cash inflows. 21. Using the internal rate of return method, a project is rejected when the rate of return is greater than or equal to the required rate of return. 22. Using the annual rate of return method, a project is acceptable if its rate of return is greater than management's minimum rate of return. 23. The annual rate of return method requires dividing a project's annual cash inflows by the economic life of the project. 24. A major advantage of the annual rate of return method is that it considers the time value of money. 25. An advantage of the annual rate of return method is that it relies on accrual accounting numbers rather than actual cash flows. 26. The capital budget for the year is approved by a company's a. board of directors. b. capital budgeting committee. c. officers. d. stockholders. 27. All of the following are involved in the capital budgeting evaluation process except a company's a. board of directors. b. capital budgeting committee. c. officers. d. stockholders. 28. Most of the capital budgeting methods use a. accrual accounting numbers. b. cash flow numbers. c. net income. d. accrual accounting revenues. 29. The first step in the capital budgeting evaluation process is to a. request proposals for projects. b. screen proposals by a capital budgeting committee. c. determine which projects are worthy of funding. d. approve the capital budget. 30. The capital budgeting decision depends in part on the a. availability of funds. b. relationships among proposed projects. c. risk associated with a particular project. d. all of these. 31. Capital budgeting is the process a. used in sell or process further decisions. b. of determining how much capital stock to issue. c. of making capital expenditure decisions. d. of eliminating unprofitable product lines. 32. Net annual cash flow can be estimated by a. deducting credit sales from net income. b. adding depreciation expense to net income. c. deducting credit purchases from net income. d. adding advertising expense to net income. 33. Which of the following is not a typical cash flow related to equipment purchase and replacement decisions? a. Increased operating costs b. Overhaul of equipment c. Salvage value of equipment when project is complete d. Depreciation expense 34. Capital expenditure proposals are initially screened by the a. board of directors. b. executive committee. c. capital budgeting committee. d. stockholders. 35. Capital budgeting decisions depend in part on all of the following except the a. relationships among proposed projects. b. profitability of the company. c. company’s basic decision making approach. d. risks associated with a particular project. 36. The corporate capital budget authorization process consists of how many steps? a. 4 b. 3 c. 2 d. 1 37. Which of the following is not a capital budgeting decision? a. Constructing new studios b. Replacing old equipment c. Scrapping obsolete inventory d. Remodeling an office building 38. Which of the following is a disadvantage of the cash payback technique? a. It is difficult to calculate b. It relies on the time value of money c. It can only be calculated when there are equal annual net cash flows d. It ignores the expected profitability of a project 39. The payback period is often compared to an asset’s a. estimated useful life. b. warranty period. c. net present value. d. internal rate of return. 40. Which of the following ignores the time value of money? a. Internal rate of return b. Profitability index c. Net present value d. Cash payback 41. Brady Corp. is considering the purchase of a piece of equipment that costs $20,000. Projected net annual cash flows over the project’s life are: Year Net Annual Cash Flow 1 $ 3,000 2 8,000 3 15,000 4 9,000 The cash payback period is a. 2.29 years. b. 2.60 years. c. 2.40 years. d. 2.31 years. 42. Bradshaw Inc. is contemplating a capital investment of $88,000. The cash flows over the project’s four years are: Expected Annual Expected Annual Year Cash Inflows Cash Outflows 1 $30,000 $12,000 2 45,000 20,000 3 60,000 25,000 4 50,000 30,000 The cash payback period is a. 3.59 years. b. 3.50 years. c. 2.37 years. d. 3.20 years. 43. Jordan Company is considering the purchase of a machine with the following data: Initial cost $150,000 One-time training cost 12,000 Annual maintenance costs 15,000 Annual cost savings 75,000 Salvage value 20,000 The cash payback period is a. 2.70 years. b. 2.50 years. c. 2.37 years. d. 2.17 years. 44. If project A has a lower payback period than project B, this may indicate that project A may have a a. lower NPV and be less profitable. b. higher NPV and be less profitable. c. higher NPV and be more profitable. d. lower NPV and be more profitable. 45. Which of the following does not consider a company’s required rate of return? a. Net present value b. Internal rate of return c. Annual rate of return d. Cash payback 46. The cash payback technique a. considers cash flows over the life of a project. b. cannot be used with uneven cash flows. c. is superior to the net present value method. d. may be useful as an initial screening device. 47. If an asset costs $240,000 and is expected to have a $40,000 salvage value at the end of its ten-year life, and generates annual net cash inflows of $40,000 each year, the cash payback period is a. 7 years. b. 6 years. c. 5 years. d. 4 years. 48. If a payback period for a project is greater than its expected useful life, the a. project will always be profitable. b. entire initial investment will not be recovered. c. project would only be acceptable if the company's cost of capital was low. d. project's return will always exceed the company's cost of capital. 49. The cash payback technique a. should be used as a final screening tool. b. can be the only basis for the capital budgeting decision. c. is relatively easy to compute and understand. d. considers the expected profitability of a project. 50. The cash payback period is computed by dividing the cost of the capital investment by the a. annual net income. b. net annual cash inflow. c. present value of the cash inflow. d. present value of the net income. 157 Questions Answered
{"url":"http://homework-aid.com/ACC-560-WeeK-9-Quiz-12-705.htm","timestamp":"2014-04-21T14:40:27Z","content_type":null,"content_length":"74851","record_id":"<urn:uuid:2f1d0b90-bb8f-4517-b2b5-9760c9beade4>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
Heartland Math Teacher's Circle Why Should Kids Have All The Fun? Rediscover the Thrill of Learning Mathematics Summer Workshop Questions? June 24-28, 2013 Contact Mark Brown 8:30 a.m.-5:00 p.m. Monday-Thursday mabrown@mnu.edu 8:30 a.m.-12:00 p.m. Friday 913-971-3663 MNU Campus Olathe, Kansas What is a Math Teachers’ Circle? A Math Teachers’ Circle is a place where middle school math teachers and mathematicians come together to explore problem solving approaches with interesting and fun math problems and to share classroom experiences and successes. The cost of the workshop is only $50, thanks to the PERK grant. For those interested in earning three hours continuing education credit, the cost is an additional $255 ($85 per credit hour). Our Mission: Rediscover the thrill of mathematics! Mathematics can and should be enjoyable for all. Join the MidAmerica Math Teachers’ Circle to experience the thrill and fun of working on mathematics with a supportive group. Participate in a process of learning that puts the enjoyment back in mathematics through the investigation of intriguing problems. The Heartland Math Teachers’ Circle at MNU provides guidance and support for promoting critical thinking and problem solving in the classroom. Participant Benefits Conference Overview → Expand your mathematical awareness and enrich → 5-day workshop your problem solving skills. → Collaborative mathematical problem solving → Meet with other middle school math teachers and → Daily connections to the common core share ideas. → Breakfast, lunch and snacks provided → Make connections to the Common Core. → Ample opportunity for networking and social time → Spread the infectious love of mathematics!!! → One evening banquet → Earn 3 hours of Continuing Education Credits. Sponsored by: MidAmerica Nazarene Universtiy Preparing Educators for Rural Kansas (PERK) American Institute of Mathematics Math Teacher's Circle
{"url":"https://www.mnu.edu/grants/grant-deadlines-calendar/119-default/121-student-activities/1861-men-s-intramural-basketball.html?tmpl=component","timestamp":"2014-04-17T17:16:28Z","content_type":null,"content_length":"6052","record_id":"<urn:uuid:5bd1865a-85e4-4840-888b-5b5967060f11>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
Applications of fuzzy regression in actuarial In this article, we propose several applications of fuzzy regression techniques for actuarial problems. Our main analysis is motivated, on the one hand, by the fact that several articles in the financial and actuarial literature suggest using fuzzy numbers to model interest rate uncertainty but do not explain how to quantify these rates with fuzzy numbers. Likewise, actuarial literature has recently focused some of its attention in analyzing the Term Structure of Interest Rates (TSIR) because this is a key instrument for pricing insurance contracts. With these two ideas in mind, we show that fuzzy regression is suitable for adjusting the TSIR and discuss how to apply a fuzzy TSIR when pricing life insurance contracts and property-liability policies. Finally, we reflect on other actuarial applications of fuzzy regression and develop with this technique the London Chain Ladder Method for obtaining Incurred But Not Reported Reserves. To obtain the financial price of an insurance contract and in general the price of any other asset, we have to discount the cash flows that the asset produces throughout its life. We therefore need to know the discount rates that must be applied at each moment. The reference values of these discount rates are those interest rates that are free of default risk, i.e., those that correspond to public debt bonds. Of course, this is especially true in an actuarial pricing context because the profit to the insurer must be in accordance with the return from the insurer's investments of the premiums, and some of the premiums are invested in public debt securities. So, predicting the evolution of the default-free interest rate is a crucial question in actuarial pricing. This explains why yield curve analysis has become an important topic in actuarial science. Babbell and Merrill (1996) and Ang and Sherris (1997) provided a wide survey of Term Structure of Interest Rate (TSIR) models derived from the contingent claims theory, whereas Yao (1999) discussed the asymptotic properties of the rates fitted with some of these models, bearing in mind actuarial pricing. Delbaen and Lorimier (1992) used a nonparametric method based on quadratic programming to fit the short-term yield curve, whereas Carriere (1999) proposed combining bootstrapping and spline functions to estimate long-term yield rates embedded in the TSIR. However, many actuarial analyses are concerned with the medium and long term and, in our opinion, modeling the behavior of interest rates in the long term by means of a stochastic model is not very realistic. As Gerber (1995) pointed out, there is no commonly accepted stochastic model for predicting long-term discount rates. Fuzzy Sets Theory (FST) has been used successfully in insurance problems that require much actuarial subjective judgment and those for which measuring the embedded variables is difficult. Lemaire (1990) applied fuzzy logic to underwriting and reinsurance decisions whereas Cummins and Derrig (1993) used fuzzy decision to evaluate several econometric methods of claim cost forecasting. Derrig and Ostaszewski (1995) showed that fuzzy clustering methods are suitable for risk classification and Young (1996) applied fuzzy reasoning to insurance rate decisions. Therefore, and given that there are only vague data or data ill related with the behavior of future discount rates to predict them (e.g., the price of fixed-income securities or the opinions of "experts" about the future behavior of macroeconomic magnitudes), many authors think that it is often more suitable and realistic to make financial analyses in the long term with yields quantified with fuzzy numbers. For financial analysis, see Kaufmann (1986), Buckley (1987), or Li Calzi (1990), whereas in actuarial literature, see Lemaire (1990), Ostaszewski (1993), or Terceno et al. (1996) in a life insurance context, and articles by Cummins and Derrig (1997) and Derrig and Ostaszewski (1997) on the financial analysis of property-liability insurance. However, these articles do not explain in great detail how to estimate the discount rates with fuzzy sets. They usually suggest that "the rates are estimated subjectively by the experts using fuzzy numbers" but offer no more explanation. In this article, we propose a solution to this problem. This involves estimating the TSIR with fuzzy sets, since the TSIR implicitly contains the expectations of the fixed income market agents (i.e., the experts) regarding the evolution of the future interest rates. Our results will be similar to those of Carriere (1999). His method obtains an estimate of the TSIR with probabilistic confidence intervals, whereas ours describes the yield curve as fuzzy confidence intervals. We also discuss how to use our fuzzy TSIR to price life-insurance contracts and property-liability policies. We would like to point out that using a fuzzy TSIR to price insurance policies was initially suggested in Ostaszewski (1993). Another aim of this article is to suggest other actuarial applications of fuzzy regression. We have therefore developed the method for obtaining Incurred But Not Reported Reserves proposed by Benjamin and Eagles (1986) with fuzzy regression methods. We also discuss how fuzzy regression can help us with trending claim costs and with premium rating from the CAPM perspective. The structure of the article is as follows. In the next section we describe some basic aspects of fuzzy arithmetic and fuzzy regression. In "Estimating the TSIR With Fuzzy Methods" we propose a method for obtaining a fuzzy TSIR based on fuzzy regression, apply our method to the Spanish public debt market, and compare our results with those of standard econometric methods. In "Using a Fuzzy TSIR for Financial and Actuarial Pricing" we discuss how our fuzzy TSIR can be used in actuarial pricing. In "Discussing Further Actuarial Applications of Fuzzy Regression" we suggest further applications of fuzzy regression to insurance problems. Basics of FST and Fuzzy Numbers FST is constructed from the concept of fuzzy subset. A fuzzy subset A is a subset defined over a reference set X for which the level of membership of an element x [member of] X to A accepts values other than 0 or 1 (absolute nonmembership or absolute membership). A fuzzy subset A can therefore be defined as A = {(x, [[mu].sub.A](x)) | x [member of] X}, where [[mu].sub.A](x) is called the membership function and is a mapping [[mu].sub.A]: X [right arrow] [0, 1]. So, an element x has its image within [0, 1], where 0 indicates nonmembership to the fuzzy subset A and 1 indicates absolute membership. Alternatively, a fuzzy subset A can be represented by its level sets [alpha] or [alpha]-cuts. An [alpha]-cut is an ordinary (crisp) set containing elements whose membership level is at least [alpha]. For a fuzzy subset A, we will name an [alpha]-cut with [A.sub.[alpha]] being its mathematical expression: [A.sub.[alpha] = {x [member of] X | [[mu].sub.A](x) [greater than or equal to] [alpha]}, 0 [less than or equal to] [alpha] [less than or equal to] 1. A fuzzy number (FN) is a fuzzy subset A defined over the real numbers (X is the set R). It is the main instrument of FST for quantifying uncertain or imprecise magnitudes (e.g., the discount rates in financial mathematics). Two other conditions are required for an FN. First, it must be a normal fuzzy set, i.e., it exists at least one x [member of] X such that [[mu].sub.A](x) = 1. Second, it must be convex (i.e., its [alpha]-cuts must be convex sets in the real numbers). Figure 1 shows the shape of an FN. [FIGURE 1 OMITTED] The most widely used FNs are triangular fuzzy numbers (TFNs) because they are easy to use and can be interpreted intuitively. (1) To construct a TFN named A, we must establish its center, a unique value [a.sub.C] (i.e., [a.sub.C] = [a.sub.2] = [a.sub.3] in Figure 1), and the deviations from there that we consider reasonable, i.e., its left spread, [l.sub.A]; and its right spread, [r.sub.A]. A TFN (2) will be denoted as A = ([a.sub.C], [l.sub.A], [r.sub.A]). Its membership function, [[mu].sub.A](x), is given by linear functions and its [alpha]-cuts, [A.sub.[alpha]], are confidence intervals where its extremes are also done by linear functions. So In fuzzy regression, the symmetrical TFNs (STFNs) are widely used. These are TFNs where [l.sub.A] = [r.sub.A] = [a.sub.R] and we will denote them as A = ([a.sub.C], [a.sub.R]). The membership function and [alpha]-cuts of an STFN A are Figures 2 and 3 show the shape of a TFN and an STFN, respectively. [FIGURES 2-3 OMITTED] To develop our article we need to know the level of inclusion of an FN B within another FN A, [mu] (B [subset or equal to] A). If the [alpha]-cuts of these FNs are [A.sub.[alpha]] = [[A.sup.1] ([alpha]), [A.sup.2]([alpha])] and [B.sub.[alpha]] = [[B.sup.1]([alpha]), [B.sup.2]([alpha])], then [mu](B [subset or equal to] A) [greater than or equal to] [alpha], if [B.sub.[alpha]] [subset or equal to] [A.sub.[alpha]], i.e., if (3) [A.sup.1]([alpha]) [less than or equal to] [B.sup.1]([alpha]) and [A.sup.2]([alpha]) [greater than or equal to] [B.sup.2]([alpha]). For example, in Figure 4, [mu] (B [subset or equal to] A) [greater than or equal to] 0.4. [FIGURE 4 OMITTED] We also need to establish when one FN is greater than another. (3) Ramik and Rimanek (1985) suggested that B is greater or equal to A with a membership level of at least [alpha], [mu](B [greater than or equal to] A) [greater than or equal to] [alpha], if (4) [A.sup.1]([alpha]) [less than or equal to] [B.sup.1]([alpha]) and [A.sup.2]([alpha]) [less than or equal to] [B.sup.2] ([alpha]). For example, in Figure 4, [mu] (B [greater than or equal to] A) [greater than or equal to] 1 and [mu](A [greater than or equal to] B) [greater than or equal to] 0.4. In actuarial analysis we often need to evaluate functions (e.g., the net present value), which in a general way we shall symbolize as y = f ([x.sub.1], [x.sub.2], ..., [x.sub.n])--e.g., [x.sub.1], [x.sub.2], ..., [x.sub.n-1] may be the cash flows and [x.sub.n] the discount rate. Then, if [x.sub.1], [x.sub.2], ..., [x.sub.n] are not given by crisp numbers but by the FNs [A.sub.1], [A.sub.2], ..., [A.sub.n] (i.e., to calculate the net present value we know the cash flows and the discount rate imprecisely), when evaluating f(*) we will obtain an FN B, B = f([A.sub.1], [A.sub.2], ..., [A.sub.n]). To determine the membership function of B, [[mu].sub.B](y), we must apply the Zadeh's extension principle, exposed in the seminal paper by Zadeh (1965). In a similar way as when handling arithmetically random variables, we obtain the membership function of the result when operating with FNs by convoluting membership functions of the FNs [A.sub.1], [A.sub.2], ..., [A.sub.n] (for random variables, the density functions). The difference is that random variables are convoluted using operators sum-product, whereas with FN they are convoluted using operators max-min (max instead of sum and rain instead of product). Mathematically, Unfortunately, it is often impossible to obtain a closed expression for the membership function of B (this problem often arises when handling random variables). However, we may be able to obtain its [alpha]-cuts, [B.sub.[alpha]], from [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] as In actuarial mathematics, many functional relationships are continuously increasing or decreasing with respect to every variable in such a way that it is easy to evaluate the [alpha]-cuts of B. Buckley and Qu (1990b) demonstrated that if the function f(*) that induces B is increasing with respect to the first m variables, where m [less than or equal to] n, and decreasing with respect to the last n-m variables, [B.sub.[alpha]] is (7) [B.sub.[alpha]] = [[B.sup.1] ([alpha]), [B.sup.2] ([alpha])] = [f ([A.sup.1.sub.1] ([alpha]), ..., [A.sup.1.sub.m] ([alpha]), [A.sup.2.sub.m+1], ([alpha]), ..., [A.sup.2.sub.n] ([alpha])), f ([A.sup.2.sub.1] ([alpha]), ..., [A.sup.2.sub.m] ([alpha]), [A.sup.1.sub.m+1] ([alpha]), ..., [A.sup.sub.n] ([alpha]))]. Some operations with STFN are easy to solve. For instance, if we multiply A = ([a.sub.C], [a.sub.R]) by a real number k, B = k A, the result is B = ([b.sub.C], [b.sub.R]) = ([ka.sub.C], [absolute value of k] [a.sub.R]). The sum of two (4) STFNs, C = A + B, is also an STFN. Concretely, C =([c.sub.C], [c.sub.R]) = ([a.sub.C], [a.sub.R]) + ([b.sub.C], [b.sub.R]) = ([a.sub.C], + [b.sub.C], [a.sub.R] + [b.sub.R]). So, if the FN B is obtained from a linear combination of the STFNs [A.sub.i] = ([a.sub.iC], [a.sub.iR), i = 1, ..., n, i.e., B = [[summation of].sup.n.sub.(i=1)] [k.sub.i], [A.sub.i] where [k.sub.i] [member of] R, B will be an STFN, B = ([b.sub.C], [b.sub.R]) where (8) ([b.sub.C], [b.sub.R]) = ([k.sub.1] * [a.sub.1C] + [k.sub.2] * [a.sub.2C] + ... + [k.sub.n] * [a.sub.nC], [[absolute value of k].sub.1] * [a.sub.1R] + [[absolute value of k].sub.2] * [a.sub.2R] + ... + [[absolute value of k].sub.n] * [a.sub.nR]). Unfortunately, the result of a nonlinear operation with STFNs is not an STFN. So, if we evaluate B = f ([A.sub.1], [A.sub.2], ..., [A.sub.n]), where we suppose that [A.sub.i] = ([a.sub.iC], [a.sub.iR]) [for all] i, B is often not an STFN despite the characteristics of [A.sub.1], [A.sub.2], ..., [A.sub.n]. Even so, Dubois and Prade (1993) showed that if f(*) is increasing with respect to the first m variables, where m [less than or equal to] n, and decreasing with respect to the others, B can be estimated well by B' = ([b.sub.C], [b.sub.R]), So, for B = [A.sup.k] where k [member of] R, if A = ([a.sub.C], [a.sub.R]), from (9) we obtain: (10) B [approximately equal to] ([b.sub.C], [b.sub.R]) = [(([a.sub.C].sup.k], [absolute value of k][([a.sub.C]).sup.k-1][a.sub.R]) while for C = A * B where k [member of] R, if A = ([a.sub.C], [a.sub.R]) and B = ([b.sub.C], [b.sub.R]), from (9) we obtain (11) C [approximately equal to] ([c.sub.C], [c.sub.R]) = ([a.sub.C] * [b.sub.C], [a.sub.C] * [b.sub.R] + [b.sub.C] * [a.sub.R]). Tanaka and Ishibuchi's Fuzzy Regression Model The fuzzy regression model developed in Tanaka (1987) and Tanaka and Ishibuchi (1992) is one of the most widely used models in fuzzy literature for economic applications. (5) Like any regression technique, the aim of fuzzy regression is to determine a functional relationship between a dependent variable and a set of independent ones. Fuzzy regression allows to obtain functional relationships when independent variables, dependent variables, or both, are not crisp values but confidence intervals. As in econometric linear regression, we shall suppose that the explained variable is a linear combination of the explanatory variables. This relationship should be obtained from a sample of n observations {([Y.sub.1], [X.sub.1]), ([Y.sub.2], [X.sub.2]), ..., ([Y.sub.j], [X.sub.j]), ..., ([Y.sub.n], [X.sub.n])} where [X.sub.j] is the jth observation of the explanatory variable, [X.sub.j] = ([X.sub.0j], [X.sub.1j], [X.sub.2j], ..., [X.sub.1j], ..., [X.sub.mj]). Moreover, [X.sub.0j] = 1 [for all] j, and [X.sub.ij] is the observed value for the ith variable in the jth case of the sample. [Y.sub.j] is the jth observation of the explained variable, j = 1, 2, ..., n. The jth observation may either be a crisp value or a confidence interval, in either case, it can be represented through its center and its spread or radius as [Y.sub.j] = <[Y'.sub.jC], [Y'.sub.jR]>, where [Y'.sub.jC] is the center and [Y'.sub.jR] is the radius. Similarly, we suppose that the jth observation for the dependent variable is an [[alpha].sup.*]-cut of the FN it arises from where [[alpha].sup.*] may be stated previously by the decision maker. Also, the FN that quantifies the jth observation of the dependent variable is an STFN that we will write as [Y.sub.j] = ([Y.sub.jC], [Y.sub.jR]). Therefore, since the [[alpha].sup.*]-cut of [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is (see (2)): (12) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] the center and spread of [Y.sub.j] can be obtained from its [[alpha].sup.*]-cut taking into account (2) as (13) [Y'.sub.jC] = [Y.sub.jC] and [Y'.sub.jR] = [Y.sub.jR](1 - [[alpha].sup.*]) [??] [Y.sub.jR] = [Y'.sub.jR]/(1 - [[alpha].sup.*] and we must estimate the following fuzzy linear function (14) [Y.sub.j] = [A.sub.0] + [A.sub.1][X.sub.1j] + ... + [A.sub.m][X.sub.mj]. In this model of fuzzy regression, the disturbance is not introduced as a random addend in the linear relation but is incorporated into the coefficients [A.sub.i], i = 0, 1, ..., m. Of course, the final objective is to adjust the fuzzy numbers [A.sub.i] which estimate [A.sub.i] from the available sample. Given the characteristics of [Y.sub.j], the parameters [A.sub.i], i = 0, 1, 2, ..., m must be STFNs. These parameters can therefore be written as [A.sub.i] = ([a.sub.iC], [a.sub.iR]), i = 0, 1, ..., m. The main objective is to estimate every FN [A.sub.i], [A.sub.i] = ([a.sub.iC], [a.sub.iR]). When we have obtained [A.sub.i] the estimates of [Y.sub.j] = ([Y.sub.jC], [Y.sub.jR]), will be (15) [Y.sub.j] = [A.sub.0] + [A.sub.1][X.sub.1j] + ... + [A.sub.m][X.sub.mj]. Therefore, [Y.sub.j] is obtained from (8): (16) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] whose [alpha]-cuts for a level [[alpha].sup.*] are (17) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] The parameters [a.sub.iC] and [a.sub.iR], must minimize the spreads of [Y.sub.j], and simultaneously maximize the congruence of [Y.sub.j] with [Y.sub.j], which is measured as [mu]([Y.sub.j] [subset or equal to] [Y.sub.j]). Specifically, we must solve the following multiple objective program: (18a) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] subject to (18b) [mu]([Y.sub.j] [subset or equal to] [Y.sub.j] [greater than or equal to] [alpha] j = 1, 2, ..., n; [a.sub.iR] [greater than or equal to] 0 i = 0, 1, ..., m, [alpha] [member of] [0, 1]. If for the second objective we require a minimum accomplishment level [[alpha].sup.*], i.e., the level that the decision maker considers that <[Y'.sub.jC], [Y'.sub.jR]>, j = 1, 2, ..., n, has been obtained, the above program is transformed into the following linear one (19a) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] subject to (19b) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] The first block of constraints in Equation (19b) is a consequence of the requirement that [mu](Y.sub.j] [subset or equal to] [Y.sub.j] [greater than or equal to] [[alpha].sup.*], which must be implemented by taking into account (3), (12), and (17). With the last block of constraints in Equation (19b) we ensure that [a.sub.iR] [for all] i will be nonnegative. In our opinion, fuzzy regression techniques have a number of advantages over traditional regression techniques: (a) The estimates obtained after adjusting the coefficients are not random variables, which are difficult to manipulate in arithmetical operations, but fuzzy numbers, which are easier to handle arithmetically using [alpha]-cuts. So, when starting from magnitudes estimated by random variables (e.g., from a least squares regression), these random variables are often reduced to their mathematical expectation (which may or may not be corrected by its variance) to make them easier to handle. As we have already pointed out, this loss of information does not necessarily take place when we operate with FNs. (b) When investigating economic or social phenomena, the observations are a consequence of the interaction between the economic agents' beliefs and expectations, which are highly subjective and vague. A good way to treat this kind of information is therefore with FST. For example, the asset prices that are determined in the markets are due to the agents' expectations of future inflation and the issuers' credibility. We think that it is more realistic to consider that the bias between the observed value of the dependent variable and its theoretical value (the error) is not random but fuzzy. At least in this way we assume that the analyzed phenomena have a large subjective component. (c) The observations are often not crisp numbers but confidence intervals. For instance, the price of one financial asset throughout one session often oscillates within an interval and is rarely unique (e.g., it can oscillate within [$100, $105]). To be able to use econometric methods, the observations for the explained variable and/or the explanatory variable must be represented by a single value (e.g., $102.5 for [$100, $105]), which involves losing a great deal of information. However, fuzzy regression does not necessarily reduce each variable to a crisp number, i.e., all the observed values can be used in the regression analysis. Estimating the TSIR With Conventional Econometric Methods Estimating the TSIR for a concrete date and a given market is fairly straightforward if there are many sufficiently liquid zero coupon bonds and their price can be observed without perturbations. However, fixed-income markets rarely enjoy these conditions simultaneously. This subsection describes the essence of a family of methods for estimating the discount function associated to the TSIR with econometric methods. They can be used if the sample is made up entirely of zero coupon bonds, entirely of bonds with coupon or, as is usual, if it is made up of both types of bonds. These methods start from the fact that the rth bond, where r = 1, 2, ..., k, in which k is the number of available bonds, provides several cash flows (coupons and principal). These are denoted by [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], where [C.sup.r.sub.i] is the amount of the ith cash flow and [t.sup.r.sub.i] is its maturity in years. If we suppose that the default-free bonds do not include any option (i.e., they are not convertible or callable, etc.), the price of the rth bond is therefore the sum of the discounted value of every coupon and the principal with the corresponding spot rate (20) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] where [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is the discounted value of one dollar with maturity [t.sup.r.sub.i] years. Of course, subsequently we should define a form for the discount function to specify the econometric equation to be estimated. Our proposal is based on the methods that use splines (piecewise functions) to model the discount function. The best known methods are those in McCulloch's articles (1971, 1975) (quadratic splines and cubic splines) and in Vasicek and Fong's article (1982) (exponential splines). These methods suppose that the discount function is a linear combination of m + 1 functions of time. Therefore (21) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] From (20) and (21) we can deduce that the following linear equation must be estimated (22) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] We assume that [g.sub.j](t) are splines and not simply polynomial functions because with splines we can determine [g.sub.j](t) according to the distribution of the maturity dates of the sample and, therefore, fit the discount function better for the most common maturities. Moreover, with splines we can obtain TSIR profiles that do not fluctuate very much and forward rates that do not behave A random disturbance is justified because several factors disturb the formation of the prices of fixed-income securities. Chambers, Carleton, and Waldman (1984) point out some of them; for example, the coupon-bearing of bonds with maturities greater than one contain information about more than one present value coefficient; the bond portfolios are not continuously rebalanced, so at any moment each bond can deviate by some (presumably random) amount or there is no single price for each bond, which implies that there is some inherent imprecision to the concept of a single price. We will now present our fuzzy method for estimating the TSIR. We will first establish a hypothesis on which to construct a method for estimating the TSIR that uses fuzzy regression from the analyzed framework. We will then discuss how to estimate the TSIR and the forward rates using fuzzy numbers and make an empirical application. As we stated above, we will suppose that the bonds in our analysis are default-free and only produce a stream of payments (coupons and principal or only principal if they are zero coupon bonds) and that they do not have any embedded option. Also, as the price of one bond we will take all the prices traded on 1 day, rather than an average price. Hypothesis 1: The price of the rth bond in a session is an STFN. This price will be written as [P.sup.r] where (23) [P.sup.r] = ([P.sup.r.sub.C], [P.sup.r.sub.R]) [P.sup.r.sub.C], [P.sup.r.sub.R] [greater than or equal to] 0, r = 1, 2, ..., k. This hypothesis considers the price of a bond on i day to be "approximately [P.sub.C]," and not exactly [P.sub.C]. We think that this is more suitable because over a session it is usual to negotiate more than one price for the same bond (one for each trade). However, if these prices are unique then [P.sup.r.sub.R] = 0. Hypothesis 2: The observed price for each security is an [alpha]-cut of the FN that quantifies that price for a predefined [[alpha].sup.*]. Its inferior and upper extremes are the minimum and maximum price of the bond over the session. Therefore, this interval will be expressed through its center and spread as <[P'.sup.r.sub.C], [P'.sup.r.sub.R]> For example, if the traded price for one bond on 1 day has fluctuated between 100 and 103, it will be expressed as <101.5, 1.5>. Similarly, from these parameters we can obtain the center and the spread of its corresponding FN, (23), taking into account (13): (24) [P'.sup.r.sub.C] = [P.sup.r.sub.C] y [P'.sup.r.sub.R] = (1 - [[alpha].sup.*]) [P.sup.r.sub.R] [??] [P.sup.r.sub.R] = [P'.sup.r.sub.R] /(1 - [[alpha].sup.*]), 0 [less than or equal to] [[alpha].sup.*] [less than or equal to] 1, r = 1, 2, ..., k. Hypothesis 3: The discount function is quantified via an FN that depends on the time. So, for a given maturity t, the discount function is the following STFN: (25) [f.sub.t] = ([f.sub.tC], [f.sub.tR]), t > 0, 0 [less than or equal to] [f.sub.tC] - [f.sub.tR] [less than or equal to] [f.sub.tC] + [f.sub.tR] [less than or equal to] 1. The price of the rth bond, (23), can therefore be written from (20) as (26) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] and combining (25) and (26) we obtain (27) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] So, the observed [[alpha].sup.*]-cut for the price of the rth bond, is (28) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] with [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] Hypothesis 4: The discount function, (25), can be approximated from a linear combination of m + 1 functions [g.sub.j](t), j = 0, 1, ..., m with image in [R.sup.+] that are continuously differentiable, and whose parameters are given by STFN. In this way, these parameters can be represented as (29) [a.sub.j] = ([a.sub.jC], [a.sub.jR]), [a.sub.jR] [greater than or equal to] 0, j = 0, 1, ..., m and so the discount function is obtained from (21), (29) and using (8): (30) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] Then, using (27) and (30), the price of the rth bond can be expressed by (31) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] and the [[alpha].sup.*]-cut of the rth bond price, (28), is now (32) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] with [a'.sub.jC] = [a.sub.jC] and [a'.sub.jR] = (1 - [[alpha].sup.*])[a.sub.jR], 0 [less than or equal to] [[alpha].sup.*] [less than or equal to] 1, r = 1, 2, ..., k. Adjusting the Discount Function Using a Fuzz Regression Model Since the value of the discount function for t = 0 should be 1, [f.sub.0] = ([f.sub.0C], [f.sub.0R]) = (1, 0). As McCulloch stated in (1971), this condition is met if [a.sub.0] =([a.sub.0C], [a.sub.0R]) = (1, 0), [g.sub.0](t) = 1, and [g.sub.j](0) = 0, j = 1, 2, ..., m. So, from (31), we can express the price of the rth bond as (33) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] and, using fuzzy arithmetic, we finally write (34) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. In this way, by identifying in Equation (34) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], we obtain (35) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] It is easy to verify in Equation (34) that the value of the jth explanatory variable for the rth bond is the crisp value [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. Therefore, from (34) and (35) we can write (36) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] Then, after obtaining the estimate for every [a.sub.j], [a.sub.j] = [a.sub.jC], [a.sub.jR]), the fitted value of the explained variable of the rth observation, [Y.sup.r], is (37) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] Bearing in mind that the dependent variable and the parameters are actually quantified via their [[alpha].sup.*]-cut, the expression of the [[alpha].sup.*]-cut of [Y.sup.r] (that of FN (37)) is (38) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] where we must estimate the center and the spread of the [[alpha].sup.*]-cut for [a.sub.j], j = 1, ..., m. After estimating these parameters, the center and the radius [a.sub.jC] and [a.sub.jR] are estimated using (2) as (39) [a.sub.jC] =[a'.sub.jC] and [a.sub.jR] = [a'.sub.jR]/(1 - [[alpha].sup.*]). Adding certain constraints, which are related to the properties of the discount function, to obtain [a'.sub.jC] and [a'.sub.jR], we have to solve the following linear program (40a) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] subject to (40b) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (40c) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (40d) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (40e) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (40f) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (40g) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (40h) [a'.sub.j] R [greater than or equal to] 0 j = 1, 2, ..., m. The constraints (40b), (40c), and (40h) correspond to Tanaka's regression model (see (18b) or (19b)). Similarly, (40d) and (40e) ensure that the discount function is decreasing, and that it is accomplished for an arbitrary periodicity P (in years). Therefore uP is the greatest maturity that we will use in a later analysis. It is reasonable to suppose that uP is close to the expiration of the bond with the greatest maturity. In Equations (40d) and (40e) we use the Ramik and Rimanek's criteria for ordering FNs (see (4)). Finally, constraints (40f) and (40g) ensure that the discount function is within [0, 1]. Estimating the Spot Rates and the Forward Rates Using Fuzzy Numbers The discount function in t, [f.sub.t], is obtained from its corresponding spot rate [i.sub.t] by [f.sub.t] = [(1 + [i.sub.t]).sup.-t] and then (41) [i.sub.t] = [([f.sub.t]).sup.-1/t] - 1 If the discount function in t is an FN, the spot rate will be an FN [i.sub.t]. Its membership function can be obtained applying the extension principle (5) to the relation (41) as follows: Unfortunately, despite using a discount function quantified via an STFN, the spot rate is not an STFN because it is not a linear function of [f.sub.t]. However, applying (10) in (41) we obtain (42) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] To obtain the forward rate for the tth year, [r.sub.t], we should solve the following fuzzy equation: (43) [f.sub.t-1] [(1 + [r.sub.t]).sup.-1] = [f.sub.t]. The [[alpha].sup.*]-cuts of [r.sub.t], [r.sub.t[alpha] are (see Appendix): (44) [r.sub.t[alpha]] = [[r.sup.1.sub.t]([alpha]), [r.sup.2.sub.t]([alpha])] = [[f.sub.(t-1)C] + [f.sub.(t-1)R](1 - [alpha])/[f.sub.tC] + [f.sub.tR](1 - [alpha]) - 1, [f.sub.(t-1)C] - [f.sub.(t-1)R] (1 - [alpha])/[f.sub.tC] - [f.sub.tR](1 - [alpha]) - 1]. Then, although [r.sub.t] is not an STFN, it can be approximated reasonably well by this type of FN. In the Appendix we demonstrate that its approximation by means of an STFN is (45) [r.sub.t] [approximately equal to] ([r.sub.tC], [r.sub.tR]) = ([f.sub.(t-1)C]/[f.sub.tR] - 1, [f.sub.(t-1)C] * [f.sub.tR] - [f.sub.tC] * [f.sub.(t-1)R]/[([f.sub.tC]).sup.2]). Notice that, although we take annual periods, calculating implied rates for any other periodicity is not a problem because the discount function is a continuous function of the maturity. Empirical Application In this subsection, we used our method to estimate the TSIR in the Spanish public debt market on June 29, 2001. Table 1 shows the bonds included in our sample and their characteristics. To fit the TSIR in this date, we formalized the discount function using McCulloch's quadratic splines. (6) So we took m = 5, and the knots that we used to construct the splines were [d.sub.1] = 0 years, [d.sub.2] = 1.58 years, [d.sub.3] = 3.83 years, [d.sub.4] = 8.96 years, and [d.sub.5] = 31.1 years. Then, the functions [g.sub.j](t), j = 1, ..., 5 are We used OLS regression, taking for the price of the kth bond ([P.sup.k.sub.min] + [P.sup.k.sub.max])/2. Its determination coefficient is [R.sup.2] = 99.98% whereas the discount function is [f.sub.t] = 1 - 0.04394[g.sub.1](t) - 0.03672[g.sub.2](t) - 0.04875[g.sub.3](t) - 0.03572[g.sub.4](t) - 0.00730[g.sub.5](t). To fit the TSIR by using fuzzy regression we took for the price of the kth bond the interval [P.sup.k] = <[P.sup.k.sub.C], where [P.sup.k.sub.R]>, where [P.sup.k.sub.C] = ([P.sup.k.sub.min] + [P.sup.k.sub.max])/2 and [P.sup.k.sub.R] = ([P.sup.k.sub.max] - [P.sup.k.sub.min])/2. To build the constraints (40d), (40e), (40f), and (40g) in fuzzy regression we assumed an annual periodicity. The final value of the objective function (40a) is z = 23.77. To interpret this value, we should remember that the size of our sample was 28 assets. When taking the level of congruence [[alpha].sup.*] = 0.5, our fuzzy discount function is [f.sub.t] = (1, 0) + (-0.04280, 0.00422)[g.sub.1](t) + (-0.03874, 0.00043)[g.sub.2](t) + (-0.04675, 0.00397)[g.sub.3](t) + (-0.03841, 0.00069)[g.sub.4](t) + (-0.00255, 0)[g.sub.5](t) and requiring [[alpha].sup.*] = 0.75, the discount function is [f.sub.t] = (1, 0) + (-0.04280, 0.00843)[g.sub.1](t) + (-0.03874, 0.00086)[g.sub.2](t) +(-0.04675, 0.00793)[g.sub.3](t) + (-0.03841, 0.00139)[g.sub.4](t) + (-0.00255, 0)[g.sub.5](t). Table 2 shows the spot and forward rates for the next 15 years with OLS and fuzzy regressions (in the last case, for [[alpha].sup.*] = 0.5, 0.75). We can see that the parameter [[alpha].sup.*] can be interpreted as an indicator of the perceived uncertainty in the market. If [[alpha].sup.*] increases, uncertainty of the observations for the explained variable (price of the bonds) also increases (see Wang and Tsaur, 2000) and the spread of the subsequent estimates of the spot rates and the forward rates will be wider. To compare the difference between the OLS estimates and the estimates of fuzzy regression for discount rates, Table 2 includes coefficient D = 100 x [absolute value of [V.sup.E] - [V.sup.F]]/ [V.sup.E], where [V.sup.E] is the value of a yield rate obtained with econometric methods and [V.sup.F] is the center of the fuzzy estimate for this rate. We can see that the estimates of the spot rates with each method are quite similar. This similarity decreases when estimating forward rates. Calculating the Present Value of an Annuity Buckley (1987) determines the present value of a stream of amounts when these amounts and the discount rate are given by FNs. Buckley supposes that the discount rate to be applied throughout the evaluation horizon was a unique FN i. It implies that the reference TSIR is flat. If we also suppose that amounts are given by means of nonnegative FNs, in which the tth cash flow is the FN [C.sub.t], and that they are an immediate postpayable annuity with an annual periodicity, then the net present value of that annuity is the following FN, V: (46) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] It is often impossible to obtain the membership function of V by means of the extension principle. However, it is fairly straightforward to obtain a closed expression of the [alpha]-cuts of the present value. If we bear in mind that the function of present value is continuously decreasing (increasing) with respect to the discount rate (the amounts), we can calculate the upper and lower extremes of its [alpha]-cuts immediately from (7). If the amounts and the cash flows are given by STFNs, 19 will not be an STFN but it can be approximated by an STFN from (9). It is well known that it is quite unrealistic to suppose a flat TSIR. Moreover, in the previous section we proposed a method for obtaining an empirical fuzzy TSIR and discussed how to obtain its spot and implied rates. So, (46) can be generalized to any shape of the TSIR, using the spot rates, the forward rates, or the discount function (47) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] Of course, from the [alpha]-cuts of the amounts and the discount rates (or alternatively the discount function), we can easily obtain 19 from (7). Moreover, if the amounts are crisp (e.g., if they are the amounts paid by a bond), and to obtain their present value we use a discount function quantified via an STFN like (25), (47) can be written as (48) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] Example: Suppose an investor is going to buy bonds of the Spanish public debt market on June 29, 2001. The maturity of these bonds is 5 years, and they offer a 5 percent annual coupon. The values of 1 monetary unit with maturities from 1 to 5 years obtained from the regression with [[alpha].sup.*] = 0.5 are given in Table 3. Therefore, the investor obtains the following preliminary price for one of these bonds from (48): P = 5 * (0.9585, 0.0030) + 5 * (0.9190, 0.0040) + 5 * (0.8770, 0.0059) + 5 * (0.8315, 0.0093) + 105 * (0.7858, 0.0128) = (100.44, 1.46). Pricing Life-Insurance Contracts In this subsection, we show how to obtain the net single premium for some life-insurance contracts (n-year pure endowments, n-year term life-insurance contracts, and n-year endowments--the combination of the two first contracts) from our fuzzy TSIR. To simplify the analysis, we will suppose that the insured amounts are fixed beforehand and that they are annual and payable at the end of each year of the contract. Taking into account that standard life-insurance mathematics establishes that the net single premium for a policy is the discounted value of the mathematical expectation of the guaranteed amounts, and naming as [C.sub.n] the amount payable at the end of an n-year pure endowment, the premium for an individual aged x for this contract, [[PI].sub.1], is (49) [[PI].sub.1] = [C.sub.n] * [(1 + [i.sub.t]).sup.-n] * [sub.n][p.sub.x] = [C.sub.n] * [f.sub.n] * [sub.n][p.sub.x], where [sub.n] [p.sub.x] stands for the probability that an individual aged x attains at the age x + n. If we suppose that for an n-year term life insurance the amounts are payable at the end of the year of death, for an insured person aged x, the premium, [[PI].sub.2], is (50) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] where [sub. t|][q.sub.x] stands for the probability of death at age x + t and [C.sub.t] the insured amount for this event. The net single premium for an n-year endowment, [[PI].sub.3], is obtained from (49) and (50) as: (51) [[PI].sub.1] = [[PI].sub.1] + [[PI].sub.2] So, if the discount function estimated by an STFN, the net single premium for an n-year pure endowment, (49), is reduced to (52) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] and then, for an n-year term life insurance, (50), we obtain (53) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] Then, from (51), (52), and (53) we obtain the price of an n-year endowment, [[PI].sub.3]: (54) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] Table 4 shows the fuzzy pure single premiums of the three types of policies for people of several ages and with the value of the discount function in Table 3. In all cases the duration of the contracts is 5 years. We suppose that the amounts of the insured events are 1,000 monetary units and for the calculations we have taken the Swiss GRM-82 mortality tables. The maturities of the bonds used to estimate the TSIR, and obviously the temporal horizon covered with this TSIR may not be large enough to discount all the cash flows (e.g., when pricing a whole life annuity). To complete the interest rates for the maturities of the TSIR that are not covered in the regression, de Andres (2000) suggested two solutions: (1) The first involves estimating subjectively a unique nominal interest rate for the longer maturities by Fisher's relationship. In this way, Devolder (1988) suggested obtaining the discount rate for long-term insurance policies by using Fisher's relationship as: nominal interest rate = real interest rate + [lambda] x anticipated inflation, where 0 [less than or equal to] [lambda] < 1. Regarding the real interest rate, Devolder stated that "generally it must be quantified between the 2 and 3 percent" and the anticipated inflation "must be reasonable in the long term." Clearly, these sentences allow a fuzzy quantification even though this is probably not the aim of the author. If we call i the FN that quantifies the discount rate, we can obtain this, e.g., as i = (0.025, 0.01) + [lambda][pi], where [pi] stands for the anticipated inflation. (2) From financial logic, the shape of the TSIR must be asymptotic. So, a second solution involves taking the latest forward (or spot) rate of our estimated TSIR as a reference for the rate to discount the amounts whose maturities are not covered by the TSIR. Fuzzy Financial Pricing of Property-Liability Insurance In this subsection we will show how to apply our fuzzy TSIR when pricing property-liability insurance. For a wide discussion on this topic, consult Myers and Cohn (1987) (MC) and Cummins (1990) under a nonfuzzy environment or Cummins and Derrig (1997) (CD) under a fuzzy environment. To simplify our explanation, we will suppose only a three-period model. The MC model states that the present value of the premiums must compensate the cost of the liabilities and the taxes for the insurer. Supposing a single premium, only two periods (years) in claiming, but that the TSIR can have any shape, the CD formulation can be transformed into (55) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] where P = pure single premium. The parameter [delta] ([delta] > 0) indicates the proportion of the fair premium corresponding to the surplus and [tau] is the tax rate that we suppose the same for the underwriting profit and the investment income. L is the total amount of the claim cost whereas [c.sub.t] = the proportion of the liabilities payable at the tth year. Therefore [c.sub.1] + [c.sub.2] = 1. Notice that we have supposed that the proportion of the claims cost in the tth year deductible from income taxes is equal to [c.sub.t]. If [f.sub.t] is the present value of 1 unitary unit payable at t with the spot rate free of risk of failure for that maturity ([i.sub.t]), then [f.sub.t] = [(1+ [i.sub.t]).sup.-t]. Similarly, [f.sup. (L).sub.t] is the value of the discount traction for one monetary unit of liability payable at the tth year. This can be obtained from the spot rate of the liabilities at time t, [i.sup.(L).sub.t]. Therefore, [f.sup.(L).sub.t] = [(1 + [i.sup.(L).sub.t]).sup.-t]. To simplify our discussion, we will suppose that [i.sup.(L).sub.t] is obtained by applying over it [i.sub.t] the risk loading [k.sub.t], by doing [i.sup.(L).sub.t] = (1 - [k.sub.t])(1 + [i.sub.t]) - 1, where 1 > [k.sub.t] > 0. From this relation, the following connection between [f.sup.(L).sub.t] and [f.sub.t] arises (56) [f.sup.(L).sub.t] = [(1 - [k.sub.t]).sup.-t] [f.sub.t]. Finally, [r.sub.t], t = 1, 2 is the return obtained by investing the premium within the tth year of the contract. If we assume, as is usual, that this return corresponds to the risk-free rate, within a pure expectations framework, [r.sub.t] can be quantified by the forward rate for the tth year of the contract. Then, taking into account that [r.sub.t] can be obtained from the values of spot discount function [f.sub.t-1] and [f.sub.t] as (57) [r.sub.t] = ([f.sub.t-1]/[f.sub.t]) - 1. Then, (55) can be rewritten from (56), (57) and remembering that [c.sub.1] + [c.sub.2] = 1 as (58) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] and so the value of the pure single premium is (59) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] We wish to show how to introduce fuzziness into the future behavior of interest rates when pricing a property-liability contract. The fuzziness of this behavior is introduced by the discount function, which is fuzzy. Moreover, we will allow the total amount of the liabilities, L, to be estimated by an STFN, i.e., L = ([L.sub.C], [L.sub.R]). This is easy to interpret from an intuitive point of view: the actuary estimates that the total cost of the claims will be "around [L.sub.C]." We will suppose that [c.sub.t], [k.sub.t], [tau], and [delta] are crisp parameters. If the discount factor and the total cost of the claims are done by FNs, we will actually obtain a fuzzy premium, which must be obtained by solving the fuzzy version of Equation (55): (60) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] Buckley and Qu (1990c) suggested obtaining the solution of a fuzzy equation from the solution of its crisp version. So, we will obtain P from (59) as (61) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] To determine the [alpha]-cuts of P, [P.sub.[alpha]], we must bear in mind that in Equation (59) the premium is a function of the value of the liabilities and the free risk discount function (or, alternatively a function of the expected evolution of interest rates), i.e., the premium in Equation (59) can be denoted as P = f(L, [f.sub.1], [f.sub.2]). Clearly, f(*) is an increasing function of L. On the other hand, it is not easy to know whether the premium increases (or decreases) when the discount function increases (the interest rates decreases) from the partial derivatives of P(L, [f.sub.1] [f.sub.2]). However, we do know by financial intuition that the price of the insurance is basically related to the present value of the liabilities, and that it is clearly increasing (decreasing) with respect to the values of the discount function (spot rates). Therefore, if we start from the discount function that define our TSIR, [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], we can obtain [P.sub.[alpha]] by applying (7) as (62) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] However, from (9), we can approximate P by a STBN, i.e., P [approximately equal to] ([P.sub.C], [P.sub.R]) where (63) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] In the following example we consider the following variables of a property-liability insurance: [k.sub.1] = [k.sub.2] = 1%, [delta] = 5%, and [tau] = 34% and the values of the discount function in Table 3. The duration of the contract is 2 years and the total amount of the liability for one contract is L = (1000, 50). Table 5 shows the fuzzy premiums for several pairs ([c.sub.1], [c.sub.2]) when using the approximating formula (63) of L. Figure 5 represents the shapes of the true fuzzy premium (obtained from (62) and represented with a solid line) and our approximation (63), for the distribution of the claims [c.sub.1] = 0.5 and [c.sub.2] = 0.5. Clearly, the triangular approximation fits the real value of the premium well and is easier to interpret: the value of the fair premium must be 992.37 but there may be acceptable deviations no longer than 52.95. [FIGURE 5 OMITTED] In this section, we reflect on other applications of fuzzy regression to insurance problems. We will focus our discussion on the specific problem of estimating the incurred but not reported claims reserves, but we will also suggest other (in our opinion) promising applications. Calculating the Incurred But Not Reported Claims Reserves (IBNR reserves) is a classic topic in nonlife-insurance mathematics. Unfortunately, they may not be calculated from a wide statistical database. Straub (1997) states that taking into account experiences that are too far from the present can lead to unrealistic estimates. For example, if the claims are related to bodily injuries, the future losses for the company will depend on the behavior of the wage index grown that will be taken to determine the amount of indemnification, changes in court practices, and public awareness of liability matters. To calculate the IBNR reserve we begin from the historical data ordered in a triangle like that in Table 6. In Table 6, [Z.sub.i,j] is the accumulated incurred losses of accident year j at the end of development year i where j = 1 denotes the most recent accident year and j = n the oldest accident year. Obviously, we do not know, for the jth year of occurrence, the accumulated losses in the development years i = j + 1, ..., n and therefore, these losses must be predicted. It is well known that the classical method of predicting these losses is the Chain Ladder (CL). However, as it is pointed out in the survey England and Verrall (2002) claims reserving methods, during the last years greater interest of actuarial literature has been focused not only in calculating the best estimate of claim reserves but also on determining its downside potential from a stochastic perspective. Obviously, the final objective is to provide the actuary a well-founded mathematical tool to determine solvency margins for the reserves. One way to focus this problem consists of departing from the pure CL method and making statistical refinements over it. In this way, Benjamin and Eagles (1986) propose a slight generalization of the CL method, known as London Chain Ladder (LCL), which is based on the use of OLS regression over the accumulated claims. On the other hand, Mack (1993) and England and Verrall (1999) do not suppose a concrete structure of the underlying data. Concretely, Mack (1993) provides analytical expressions for the prediction errors in claims and reserve estimates whereas England and Verrall (1999) propose combining standard CL estimates and bootstrapping techniques to determine the variance of the error in the predicted reserves. Another extended way to focus this problem consists of modeling the incremental claims as random variables with a predefined distribution function. So, while Wright (1990) and Renshaw and Verrall (1998) use the Poisson random variables, Kremer (1982), Renshaw (1989), and Verrall (1989) use a log-normal approach. Then, the subsequent question to answer is how to define the associated parameters to these random variables with respect to the year of underwriting and delay period. There are several approaches in the literature to do it. The more extended way consists of using one separate parameter for each development period. However, to avoid over-parameterization, some articles propose using parametric expressions (e.g., the Hoerl curve) or nonparametric smoothing methods like the one given by England and Verrall (2001). It must be remarked that, as it is pointed out in Mack (1993), in spite of the fact that these approaches are based on CL "philosophy," some of them present fundamental differences with respect to the pure CL method. Our fuzzy sets approach to determine the value and variability of IBNR will be based on combining the generalization of CL by Benjamin and Eagles (1986), the LCL, and fuzzy regression. So, let us briefly expose Benjamin and Eagle's method. This is built from the hypothesis that the evolution of the claims of the accidents occurred in the year j from the ith year to the i+1th year of development can be approximated by the linear relation: (64) [Z.sub.i+1,j] = [b.sub.i] + [c.sub.i] [Z.sub.i,j] + [[epsilon].sub.I], where [b.sub.i] is the intercept, [c.sub.i] is the slope, and [[epsilon].sub.i] is the perturbation term. The coefficients [b.sub.i] and [c.sub.i] must be estimated by OLS. Of course, the estimates of [b.sub.i] and [c.sub.i], [b.sub.i] and [c.sub.i], respectively, must be obtained from the observations in the IBNR-triangle, i.e., from the pairs [{([Z.sub.i+1,j]; Z.sub.[i,j])}.sub.j[greater than or equal to]i]. Notice that the CL method is a special case of the LCL method when we consider that [b.sub.i] = 0. It is easy to check that the amount of the whole claims for the accidents occurred in the jth year at the end of the n years of development, [Z.sub.n,j], is (65) [Z.sub.n,j] = [b.sub.n-1] + [c.sub.n-1] { ... [b.sub.j+2] + [c.sub.j+2][[b.sub.j+1] + [c.sub.j+1]([b.sub.j] + [c.sub.j][Z.sub.j,j])]}. If we suppose that the expansion of the claims produced by the accidents in a given year is done within n years, then [Z.sub.n,j] is the estimate of the amount of all the claims corresponding to the year of occurrence j. Therefore, the IBNR reserves corresponding to the accidents of the year j, [R.sub.j], is obtained by calculating the difference between [Z.sub.n,j] and the amount of the claims reported, [Z.sub.j,j], i.e., [R.sub.j] = [Z.sub.n,j] - [Z.sub.j,j] So, the total IBNR reserve (R) (corresponding to all the accidents from year 1 to year n) is (7) (66) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] We think that this approach has several drawbacks. First, OLS is useful when we start from a wide sample, but it is not advisable for calculating IBNR reserves. Similarly, using all the information available in the IBNR-triangle requires estimating [Z.sub.n,j] not by exact values but by means of probabilistic confidence intervals. Unfortunately, this requires a great computational effort. In any case, we think that these considerations can be extended to the statistical methods discussed above. We will show that adapting LCL method to fuzzy regression can be a suitable alternative. It will allow us to use all the information provided by the IBNR-triangle more efficiently. So, let us assume that the evolution of the accumulated claims of the accidents happened in the year j from the ith to the i+1th developing years can be adjusted using a fuzzy linear relation [Z.sub.i+1,j] = [b.sub.i] + [c.sub.i] [Z.sub.i,j]. If we state that [b.sub.i] and [c.sub.i] are the STFN ([b.sub.iC], [b.sub.iR]), ([C.sub.iC], [C.sub.iR]), respectively, we can write (67) [Z.sub.i+1,j] = ([Z.sub.(i+1,j)C], [Z.sub.(i+1,j)R]) = ([b.sub.iC], [b.sub.iR]) + ([c.sub.iC], [c.sub.iR]) [Z.sub.i,j] = ([b.sub.iC] + [c.sub.iC] [Z.sub.i,j], [b.sub.iR] + [c.sub.iR] The estimates of [b.sub.i] and [c.sub.i], are symbolized as [b.sub.i] = ([b.sub.iC], [b.sub.iR]) and [c.sub.i] = ([c.sub.iC], [c.sub.iR]), respectively. Then, the prediction of the final cost of the accidents produced in year j, [Z.sub.n,j], is obtained from (65) as (68) [Z.sub.n,j] = [b.sub.n-1] + [c.sub.n-1] {... [b.sub.j+2] + [c.sub.j+2] [[b.sub.j+1] + [c.sub.j+1] ([b.sub.j] + [c.sub.j] [Z.sub.j,j])]}. Clearly, [Z.sub.n,j] is not an STFN, but it can be approximated reasonably well by a TSFN, i.e., [Z.sub.n,j] [approximately equal to] ([Z.sub.(n,j)C], [Z.sub.(n,j)R]). To do this, we must, in the fuzzy recursive calculation (68), use the approximating formula (11) for multiplication between two STFN. Finally, we obtain the IBNR reserve for the jth year of occurrence as the FN [R.sub.j] = ([R.sub.jC], [R.sub.jR]): [R.sub.j] = [Z.sub.n,j] - [Z.sub.j,j] = ([Z.sub.(n,j)C] - [Z.sub.(n,j)R]). (69) Therefore, the whole IBNR reserve is the STFN R = ([R.sub.C], [R.sub.R]), which is obtained from (66) as To illustrate our proposal, we develop the following example, which is similar to the one in Straub (1997, p. 106). Table 7 shows the IBNR triangle we will use. Table 8 shows the main results when nonfuzzy methods are used. The development coefficients when we use fuzzy regression with an inclusion level [alpha] = 0.5 and the fuzzy IBNR reserves are given in Table 9. Let us illustrate how we have used fuzzy regression. If we use the fuzzy LCL with intercept, to obtain the development coefficients from the year i = 1 to the year i = 2, i.e., the STFBs [b.sub.1] = ([b.sub.1C], [b.sub.1R]) and [c.sub.1] = ([c.sub.1C], [c.sub.1R]), and taking a level of inclusion [alpha] [greater than or equal to] [[alpha].sup.*] = 0.5, we must take the data in the second row of Table 7 and solve the following linear program: Minimize 5 [b.sub.1R] + 690[c.sub.1R] subject to: and solving [b.sub.1C] = -12; [b.sub.1R] = 0; [c.sub.1C] = 1.771; [c.sub.1R] = 0.057. Finally, we would like to mention some other promising applications of fuzzy regression in actuarial science. One useful application may be in trending claims costs. (8) When projecting future claims costs, standard actuarial practice uses time trend models. More academic approaches propose regressing the cost of the claims with respect to the value of an economic index (e.g., the consumer price index). So, to obtain the final prediction of the claims amount at a future moment we need to predict the value of the economic index at that moment. To do this, time trending may again be used. An alternative is to employ ARIMA time-series models. Several instruments are derived from the fuzzy regression model in this article that may provide suitable solutions. Watada (1992) proposed a fuzzy model for time series analysis based on polynomial time trending with STFNs. Tseng et al. (2001) combined conventional ARIMA models with Tanaka and Ishibuchi's regression method. Obviously, in both cases these fuzzy methods will lead to forecasting with STFNs. Another way of determining the discount rate of the liabilities is to use CAPM (see, e.g., Taylor, 1994, for a review of CAPM applied to insurance). With CAPM, the beta of the liabilities ([[beta].sub.L]) is obtained from the beta of the asset portfolio ([[beta].sub.A]) and the insurer's equity ([[beta].sub.E]), i.e., [[beta].sub.L] = f([[beta].sub.A], [[beta].sub.E]) and [[beta].sub.L] < 0. Cummins and Derrig (1997, p. 25) suggested fuzzifying the statistical estimates of the betas since, as they point out, there are several sources that disturb their quantification. Fuzzy regression provides a suitable way of doing this fuzzifycation and enables the decision maker to grade the level of uncertainty in the final estimates by choosing the appropriate level of congruency in fuzzy regression ([alpha].sup.*). Actuarial pricing requires an estimate of the behavior of future interest rates that are unknown when the valuation is made. In the last few years, the analysis of temporal structure of interest rates has become an important topic in actuarial science. Likewise, several articles in the actuarial literature consider that estimating the discount rates using fuzzy numbers is a good alternative. With this in mind, we have attempted to make up for the uncertainty about estimating interest rates with fuzzy numbers, i.e., to develop the hypothesis "an 'expert' subjectively estimates the yield rates." If we accept that the "experts" are the traders of the fixed income markets, these subjective estimates are implied in the price of the debt instruments and, therefore, in the yield curve of these instruments. Our method quantifies the experts' subjective estimates of the spot rates and spot interest rates for the future (the forward rates) with fuzzy numbers. This method, based on a fuzzy regression technique, uses all the prices of the bonds negotiated throughout one session in such a way that we do not lose any information. On the other hand, when we use an econometric method we must reduce these prices to representative ones, thus losing some information. We would like to remark that the results of our method are similar to those of Carriere (1999), but that he suggests fitting the yield curve using probabilistic confidence interval values whereas we fit the temporal structure of interest rates with fuzzy numbers. So, if we represent these fuzzy numbers by their [alpha]-cuts, we obtain a yield curve described by a direct analog of confidence intervals. We have adjusted the yield curve with symmetrical TFNs because the arithmetic is easy and interpreting the estimates is intuitive because they are well adapted to the way people make predictions. An interest rate given by (0.03, 0.005) indicates that we expect an interest rate of about 3 percent, and that we do not expect deviations from it to be greater than 50 basis points. We then discussed how to use a fuzzy TSIR to calculate the present value of a stream of nonrandom amounts and the net single premium for some life and property-liability insurance contracts. We showed that to obtain the fuzzy prices is easy because our method fits the TSIR by a discount function that is described with symmetrical TFNs. Finally, we discussed other actuarial problems where applying fuzzy regression is promising. We concentrated our discussion on calculating the IBNR reserves but also indicated other future areas of research such as trending cost claims and estimating the beta of liabilities. Notice that when our initial data is given by fuzzy numbers, the estimate of the objective magnitude (the premiums, the IBNR reserve, etc.) is not an exact value but a fuzzy number. For example, in "Fuzzy Financial Pricing of Property-Liability Insurance" the value of the premium for property-liability insurance was (992.37, 52.36). This can be understood as "the premium must be approximately 992.37." To obtain the definitive value of the magnitude, it must be transformed into a crisp value. To do this we need to apply a defuzzifying method--see Zhao and Govind (1990) for a wide discussion of fuzzy mathematics, and Cummins and Derrig (1997) or Terceno et al. (1996) for applications in fuzzy-actuarial analysis. Another way of doing this, which is very consistent with practice in the real world, is to consider the fuzzy quantification as a first approximation that allows a margin for the "actuarial subjective judgment" or upper and lower bounds for acceptable market prices. Finally, the actuary must use his/her intuition and experience to establish the crisp value of the fuzzy estimate. For example, for premium (992.37, 52.36), the actuary might decide that a final price 1017.37 is acceptable but 928 is not. Obtaining the Forward Rates From a Fuzzy TSIR The forward rate for the tth year, [r.sub.t], can be obtained from the fuzzy equation (A1) [f.sub.t-1] [(1 + [r.sub.t]).sup.-1] = [f.sub.t]. To solve this equation, we identify [(1 + [r.sub.t]).sup.-1] = [G.sub.t], where [G.sub.t] is the value in t-1 of one monetary unit payable at t according to the TSIR. Bearing in mind that the value of the discount function for any t is an STFN [f.sub.t] = ([f.sub.tC], [f.sub.tR]), the above equation can therefore be written using the [alpha]-cuts as: (A2) [[f.sub.(t-1)C] - [f.sub.(t-1)R] (1 - [alpha]), [f.sub.(t-1)C] + [f.sub.(t-1)R](1 - [alpha])] * [[G.sup.1.sub.t]([alpha]), [G.sup.2.sub.t] ([alpha])] = [[f.sub.tC] - [f.sub.tR](1 - [alpha]), [f.sub.tC] + [f.sub.tR] (1 - [alpha])]. It is easy to see that in Equation (A2): (A3) [G.sup.1.sub.t]([alpha]) = [(1 + [r.sup.2.sub.t]([alpha]).sup.-1] and [G.sup.2.sub.t]([alpha]) = [(1 + [r.sup.1.sub.t]([alpha]).sup.-1]. The solution of the [alpha]-cut Equation (A3) is: (9) (A4) [G.sub.t[alpha]] = [[G.sup.1.sub.t]([alpha]), [G.sup.2.sub.t]([alpha])] = [[f.sub.tC] - [f.sub.tR](1 - [alpha]) / [f.sub.(t-1)C] - [f.sub.(t-1)R] (1 - [alpha]), [f.sub.tC] + [f.sub.tR](1 - [alpha]) / [f.sub.(t-1)C] + [f.sub.(t-1)R(1-[alpha])]. Unfortunately, the solution (A4) might not exist (i.e., the expression (A4) does not correspond to a confidence interval). For example, consider that (A4) for a predefined a is given by [0.9, 0.95]. [[G.sup.1.sub.t]([alpha]), [G.sup.2.sub.t]([alpha])] = [0.875, 0.9]. Then [[G.sup.1.sub.t]([alpha]), [G.sup.2.sub.t]([alpha])] = [0.972, 0.947]. Clearly, it is not a confidence interval since 0.972 < 0.947. From Buckley and Qu (1990a, p. 46, Theorem 3), the necessary and sufficient condition for the existence of [G.sub.t] is (A5) [f.sub.tR] / [f.sub.(t-1)R] [greater than or equal to] [f.sub.tC] / [f.sub.(t-1)C] ?? [f.sub.(t-1)C] * [f.sub.tR] [greater than or equal to] [f.sub.tC] * [f.sub.(t-1)R]. If we define P in Equations (40d), (40e), (400, and (40g) as being lower than or equal to the periodicity used to obtain the spot rates and forward rates, and [g.sub.j(t)] as nondecreasing, it is easy to check that [f.sub.tR] [greater than or equal to] [f.sub.(t-1)R]. Moreover, if we combine the constraints (40d) and (40e), we see that [f.sub.(t-1)C [greater than or equal to] [f.sub.tC]. Therefore, [f.sub.(t-1)C] * [f.sub.tR] [greater than or equal to] [f.sub.tC] * [f.sub.(t-1)R], and we can obtain the [alpha]-cuts of [G.sub.t] with (A4). Then, from (A3) and (A4), we obtain the [alpha]-cuts of [r.sub.t], [r.sub.t[alpha]] by making: (A6) [r.sub.t[alpha]] = [[r.sup.1.sub.t]([alpha]), [r.sup.2.sub.t]([alpha])] = [[f.sub.(t-1)C] + [f.sub.(t-1)R](1 - [alpha]) / [f.sub.tC] + [f.sub.tR](1 - [alpha]) - 1, [f.sub.(t-1)C - [f.sub.(t-1)R (1 - [alpha]) / [f.sub.tC] - [f.sub.tR](1 - [alpha]) - 1]. It is easy to check that the extremes of [r.sub.t[alpha]], are not given as a linear expression of [alpha]. However, if we approximate the functions [r.sup.1.sub.t]([alpha]), [r.sup.2.sub.t]([alpha]) using Taylor's expansion to the first grade from [alpha] = 1, then: (A7) [r.sup.1.sub.t]([alpha]) [approximately equal to] [f.sub.(t-1)C] / [f.sub.tR] - 1 - [f.sub.(t-1)C] * [f.sub.tR] - [f.sub.tC] * [f.sub.(t-1)R] / [([f.sub.tC]).sup.2](1 - [alpha]) and [r.sup.2.sub.t]([alpha]) [approximately equal to] [f.sub.(t-1)C] / [f.sub.tR] - 1 + [f.sub.(t-1)C] * [f.sub.tR] - [f.sub.tC] * [f.sub.(t-1)R] / [([f.sub.tC]).sup.2](1 - [alpha]). In conclusion, from (A7) it is easy to check that [r.sub.t] can be approximated by an STFN [r.sub.t] [approximately equal to] ([r.sub.tC], [r.sub.tR]) where: (A8) [r.sub.tC] = [f.sub.(t-1)C] / [f.sub.tR] - 1 and [r.sub.tR] = [f.sub.(t-1)C] * [f.sub.tR] - [f.sub.tC] * [f.sub.(t-1)R] / [([f.sub.tC]).sup.2](1 - [alpha]). (1) The TFNs are a special case of a wider family of FNs called L-R FNs. For a detailed explanation see Dubois and Prade (1980). (2) If [l.sub.A] = [r.sub.A] = 0, A is the crisp number [a.sub.C]. (3) Many methods for ordering fuzzy numbers are proposed in the literature. Obviously, the choice of method depends on the problem. For our purpose we have chosen Ramik and Rimanek's criteria. (4) Clearly, the subtraction of two STFNs is an STFN because the subtraction is the sum of the first FN with the second one multiplied by -1. (5) Fedrizzi, Fedrizzi, and Ostasiewicz (1993) initially suggested fuzzy regression methods for economics. Some economic applications can be found in Ramenazi and Duckstein (1992) or Profillidis, Papadopoulos, and Botzoris (1999). (6) For a detailed explanation of how we constructed our [g.sub.j](t) and chose our knots, see McCulloch (1971). (7) Notice that with this approach, we do not consider investment income (i.e., the investment income is assumed to be 0%). However, this is the traditional approach to IBNR reserves. To simplify our explanation, we will continue with this hypothesis. (8) For an extensive exposition see Cummins and Derrig (1993). (9) In this case we apply the so-called solution "classical solution" in fuzzy sets literature. This solution does not always exist but as we will specify the functions [g.sub.j] as nondecreasing, we can ensure that it does, as we will show. Ang, A., and M. Sherris, 1997, Term Structure Models: A Perspective from the Long Rate, North American Actuarial Journal, 3(3): 122-138. Bababbell, D. F., and C. B. Merrill, 1996, Valuation of Interest-Sensitive Financial Instruments (Schaumburg: Society of Actuaries). Benjamin, S., and L. M. Eagles, 1986, Reserves in Lloyd's and the London Market, Journal of the Institute of Actuaries, 113(2): 197-257. Buckley, J. J., 1987, The Fuzzy Mathematics of Finance, Fuzzy Sets and Systems, 21: 57-73. Buckley, J. J., and Y. Qu, 1990a, Solving Linear and Quadratic Fuzzy Equations, Fuzzy Sets and Systems, 38: 43-59. Buckley, J. J., and Y. Qu, 1990b, On Using [alpha]-Cuts to Evaluate Fuzzy Equations, Fuzzy Sets and Systems, 38: 309-312. Buckley, J. J., and Y. Qu, 1990c, Solving Fuzzy Equations: A New Solution Concept, Fuzzy Sets and Systems, 39: 291-301. Carriere, J. F., 1999, Long-Term Yield Rates for Actuarial Valuations, North American Actuarial Journal, 3(3): 13-24. Chambers, D. R., W. T. Carleton, and D. W. Waldman, 1984, A New Approach to Estimation of the Term Structure of Interest Rates, Journal of Financial and Quantitative Analysis, 19: 233-252. Cummins, J. D., 1990, Multi-Period Discounted Cash Flow Rate-Making Models in Property-Liability Insurance, Journal of Risk and Insurance, 57(1): 79-109. Cummins, J. D., and R. A. Derrig, 1993, Fuzzy Trends in Property-Liability Insurance Claim Costs, Journal of Risk and Insurance, 60(3): 429-465. Cummins, J. D., and R. A Derrig, 1997, Fuzzy Financial Pricing of Property-Liability Insurance, North American Actuarial Journal, 1(4): 21-44. de Andres, J., 2000, Estimating the Temporal Structure of Interest Rates Using Fuzzy Numbers. Application to Financial and Actuarial Valuation and to the Analysis of the Life Insurer's Solvency, Rovira i Virgili University, Unpublished doctoral thesis (in Spanish). Delbaen, F., and S. Lorimier, 1992, Estimation of the Yield Curve and the Forward Rate Curve Starting From a Finite Number of Observations, Insurance: Mathematics and Economics, 11: 259-269. Derrig, R. A., and K. M. Ostaszewski, 1995, Fuzzy Techniques of Pattern Recognition in Risk and Claim Classification, Journal of Risk and Insurance, 62(3): 447-482. Derrig, R. A., and K. M. Ostaszewski, 1997, Managing the Tax Liability of a Property Liability Insurance Company, Journal of Risk and Insurance, 64: 695-711. Devolder, P., 1988, Le taux d'actualisation en assurance, The Geneva Papers on Risk and Insurance, 13: 265-272. Dubois, D., and H. Prade, 1980, Fuzzy Sets and Systems: Theory and Applications (New York: Academic Press). Dubois, D., and H. Prade, 1993, Fuzzy Numbers: An Overview, in: D. Dubois, H. Prade, and R. R. Yager, eds., Fuzzy Sets for Intelligent Systems (San Mateo, Calif.: Morgan Kaufmann Publishers), pp. England, P. D., and R. J. Verrall, 1999, Analytic and Bootstrap Estimates of Prediction Errors in Claim Reserving, Insurance: Mathematics and Economics, 25: 281-293. England, P. D., and R. J. Verrall, 2001, A Flexible Framework for Stochastic Claims Reserving, Proceedings of the Casualty Actuarial Society, pp. 1-38. England, P. D., and R. J. Verrall, 2002, Stochastic Claims Reserving in General Insurance, Paper presented to the Institute of Actuaries, Available at http://www. actuaries.org.uk/sessional/ Fedrizzi, M., M. Fedrizzi, and W. Ostasiewicz, 1993, Towards Fuzzy Modelling in Economics, Fuzzy Sets and Systems, 54: 259-268. Gerber, H. U., 1995, Life Insurance Mathematics (Heildelberg: Springer). Kaufmann, A., 1986, Fuzzy Subsets Applications in O.R. and Management, in: A. Jones, A. Kaufmann, and H.-J. Zimmermann, eds., Fuzzy Set Theory and Applications (Dordretch: Reidel), pp. 257-300. Kremer, E., 1982, IBNR Claims and the Two-Way of ANOVA, Scandinavian Actuarial Journal, I: 47-55. Lemaire, J., 1990, Fuzzy Insurance, ASTIN Bulletin, 20: 33-55. Li Calzi, M., 1990, Towards a General Setting for the Fuzzy Mathematics of Finance, Fuzzy Sets and Systems, 35: 265-280. Mack, T., 1993, Distribution-Free Calculation of the Standard Error of Chain-Ladder Reserve Estimates, ASTIN Bulletin, 23: 213-223. McCulloch, J. H., 1971, Measuring the Term Structure of Interest Rates, The Journal of Business, 34: 19-31. McCulloch, J. H., 1975, The Tax-Adjusted Yield Curve, Journal of Finance, 30: 811-829. Myers, S.C., and R.A. Cohn, 1987, A Discounted Cash-Flow Approach to Property-Liability Insurance Rate Regulation, in: J. D. Cummins and S. E. Harrington, eds., Fair Rate of Return in Property-Liability Insurance (Norwell, Mass.: Kluwer Academic Publishers). Ostaszewski, K., 1993, An Investigation into Possible Applications of Fuzzy Sets Methods in Actuarial Science (Schaumburg: Society of Actuaries). Profillidis, V. A., B. K. Papadopoulos, and G. N. Botzoris, 1999, Similarities in Fuzzy Regression and Application on Transportation, Fuzzy Economic Review, 4(1): 83-98. Ramenazi, R., and L. Duckstein, 1992, Fuzzy Regression Analysis of the Effect of University Research on Regional Technologies, in: J. Kacprzyk and M. Fedrizzi, eds., Fuzzy Regression Analysis (Heildelberg: Physica-Verlag), pp. 237-263. Ramik, J., and J. Rimanek, 1985, Inequality Relation Between Fuzzy Numbers and its Use in Fuzzy Optimization, Fuzzy Sets and Systems, 16: 123-138. Renshaw, A. E., 1989, Chain-Ladder Modelling and Interactive Modelling (Claims Reserving and GLIM), Journal of the Institute of Actuaries, 116: 559-587. Renshaw, A. E., and R. J. Verrall, 1998, A Stochastic Model Underlying the Chain Ladder Technique, British Actuarial Journal, 4: 903-923. Straub, E., 1997, Non-Life Insurance Mathematics (Berlin: Springer). Tanaka, H., 1987, Fuzzy Data Analysis by Possibilistic Linear Models, Fuzzy Sets and Systems, 24: 363-375. Tanaka, H., and H. Ishibuchi, 1992, A Possibilistic Regression Analysis Based on Linear Programming, in: J. Kacprzyk and M. Fedrizzi, eds., Fuzzy Regression Analysis (Heildelberg: Physica-Verlag), pp. 47-60. Taylor, G. C., 1994, Fair Premium Rating Methods and the Relations Between Them, Journal of Risk and Insurance, 61(4): 592-615. Terceno, A., G. Barbera, J. de Andres, and C. Belvis, 1996, Fuzzy Methods Incorporated to the Study of Personal Insurances, Fuzzy Economic Review, 1(2): 105-119. Tseng, F.-M., G.-H. Tzeng, H.-C. Yu, and B. J.-C. Yuan, 2001, Fuzzy ARIMA Model for Forecasting the Foreign Exchange Market, Fuzzy Sets and Systems, 118: 9-19. Vasicek, O. A., and H. G. Fong, 1982, Term Structure Modelling Using Exponential Splines, Journal of Finance, 37: 339-349. Verrall, R. J., 1989, A Space State Representation of the Chain Ladder Model, Journal of the Institute of Actuaries, 116: 589-610. Wang, H.-F., and R.-C. Tsaur, 2000, Insight of a Fuzzy Regression Model, Fuzzy Sets and Systems, 112: 355-369. Watada, J., 1992, Fuzzy Time-Series Analysis and Forecasting of Sales Volume, in: J. Kacprzyk and M. Fedrizzi, eds., Fuzzy Regression Analysis (Heildelberg: Physica-Verlag), pp. 211-227. Wright, T. S., 1990, A Stochastic Method for Claims Reserving in General Insurance, Journal of the Institute of Actuaries, 117: 677-731. Yao, Y., 1999, Term Structure Models: A Perspective From the Long Rate, North American Actuarial Journal, 3(3): 122-138. Yen, K. K., S. Ghoshray, and G. Roig, 1999, A Linear Regression Model Using Triangular Fuzzy Number Coefficients, Fuzzy Sets and Systems, 106: 167-177. Young, V. R., 1996, Insurance Rate Changing: A Fuzzy Logic Approach, Journal of Risk and Insurance, 63(3): 461-484. Zadeh, L. A., 1965, Fuzzy Sets, Information and Control, 8: 338-353. Zhao, R., and R. Govind, 1991, Defuzzification of Fuzzy Intervals, Fuzzy Sets and Systems, 43: 45-55. Jorge de Andres Sanchez and Antonio Terceno Gomez are from the Department of Business Administration, Faculty of Economics and Business Studies, Rovira i Virgili University, Spain. The authors wish to thank two anonymous reviewers for their valuable comments. TABLE 1 Prices of the Bonds Negotiated in the Spanish Debt Market on June 29, 2001 Coupon Maturity Maturity k Asset (annual) (days) (years) 1 T-Bill 0.00% 18 0.05 2 BOND 5.35% 163 0.45 3 T-Bill 0.00% 382 1.05 4 BOND 4.25% 391 1.07 5 T-Bill 0.00% 521 1.43 6 BOND 5.25% 576 1.58 7 STRIP 0.00% 576 1.58 8 BOND 3.00% 576 1.58 9 STRIP 0.00% 756 2.07 10 BOND 4.60% 757 2.07 11 BOND 4.50% 1,122 3.07 12 BOND 4.65% 1,216 3.33 13 BOND 3.25% 1,307 3.58 14 BOND 4.95% 1,490 4.08 15 BOND 10.15% 1,673 4.58 16 BOND 4.80% 1,945 5.33 17 BOND 7.35% 2,098 5.75 18 BOND 6.00% 2,402 6.58 19 STRIP 0.00% 2,775 7.60 20 BOND 5.15% 2,948 8.08 21 BOND 4.00% 3,134 8.59 22 BOND 5.40% 3,680 10.08 23 BOND 5.35% 3,771 10.33 24 BOND 6.05% 4,229 11.59 25 STRIP 0.00% 4,230 11.59 26 BOND 4.75% 4,774 13.08 27 BOND 6.00% 10,073 27.60 28 BOND 5.75% 11,351 31.10 [P.sup.k. [P.sup.k. k Asset sub.min] sub.max] 1 T-Bill 99.779 99.779 2 BOND 103.258 103.313 3 T-Bill 95.758 95.758 4 BOND 103.907 103.947 5 T-Bill 94.220 94.220 6 BOND 103.555 103.669 7 STRIP 93.579 93.749 8 BOND 99.337 99.376 9 STRIP 91.540 91.540 10 BOND 104.670 104.917 11 BOND 104.017 104.166 12 BOND 98.466 98.702 13 BOND 97.026 97.200 14 BOND 105.407 105.918 15 BOND 126.340 126.340 16 BOND 97.785 98.385 17 BOND 113.539 113.539 18 BOND 107.400 108.206 19 STRIP 68.412 68.412 20 BOND 104.101 104.307 21 BOND 92.679 93.473 22 BOND 97.716 98.923 23 BOND 96.966 97.749 24 BOND 108.098 108.168 25 STRIP 53.357 53.357 26 BOND 96.506 97.567 27 BOND 103.722 105.194 28 BOND 93.954 94.777 TABLE 2 Comparison of the Estimates of OLS With the Fuzzy Estimates of the TSIR in the Spanish Public Debt Market on June 29, 2001 Econometric Fuzzy Regression Methods [[alpha].sup.*] = 0.5 T [i.sub.t] [r.sub.t] [i.sub.t] [r.sub.t] 1 0.0435 0.0435 (0.0433, 0.0033) (0.0433, 0.0033) 2 0.0423 0.0412 (0.0431, 0.0023) (0.0430, 0.0012) 3 0.0440 0.0473 (0.0447, 0.0023) (0.0479, 0.0025) 4 0.0470 0.0563 (0.0472, 0.0029) (0.0547, 0.0047) 5 0.0496 0.0599 (0.0494, 0.0034) (0.0581, 0.0055) 6 0.0513 0.0600 (0.0510, 0.0037) (0.0594, 0.0052) 7 0.0525 0.0599 (0.0524, 0.0039) (0.0606, 0.0048) 8 0.0534 0.0595 (0.0536, 0.0039) (0.0619, 0.0043) 9 0.0540 0.0589 (0.0547, 0.0039) (0.0632, 0.0037) 10 0.0545 0.0592 (0.0556, 0.0039) (0.0645, 0.0035) 11 0.0551 0.0606 (0.0566, 0.0039) (0.0658, 0.0038) 12 0.0557 0.0619 (0.0574, 0.0039) (0.0670, 0.0042) 13 0.0562 0.0633 (0.0582, 0.0039) (0.0682, 0.0045) 14 0.0568 0.0647 (0.0590, 0.0040) (0.0693, 0.0049) 15 0.0574 0.0660 (0.0598, 0.0041) (0.0703, 0.0053) Fuzzy Regression D [[alpha].sup.*] = 0.75 T [i.sub.t] [r.sub.t] [i.sub.t] [r.sub.t] 1 (0.0433, 0.0066) (0.0433, 0.0066) 0.44% 0.44% 2 (0.0431, 0.0045) (0.0430, 0.0025) 1.92% 4.41% 3 (0.0447, 0.0047) (0.0479, 0.0049) 1.70% 1.31% 4 (0.0472, 0.0058) (0.0547, 0.0094) 0.38% 2.75% 5 (0.0494, 0.0068) (0.0581, 0.0109) 0.41% 2.92% 6 (0.0510, 0.0074) (0.0594, 0.0103) 0.52% 1.01% 7 (0.0524, 0.0077) (0.0606, 0.0096) 0.23% 1.27% 8 (0.0536, 0.0078) (0.0619, 0.0086) 0.35% 4.01% 9 (0.0547, 0.0078) (0.0632, 0.0074) 1.19% 7.30% 10 (0.0556, 0.0077) (0.0645, 0.0070) 2.02% 8.88% 11 (0.0566, 0.0077) (0.0658, 0.0076) 2.67% 8.61% 12 (0.0574, 0.0078) (0.0670, 0.0083) 3.19% 8.25% 13 (0.0582, 0.0079) (0.0682, 0.0090) 3.58% 7.79% 14 (0.0590, 0.0080) (0.0693, 0.0098) 3.87% 7.22% 15 (0.0598, 0.0082) (0.0703, 0.0106) 4.07% 6.52% TABLE 3 Free Default Discount Factors for the Next 5 Years [f.sub.1] [f.sub.2] [f.sub.3] [f.sub.4] [f.sub.5] (0.9585, (0.9190, (0.8770, (0.8315, (0.7858, 0.0030) 0.0040) 0.0059) 0.0093) 0.0128) TABLE 4 Fuzzy Pure Single Premiums for Several Kinds of Policies 35 Years 45 Years [[PI].sub. 1] (779.69, 12.72) (770.86, 12.58) [[PI].sub. 2] (6.76, 0.06) (16.50, 0.14) [[PI].sub. 3] (786.46, 12.78) (787.37, 12.72) 55 Years 65 Years [[PI].sub. 1] (752.42, 12.27) (711.67, 11.61) [[PI].sub. 2] (36.92, 0.31) (81.92, 0.69) [[PI].sub. 3] (789.34, 12.59) (793.59, 12.30) Table 5 Premiums for Several Payments of Losses Pair ([c.sub.1], (0.75, 0.25) (0.5, 0.5) (0.25, 0.75) Premium (1005.91, 53.54) (992.37, 52.96) (978.99, 52.35) TABLE 6 Accident Years Years 1 2 j 1 [Z.sub.1,1] [Z.sub.1,2] ... [Z.sub.1,j] 2 [Z.sub.2,2] ... [Z.sub.2,j] n - 1 Accident Years Years n - 1 n 1 ... [Z.sub.1,n-1] [Z.sub.1,n] 2 ... [Z.sub.2,n-1] [Z.sub.2,n] . . . . . . . . . . . . . . . . . . n - 1 [Z.sub.n-1,n-1] [Z.sub.n-1,n] n [Z.sub.n,n] TABLE 7 IBNR Triangle in Our Analysis Occurrence Year Develop- 1 2 3 4 5 6 ment Year (Y-6) (Y-5) (Y-4) (Y-3) (Y-2) (Y-1) TABLE 8 Determining IBNR Reserves by Nonfuzzy Methods Development Coefficients LCL with [b.sub.i] (A) Year of without Development [b.sub.i] (B) i [b.sub.i] [c.sub.i] [c.sub.i] 1 -1.619 1.702 1.690 2 61.398 1.336 1.592 3 46.844 1.233 1.359 4 158.854 0.927 1.228 Coefficients IBNR Reserves Year of Development Year of [R.sub.j] [R.sub.j] i Occurrence from (A) from (B) 1 Y-6 477.67 453.97 2 Y-5 384.62 353.18 3 Y-4 261.16 276.41 4 Y-3 118.68 125.14 5 Y-2/Y-1 0.00 0.00 R 1242.13 1208.70 TABLE 9 Determining IBNR Reserves With Fuzzy Regression Development Coefficients LCL with [b.sub.i] (A) Year of LCL without Development [b.sub.i] (B) i [b.sub.i] [c.sub.i] [c.sub.i] 1 (-12, 0) (1.771, 0.57) (1.689, 0.063) 2 (106.553, 0) (1.161, 0.11) (1.579, 0.12) 3 (-1.308, 0) (1.344, 0.12) (1.341, 0.12) 4 (158.85, 0) (0.927, 0) (1.226, 0.051) 5 (0, 0) (1, 0) (1, 0) Coefficients IBNR Reserves Year of Development Year of [R.sub.j] [R.sub.j] i Occurrence from (A) from (B) 1 Y-6 (476.13, 80.48) (439.62, 138.70) 2 Y-5 (385.49, 68.33) (339.72, 114.01) 3 Y-4 (259.12, 45.78) (256.66, 88.63) 4 Y-3 (118.68, 0) (123.93, 27.77) 5 Y-2/Y-1 (0, 0) (0, 0) R (1239.42, 194.59) (1168.93, 369.1)
{"url":"http://www.freepatentsonline.com/article/Journal-Risk-Insurance/111737963.html","timestamp":"2014-04-16T21:53:34Z","content_type":null,"content_length":"107155","record_id":"<urn:uuid:41bef16b-1e48-44e5-b9f5-be7812a0a312>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Historical notation question H. Enderton hbe at math.ucla.edu Mon Mar 1 15:02:00 EST 2004 Panu Raatikainen quoted Kleene as saying "that the metamathematical use of the notations Sigma^0_i and Pi^0_i to describe the levels in the arithmetical hierarchy were introduced by Addison 1958 and Mostowski I asked John Addison about that. The full notation (including the choice of those particular symbols) is indeed due to Addison. His 1954 dissertation dealt in part with the analogies between the Borel hierarchy and the arithmetical hierarchy. It was after writing it that he worked out his systematic notation for these and other hierarchies, now well known. Then in the spring of 1957 he was in Warsaw on an NSF postdoc. In e-mail Addison writes, "That spring I do recall presenting the case for the new notation very forcefully to Mostowski and eventually getting him to agree to use it." (Those of you who know John can imagine those conversations!) As I noted previously, Mostowski had been using P and Q. Addison formally introduced his systematic notation in a 1959 Fundamenta paper (JSL XXIX 60). The historical point is interesting. Its bearing on FoM is perhaps only that there are foundational issues involving these hierarchies, and the language -- and the notation -- we use to discuss them must be completely clear. --Herb Enderton More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2004-March/007960.html","timestamp":"2014-04-20T16:38:14Z","content_type":null,"content_length":"3647","record_id":"<urn:uuid:858c76f4-aa83-40db-b229-80fb5da6489f>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof That Girls Are Evil!!!! 1. Re: Proof That Girls Are Evil!!!! "headtrip" wrote: "BBQ Platypus" wrote: 3. The area of this region is calculated by âËâ€*«âËâ€*ž,1(dy/y), which has the same area as the integral calculated in step 2. The area above y=1 and to the right of x=1 are clearly infinity and not one. So âËâ€*ž + âËâ€*ž + 1 = âËâ€*ž Or we could conlude that âËâ€*ž just makes math more confusing. WRONG!!! As x->âËâ€*ž, 1/x->0. On the interval (1, âËâ€*ž) 0 < 1/x < 1. Because x->0 and 1/x <= 1 on that interval, the area of any finite integral of 1/x is always less than zero (think about it, then check it out on your calculator). To put it another way, what's zero times infinity? There are an infinite number of answers, all of which are finite numbers. You always need to go with the highest possible number, which, in this case, is one. "This is my timey-wimey detector. It goes ding when there's stuff." 2. Re: Proof That Girls Are Evil!!!! "BBQ Platypus" wrote: "headtrip" wrote: "BBQ Platypus" wrote: 3. The area of this region is calculated by âËâ€*«âËâ€*ž,1(dy/y), which has the same area as the integral calculated in step 2. The area above y=1 and to the right of x=1 are clearly infinity and not one. So âËâ€*ž + âËâ€*ž + 1 = âËâ€*ž Or we could conlude that âËâ€*ž just makes math more confusing. WRONG!!! As x->âËâ€*ž, 1/x->0. On the interval (1, âËâ€*ž) 0 < 1/x < 1. Because x->0 and 1/x <= 1 on that interval, the area of any finite integral of 1/x is always less than zero (think about it, then check it out on your calculator). To put it another way, what's zero times infinity? There are an infinite number of answers, all of which are finite numbers. You always need to go with the highest possible number, which, in this case, is one. 0 multiplied by any number is 0, wouldn't that make 0 x âËâ€*ž= 0? 3. Re: Proof That Girls Are Evil!!!! "WBLVikeBabe" wrote: Ohh and all you boys are like perfect little angesl?! Hahaha :roll: :twisted: Why yes. Yes we are. Thank you so much for noticing. 4. Re: Proof That Girls Are Evil!!!! "minvikes01" wrote: "BBQ Platypus" wrote: Proof #3 (for all you Calculus people) - âËâ€*ž = 3 It is a proven fact that âËâ€*«âËâ€*ž,0(dx/x)= âËâ€*ž. If we look at the graph of f(x)= 1/x, however, an interesting contradiction presents itself: 1. The area of the region bounded by the x and y axes, as well as by x=1 and y=1, is 1. 2. âËâ€*«âËâ€*ž,1(dx/x) = 1. This leaves only the area of the region on top of the square-shaped region in the corner to be determined. 3. The area of this region is calculated by âËâ€*«âËâ€*ž,1(dy/y), which has the same area as the integral calculated in step 2. 4. Therefore, the total area under the graph (which we all know is ALWAYS the same as the integral - cough, cough) is equal to the sum of the areas of these three regions, or 1 + 1 + 1 = 3. 5. Therefore, âËâ€*ž = 3. What do you mean you're not buying it? Okay, okay. How about a different conclusion: Therefore, graphs are worthless in determining imperfect integrals. The textbook that my calculus class used contained examples of imperfect integrals using graphs as visual aids. Therefore, the calculus textbook that my class used is worthless. Therefore, my calculus class was worthless. Worthless people should be stoned to death. Therefore, my calculus teacher should be stoned to death. hahaha, yeah, we did that in class a couple of weeks ago. What??? Stoned your calculus teacher??? :shock: BANNED OR DEAD...I'LL TAKE EITHER ONE Join Date Dec 1969 Re: Proof That Girls Are Evil!!!! Goods can be evil thats for sure. I've been burnt a time or two myself. 6. Re: Proof That Girls Are Evil!!!! Einstien eat your heart out. Join Date Dec 1969 Re: Proof That Girls Are Evil!!!! Here is proof that girls are evil, of my four closest friends, and myself, we have all been cheated on by our girlfriends at one time or another. We are good guys too, it just further proves that girls are evil, and want to see the demise of men. MC's run away when I kick it They act so chicken, they should come with a large drink and a biscuit 8. Re: Proof That Girls Are Evil!!!! "VikesfaninWis" wrote: "WBLVikeBabe" wrote: Ohh and all you boys are like perfect little angesl?! Hahaha :roll: :twisted: And the faster you realize that, the better off you will be.. :grin: I think you have that a little turned around. Any good man knows that their woman can do no wrong. :lol: 9. Re: Proof That Girls Are Evil!!!! oh well. i like girls better than the alternative. but thats just me. woo out just two corn cobs shy of a bushel 10. Re: Proof That Girls Are Evil!!!! "mr.woo" wrote: oh well. i like girls better than the alternative. but thats just me. evil girls are better than brokeback mountain
{"url":"http://www.purplepride.org/f5/proof-girls-evil-368330/index3.html","timestamp":"2014-04-16T05:14:17Z","content_type":null,"content_length":"97145","record_id":"<urn:uuid:8156e9b4-15b9-4e52-b8cc-46f2d652096b>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
Perpendicular dimension? (self.CasualMath) submitted ago by timesculptor sorry, this has been archived and can no longer be voted on I don't exactly know how to phrase this but i wanted to ask if there is some reason we map dimensions perpendicular to each other. I mean to say, take the first quadrant of the Cartesian Plane. Any function mapped strictly in that quadrant can be said to simply be mapped in a square who edges are defined as real number lines. This is tantamount to drawing the 'picture of the function' inside of a square. What I want to know is why do we graph functions this way? Why don't we, for instance, graph things with respect to the diagonal of the square so that as one edge increases we move up along the diagonal? What if we drew a circle and graphed the function in that? Wouldn't this get rid of things such as irrationals as the way we measure the circle could be rationalized (Instead of measuring the distance along the perimeter of the circle in terms of perpendicular real number lines, we make the edges of the circle itself a real number line and conveniently measure a distance rationally. This also does away with things like negative numbers doesn't it? Anyway sorry if this is the wrong sub /r/ to be posting this in (if so please don't berate me, I haven't even had my first cake day. Just redirect me to the appropriate subreddit). Anyway, thanks in advance! EDIT: thanks for the replies! I admit I made some hasty conclusions and three out some off the cuff thoughts. That aside my real reason for posting was to ask why we plot dimensions perpendicular to each other all 10 comments [–]6 points7 points8 points ago sorry, this has been archived and can no longer be voted on You should look into the concept of vector spaces. You're basically constructing an intuitive generalisation of "dimension". A vector space is the abstractisation of what you're describing, and encompasses many things. The most obvious and common vector is the vector of points on a graph - most people just assume this is what a vector actually is (an arrow in the plane/space) if they haven't had the chance to study linear algebra. A vector space is defined abstractly - you need a concept of addition, and scaling (multiplication by a constant, usually real), but the definitions of these operations can be weird and wacky, like some of your ideas. In practice, we usually only use boring examples in maths, like arrows (vectors) in the plane, and functions and polynomials with pointwise and coefficient wise addition It turns out that for every vector space you can find a basis, a small set of vectors which is enough to reconstruct the whole space (we say it spans the space). The number of vectors in that basis gives you the dimension of the space (the dimension can be infinite, or even very infinite - uncountable if you know what that means). For example, in the plane, if you consider only the (0,1) and (1,0) vectors on the x and y-axis, you can get every single vector in the plane by adding and scaling them. Hence these two vectors form a basis for the plane, and the plane is of dimension 2. However, any two vectors that are not on the same line (we say they are linearly independent) would also form a basis, like (1,8) and (1,9). Or (pi, pi), (-pi, pi). There are many, many possible bases for every vector space, and you could choose any one of them. Some of them are more useful than others, however. The reason that we choose an orthonormal basis, which is basically your question - why are the axes always perpendicular - is simply because it makes stuff easier. Orthonormal means that in addition to being orthogonal (perpendicular) the vectors are normalised, ie. have norm one (ignore this if you don't know what that means). Orthonormal bases have nice properties, linked to the scalar product. When you have more dimensions, it just turns out to be the most convenient basis to use for many things. There are however cases where there are reasons to choose a different basis - in chemistry for example, molecular structure in space is often modelled on non-Cartesian (non-orthonormal) bases, in order to follow the crystal structure more intuitively. A lot of what I've said has been fuzzing the difference between the underlying algebraic (abstract) vector space and the spatial interpretation, which is just one possible implementation of it (I say interpretation because it turns out finite dimensional vector spaces are all isomorphic, which means equivalent, (the same) in a very special sense. Things become a lot more interesting with infinite dimensions.). In order to sort everything out in mathematical rigour, you'll have to spend some time studying linear algebra. If you have any more questions, I'd love to answer them. [–]Ravinex3 points4 points5 points ago sorry, this has been archived and can no longer be voted on Negative numbers and irrational numbers exist whether or not you can plot them on a graph. The graphs are just useful tools for visualizing them. I think you're confusing the graph with the function. The graph is a representation of a function; a lot of the time, it's possible to draw the same "picture" by shifting coordinates. That doesn't mean that we've suddenly changed the function or how it behaves; we've just changed how it's graph looks. For instance, in ordinary perpendicular Cartesian coordinates, to draw the graph of a circle of radius 1 around 0, you need to use the (relatively) ugly equation 1 = x ^2 + y ^2. We, could, in fact, use polar coordinates where the equation would be the simple r = 1. The key thing to realize is that just because the graphs are the same, doesn't mean the underlying functions are at all similar. The graph is a tool used to study functions, not the other way around. [–]verxix1 point2 points3 points ago sorry, this has been archived and can no longer be voted on This is where linear algebra knowledge comes into play. The answer is that we just happen to prefer to use an orthonormal basis so we do, even though in theory we could use any spanning and linearly independent set of vectors. I'm not sure if this answer makes any sense given your math background, but you should look up the terms you're not familiar with because they're at the core of the issue you're curious about. [–]patrickwonders1 point2 points3 points ago sorry, this has been archived and can no longer be voted on The other answers in this thread are all good. I'm just going to add a bit of less formal reasoning. Suppose you're doing a scatter-plot of "height vs. weight" for everyone in your city. If you use perpendicular axises, it is easy to glance and see the weight range for a given height or a height-range for a given weight. Left-right and up-down are much more natural for us, having grown up in a world with horizons and trees. Similarly, there's no reason we should write horizontally (or vertically) on a page rather than diagonally except that it's easier for us. And, the fact that we write in horizontal lines that progress vertically down the page is easier. If the top-margin weren't perpendicular to the bottom margin, then it would be harder to see the structure of the paragraphs. [–]suspiciously_calm0 points1 point2 points ago sorry, this has been archived and can no longer be voted on [–]olhmr0 points1 point2 points ago sorry, this has been archived and can no longer be voted on I'm no expert on the subject, just wanted to add my thoughts to the previous comments. First of all, like the others have pointed out, a graph of a function is just a visual representation, a tool if you like, and should not be confused with the actual function itself. Thus we can't get rid of any mathematical concepts through the way we graph them. Secondly, and related to this, is that a significant reason why we graph out functions the way we do is because of clarity and ease of use. It just makes a lot of sense. There are various ways we can graph things, and they are used in various contexts and for various purposes, but the overall goal is the achieve a visual representation that is easy to understand and can be easily translated back and forth to the mathematics underlying it. An example of when there isn't a uniform and standard way to graph things is in three dimensional computer graphics (not exactly math, I know, but I find it to be a useful example). The x axis is pretty much always assumed to go from left to right across the screen, however, depending on what graphics engine you're using the y axis can go top down or down up, and the z axis can go either towards or away from the screen. Tl;dr It looks the way it looks because it's easy to understand and construct. [–]romwell1 point2 points3 points ago sorry, this has been archived and can no longer be voted on All that, and some more: • we do sometimes "draw a circle and plot the function in that", i.e. we graph in ploar coordinates; • indeed, a function (being defined a a subset of pairs (a, b) where a is an element of a set we call "domain", and b is an element of a set we call "range") doesn't have anything to do with axes, dimensions or orthogonality. However, the set R^n naturally has all those. It is convenient to visualize a function f: R^n -> R as a surface in R^n+1, but there are many other ways (i.e. using colors instead of the extra axis). • A Pie Chart is a heat map on a circle - i.e. you use a circle as your "x" axis and the color as your "y" axis, and you plot a discrete-valued function (values could be, for example, {1, 2, 3, 4}, which correspond to some colors). • Expanding on the question "Why do we plot dimensions perpendicular to each other": because it's easier. It's easy to make graph paper, it's easy to measure distances on it. Once you have a graph, it's easy to see which point on the graph corresponds to a given value of the argument (the one right above it - easier than "the one along the line at a 30-degree angle). In fact, it is very natural to say that the argument corresponding to a value should be the point on the x-axis closest to the point on the graph - and that means perpendicular axes. Approximating the area under the graph is easier, too, and it's pretty important. • To reiterate, the coordinate axis don't have to be perpendicular straight lines. They just happen to be, for many reasons, the most convenient ones on the Euclidean plane. However, we don't always have this luxury. For instance the actual coordinates on our planet Earth - latitude and longitude - give coordinate axes which are not straight lines in any meaningful sense. That is, if you are on a ship, you'd have to be constantly turning to be going along a fixed latitude, unless you're at the equator! So, almost all latitude lines are "bent" (but the meridian lines are, indeed, "straight"). • If I give you the longitude of an airplane going from NYC to Tokyo as a a function of latitude, you can still plot it on the plane with perpendicular axes. However, you will get incorrect length of the path if you measure it on such a graph (by the way, that's called a "cylindrical projection"). However, you can plot it straight up on a globe - with the "bent" axes - you'll get everything right. • In general, "change of coordinates" is what you will eventually learn about. It tells you how to do precisely what you ask - plot in different coordinates. [–]ingannilo0 points1 point2 points ago sorry, this has been archived and can no longer be voted on You're hinting at some pretty interesting ideas. I'll start with the obvious • You can use coordinates systems other than the traditional rectangular "x-y" that we meet in middle school. In fact, you can actually use a single number to determine location in the plane (via space filling curves). You could also use any two straight lines that aren't paralell as the basis for your coordinate system, with relative ease. The nice thing about using a basis which is orthogonal, is that it is easy to decompose well studied objects from geometry in terms of orthogonal vectors. • Getting rid of irrationals. There are schools of thought (see finitism) where the ideas associated with Georg Cantor and his study of infinite sets are still contested. Most math-folk don't bend this way. The idea of "getting rid of irrationals" by using a new coordinate system has a problem, and a rough intuitive question can highlight the problem: "How many points are there in a line segment?" The subtleties are important at this level, and without some background, there's not much we can say. Essentially, no bijection can be formed between two sets of strictly different size, and a "whole line segment" has way more points than any subset of numbers which does not include the irrationals. I've bolded words that you may want to google. Hope this helps. Study on, dude!
{"url":"http://www.reddit.com/r/CasualMath/comments/14gvpr/perpendicular_dimension/","timestamp":"2014-04-20T11:57:05Z","content_type":null,"content_length":"82948","record_id":"<urn:uuid:565c6a17-d47e-4819-9092-7293398f72c6>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: 3 Problems in Panel Data Analysis [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: 3 Problems in Panel Data Analysis From "Fardad Zand" <fardad.zand@gmail.com> To statalist@hsphsun2.harvard.edu Subject Re: st: 3 Problems in Panel Data Analysis Date Tue, 14 Oct 2008 10:41:49 +0200 Hi Nils, Indeed, thank you so much for your insightful, careful and complete answers. Just to clarify some remaining issues, I post some complementary questions, if you can answer them. I really appreciate your time. 1- concerning point 1: do you know an other test in place of Hausman test? Is there any formal way to test for the conditions of RE (i.e. correlation between unobserved heterogeneity and the variables of interest)? How can tell something justifiable about this correlation? Only based on theoretical arguments or is there any test whatsoever for this purpose? As an alternative you suggest using IV. But you suggest that the IV should not be correltated with the outcome. I think you meant the other way around. Right? Nevertheless, in my case there is no meaningful, relevant IV available; so, this approach is out of 2- to be honest, I didn't exactly get your point. Sorry for my limited econometric knowledge. What I know is that if my error terms are heteroskedasticit, then the estimates will be biased. As a remedy, robust coefficients should be estimated. Is there any other way to deal with the problem? Could you explain what you meant in your My specific problem is that -xttobit- in contrast to -xtreg- doesn't have any robust options in Stata. How would you recommend me to reduce the unwanted effects of heteroskedasticity? 3- You refer to RHS variables in your answer. Do you mean variables of interest in the set of explanatory variables? With respect to your suggestion, do you think SYS-GMM will resolve the problems of both simultaneity and unobserved heterogeneity in my sample? what are the commands to use first-difference and lagged independent variables at the same time in Stata, if any? To be specific, how would compare xtabond, xtabond2 and xtdpdsys with each others? Which one would you compare? What are the required conditions to be able to safely use these methods? 4- As to my still remaining question, in a panel data setting, what pre- and post-tests do you recommend in general to check for the underling conditions and assumptions? What can one do to increase the reliability and validly of the results? I'm really thankful to your support and will definitely aknowledge that if my efforts results in any publications. That's the minmum to compensate your time..... My kind regards from Holland, On Tue, Oct 7, 2008 at 4:08 PM, Nils Braakmann <nilsbraakmann@googlemail.com> wrote: > Hello Fardad, > some more comments in addition to Martin's: > > 1) FE, RE, or BE? > > ***What should I do? What is the valid approach to pursue? How should > > I justify using RE or BE? Is there any alternative tests or methods > > that can be used? What specific conditions should I check (and how?) > > to be sure about using RE for my estimations? > Essentially it depends on whether you believe that you have unobserved > heterogeneity that is correlated with your variables of interest. If > that is the case BE and RE will be inconsistent and you should rely on > the FE estimates (your Hausman tests seem to suggest that). The > dropped variables are most likely time-constant within firms so there > is no (within) variance that could be used for estimation. Similarly, > the insignificance of the remaining variables you refer to is most > likely caused by too few variation within firms so these effects are > estimated poorly. There is in fact no simple solution to that. Some > things that come to my mind: You could either (a) use BE or RE (or > pooled OLS) (with a lot of control variables to control for as much of > the unobservables as possible) and acknowledge that your results may > in fact be caused by unoberserved heterogeneity rather than by your > variable of interest (and, if possible, include a statement in your > paper why you do not believe that unobserved heterogeneity is a > problem in this estimation or provide some explanation on the likely > direction of the bias) or (b) you could try to find some outside > instruments that are uncorrelated with the outcome and your unobserved > heterogeneity but correlated with your variables of interest and apply > some sort of instrumental variable estimator. > > 2) Robust standard errors? > > > > ***What would you suggest? How would you correct for > > heteroskedasticity? Is there any other important characteristics that > > I need to check before I can be sure about the validity and > > reliability of my results? What pre- or post-tests do you suggest? > Stata now provided clustered standard errors (on the panel id > variable) when you request the usual robust errors as the latter are > inconsistent in a panel context (see Stock, James H. und Mark Watson, > 2008: "Heteroskedasticity-Robust Standard > Errors for Fixed Effects Panel Data Regression", Econometrica 76(1): > 155-174). You should use these (for a discussion of standard errors in > a panel context see e.g. chapter 21.2.3 and the example in chapter > 21.3.2 in Cameron, A. Colin and Prvain K. Trivedi, 2005 > "Microeconometrics - Methods and Applications", Cambridge University > Press). > >3) SYS-GMM method? > >***How can I successfully implement this method in Stata? Is there any > >alternatives that you would suggest? In general, how would you correct > >for simultaneity problem, if you don't have access to good > >instruments? > System GMM is implemented in -xtabond2- by David Roodman who has also > written two(?) "pedagogical" papers on the practical implementation > (available on the web, don't have the links right now). Stata also has > several commands: -xtabond- and from 10.0 onwards -xtdpdsys- and > -xtdpd-. For your general question: In a panel context you might want > to consider using first differences to get rid of the unobserved > heterogeneity and then use lags as instruments to get rid of any > remaining (contemporaneous) correlation between your RHS variables and > the error. However, this does not solve your problem of too few within > variation... > Hope this helped. > Best regards, > Nils * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2008-10/msg00725.html","timestamp":"2014-04-20T15:56:31Z","content_type":null,"content_length":"12102","record_id":"<urn:uuid:1c4f622a-0d10-4acb-bf5d-5770e7ff6f52>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
invariant theory The topic invariant theory is discussed in the following articles: • ...Cayley’s study of various properties of forms that are unchanged (invariant) under some transformation, such as rotating or translating the coordinate axes, established a branch of algebra known as invariant theory. • Turnbull’s work on invariant theory built on the symbolic methods of the German mathematicians Rudolf Clebsch (1833-1872) and Paul Gordan (1837-1912). His major works include The Theory of Determinants, Matrices, and Invariants (1928), The Great Mathematicians (1929), Theory of Equations (1939), The Mathematical Discoveries of Newton (1945),...
{"url":"http://www.britannica.com/print/topic/921028","timestamp":"2014-04-16T19:38:06Z","content_type":null,"content_length":"7116","record_id":"<urn:uuid:612c6f46-11ec-4b42-9563-2c002aae2591>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
Stationary Mode Distribution and Sidewall Roughness Effects in Overmoded Optical Waveguides In this paper, the authors investigate analytically the transformation from the initial guided mode distribution to the stationary state and the effects of the bidimensional roughness profile, in multimode polymeric buried waveguides. In these structures, due to the geometrical dimensions and the operating wavelength, about a thousands of guided modes can propagate, even for weak core/ cladding dielectric contrast. The coupling coefficients are computed by exploiting the geometrical features of the optical channels, such as the waveguide dimensions and the roughness surface statistics. The analysis gives insight on the guided/guided and guided/radiated mode interaction, and higher order solution is proposed, in the case of a great number of modes interacting over distances that are extremely long as compared to the signal wavelength and the roughness correlation length. Experimental results are valuated by means of semicontact atomic force microscopy as well as compared with existing numerical models. © 2010 IEEE Andrea Di Donato, Marco Farina, Davide Mencarelli, Agnese Lucesoli, Silvia Fabiani, Tullio Rozzi, Giordano M. Di Gregorio, and Giacomo Angeloni, "Stationary Mode Distribution and Sidewall Roughness Effects in Overmoded Optical Waveguides," J. Lightwave Technol. 28, 1510-1520 (2010) Sort: Year | Journal | Reset 1. K. B. Yoon, C. In-Kui, A. Seung-Ho, "10-Gb/s data transmission experiments over polymeric waveguides with 850-nm wavelength multimode VCSEL array," IEEE Photon. Technol. Lett. 16, 2147-2149 2. C. H. Seo, R. B. Sup, K. Saekyoung, C. Mu Hee, P. Hyo-Hoon, S. Kyoung-Up, H. Sang-Won, R. Byoung-Ho, K. Dong-Su, J. S. Tea, K. Taeil, "Demonstration of 2.5 Gb/s optical interconnection using 45/ spl deg/-ended connection blocks in fiber- and waveguide-embedded PCBs," Proc. 54th Conf. Electron. Compon. Technol. (2004) pp. 1547-1551. 3. C. Choi, L. Lin, Y. Liu, J. Choi, L. Wang, D. Haas, J. Magera, R. T. Chen, "Flexible optical waveguide film fabrications and optoelectronic devices integration for fully embedded board-level optical interconnects," J. Lightw. Technol. 22, 2168-2176 (2004). 4. J. V. DeGroot, S. O. Glover, M. J. Dyer, W. K. Bischel, "Polymeric optical interconnect for chip-to-chip communication," Technical Digest OFC/NFOEC AnaheimCA5 (2005). 5. L. El-Hang, L. Seung Gol, O. Beom Hoan, P. Se Geun, "Optical printed circuit board (O-PCB): A new platform toward VLSI microphotonics?," Proc. Dig. LEOS Summer Top. Meetings Biophoton./Opt. Interconnects VLSI Photon./WBM Microcavities (2004) pp. 2. 6. D. Marcuse, "Mode conversion caused by surface imperfections of a dielectric slab waveguide," Bell Syst. Tech. J. 48, 3187-3215 (1969). 7. D. Marcuse, "Power distribution and radiation losses in multimode dielectric waveguides," Bell Syst. Tech. J. 51, 429-454 (1972). 8. D. W. Kim, D. M. Lim, M. H. Cho, M. Kang, H. H. Park, "Ray trace analysis in optical PCB systems using micro-optic connectors for 90$^{\circ}$ deflection of light beams," Proc. 9th Int. Conf. Adv. Commun. Technol. (2007) pp. 1402-1406. 9. I. Papakonstantinou, K. Wang, D. R. Selviah, F. A. Fernández, "Transition, radiation and propagation loss in polymer multimode waveguide," Opt. Exp. 15, 669-679 (2007). 10. T. Rozzi, G. Cerri, F. Chiaraluce, R. De Leo, R. F. Ormondroyd, "Finite curvature and corrugations in dielectric ridge waveguide," IEEE Trans. Microw. Theory Tech. MTT-36, 68-79 (1988). 11. C. G. Poulton, C. Koos, M. Fujii, A. Pfrang, T. Schimmel, J. Leuthold, W. Freude, "Radiation modes and roughness loss in high index—contrast waveguides," IEEE Sel. Top. Quantum Electron 12, 1306-1321 (2006). 12. E. Griese, "Modeling of highly multimode waveguides for time-domain simulation," IEEE Sel. Top. Quantum Electron 9, 433-442 (2003). 13. D. Lenz, D. Erni, W. Bächtold, "Modal power loss coefficients for highly overmoded rectangular dielectric waveguides based on free space modes," Opt. Exp. 12, 1150-1156 (2004). 14. D. L. Lee, Electromagnetic Principles of Integrated Optics (Wiley, 1986). 15. T. Rozzi, M. Mongiardo, Open Electromagnetic Waveguides (Electromagnetic Wave Series) (Institution of Electrical Engineers, 1997). 16. E. S. Ventsel, Teoria Delle Probabilità (Mir, 1983). 17. V. I. Smirnov, A Course of Higher Mathematics (Pergamon/Oxford, 1964). 18. T. Bierhoff, T. Bierhoff, Y. Sonmez, J. Schrage, A. Himmler, E. Griese, G. Mrozynski, "Influence of the cross-sectional shape of board-integrated optical waveguides on the propagation characteristics," Proc. IEEE Proc. (2002) pp. 47-50. 19. K. Halbe, E. Griese, "A modal approach to model integrated optical waveguides with rough core-cladding interfaces," Proc. IEEE Workshop Signal Propag. Interconnects (2006) pp. 133-136. 20. D. Marcuse, Theory of Dielectric Optical Waveguides (Academic, 1974). 21. I. Papakonstantinou, R. James, D. R. Selviah, "Radiation and bound mode propagation in rectangular multimode dielectric channel waveguides with sidewall roughness," J. Lightw. Technol. 27, 4151-4163 (2009). OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies. In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed. « Previous Article | Next Article »
{"url":"http://www.opticsinfobase.org/jlt/abstract.cfm?uri=jlt-28-10-1510","timestamp":"2014-04-21T00:59:02Z","content_type":null,"content_length":"73707","record_id":"<urn:uuid:4f44d15d-2196-4a1f-a605-37998ece7f88>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
Waverley Calculus Tutor ...I also have lots of hands-on experience with circuits; I have designed circuits for school projects and for my professional work, built my own loudspeaker crossovers, and upgraded the wiring in my vintage car. I occasionally tutor introductory electrical engineering courses; when I do, students ... 8 Subjects: including calculus, physics, SAT math, differential equations ...I also periodically helped some of the other people in the class. I found the material very intuitive and still remember almost all of it. I've also performed very well in several math competitions in which the problems were primarily of a combinatorial/discrete variety. 14 Subjects: including calculus, geometry, GRE, algebra 1 ...Do you want to get the most from your classes? Is something holding you back from doing your best? Would you like to ace that entrance exam? 34 Subjects: including calculus, reading, English, geometry ...I have the philosophy that anything can be understood if it is explained correctly. Teachers and professors can get caught up using too much jargon which can confuse students. I find real life examples and a crystal clear explanation are crucial for success. 19 Subjects: including calculus, chemistry, Spanish, public speaking ...My primary area of specialty is math - I majored in Applied Mathematics at Harvard, I scored a 5 on the Calculus BC Advanced Placement Exam, and a perfect score on the SAT II Mathematics test. I also taught everything from pre-algebra through pre-calculus when I was a full time teacher at an eli... 29 Subjects: including calculus, reading, geometry, GED
{"url":"http://www.purplemath.com/Waverley_Calculus_tutors.php","timestamp":"2014-04-16T16:37:39Z","content_type":null,"content_length":"23786","record_id":"<urn:uuid:87b0c90b-bd1a-401c-9bda-8d121f6a4276>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
Downey, CA Algebra 2 Tutor Find a Downey, CA Algebra 2 Tutor ...I make sure that they understand their materials thoroughly before I move on to different topic/subject. I myself am a highly motivated and goal oriented person. I am a passionate and considerate tutor. 23 Subjects: including algebra 2, reading, English, statistics I am an experienced tutor in math and science subjects. I have an undergraduate and a graduate degree in electrical engineering and have tutored many students before. I am patient and will always work with students to overcome obstacles that they might have. 37 Subjects: including algebra 2, chemistry, statistics, English ...If interested, my coursework includes the following: Calculus, Multivariable Calculus, Linear Algebra, Differential Equations, Partial Differential Equations, Analysis on the Real Line, Analysis in n-Space, Complex Analysis, Abstract Algebra (Groups, Rings, further Theoretical Linear Algebra), Ge... 20 Subjects: including algebra 2, chemistry, reading, calculus ...I took a course titled Genetics and Evolution. Among other topics, this course covered Mendelian and non-Mendelian inheritance, quantitative genetics, genetic mapping, evidence for evolution, natural selection, genetic drift, kin selection, speciation, molecular evolution, phylogenetic analysis,... 27 Subjects: including algebra 2, chemistry, Spanish, physics I have had a history with tutoring, from kids in elementary to middle schoolers. My passion is math since I went to school and majored in engineering, I know my way around numbers. My other interests also involve molecular biology and American history. 12 Subjects: including algebra 2, Spanish, geometry, biology
{"url":"http://www.purplemath.com/Downey_CA_algebra_2_tutors.php","timestamp":"2014-04-21T15:10:57Z","content_type":null,"content_length":"23851","record_id":"<urn:uuid:6fc5f3d8-754a-47bc-8dd5-aab4dc769847>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
Harwood Heights Algebra 2 Tutor Find a Harwood Heights Algebra 2 Tutor ...I can also help students who are preparing for the math portion of the SAT or ACT. When teaching lessons, I put the material into a context that the student can understand. My goal is to help all of my students obtain a solid conceptual understanding of the subject they are studying, which provides a foundation to build upon. 12 Subjects: including algebra 2, calculus, geometry, algebra 1 ...I've been tutoring test prep for 15 years, and I have a lot of experience helping students get the score they need on the ACT. I've helped students push past the 30 mark, or just bring up one part of their score to push up their overall score. In the past 5 years, I've written proprietary guides on ACT strategy for local companies. 24 Subjects: including algebra 2, calculus, physics, GRE ...I have definitely had success in helping students to acquire these skills. The SAT writing test is unusual because it tests students rhetorical skills considerably more than the ACT writing (English) test does. Rhetorical questions can be tricky, and even subjective. 20 Subjects: including algebra 2, reading, English, writing ...As the only English teacher at a small private school, I was frequently asked to help proofread and rephrase webpages, business communication, and peers' papers. When reviewing student assignments, I consistently and thoroughly indicate any problems in content, organization and style, but my mar... 17 Subjects: including algebra 2, reading, writing, English ...I also wish to share this valuable sense I developed at work: of what's extremely important, of what's important, of what's somewhat important, and what's not important. My background education in terms of mathematics ranges from the secondary basics in high school (algebra, trigonometry, precal... 7 Subjects: including algebra 2, geometry, algebra 1, precalculus
{"url":"http://www.purplemath.com/harwood_heights_algebra_2_tutors.php","timestamp":"2014-04-19T15:03:09Z","content_type":null,"content_length":"24349","record_id":"<urn:uuid:2ccdb1d0-fb7c-4ffa-aad5-48b4e2ec11d0>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
Simulator of arbitrary fixed and infinite precision binary floating point Simulator of arbitrary fixed and infinite precision binary floating point Rob Clewley rob.clewley at gmail.com Mon Aug 11 06:07:08 CEST 2008 Dear list, I have written a module to simulate the machine representation of binary floating point numbers and their arithmetic. Values can be of arbitrary fixed precision or infinite precision, along the same lines as python's in-built decimal class. The code can be found here: The design is loosely based on that decimal module, and the primary intended use of this module is educational. You can play with different IEEE 754 representations with different precisions and rounding modes, and compare with infinite precision Binary numbers. For instance, it is easy to explore machine epsilon, representation/rounding error using a much simpler format such as a 4-bit exponent and 6-bit The usual arithmetic operations are permitted on these objects, as well as representations of their values in decimal or binary form. Default contexts for half, single, double, and quadruple IEEE 754 precision floats are provided. Binary integer classes are also provided, and some other utility functions for converting between decimal and binary string representations. The module is compatible with the numpy float classes and requires numpy to be installed. The source code is released under the BSD license, but I am amenable to other licensing ideas if there is interest in adapting the code for some other purpose. Full details of the functionality and known issues are in the module's docstring, and many examples of usage are in the accompanying file test_binary.py (which also acts to validate the common representations against the built-in floating point types). I look forward to hearing feedback, especially in case of bugs or suggestions for improvements. Robert H. Clewley, Ph. D. Assistant Professor Department of Mathematics and Statistics Georgia State University 720 COE, 30 Pryor St Atlanta, GA 30303, USA tel: 404-413-6420 fax: 404-413-6403 <P><A HREF="http://www2.gsu.edu/~matrhc/binary.html">Binary 0.1</A> - Simulator of arbitrary fixed and infinite precision binary floating point. (12-Aug-08) More information about the Python-announce-list mailing list
{"url":"https://mail.python.org/pipermail/python-announce-list/2008-August/006799.html","timestamp":"2014-04-16T11:43:13Z","content_type":null,"content_length":"5152","record_id":"<urn:uuid:0189a221-5f97-4572-8c75-13c396e0a918>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
Heisenberg's uncertainty relation not invariant ? I'm not sure whether this has been discussed before. If one just looks at Heisenberg's uncertainty relation (energy-time), one easily sees that this is not Lorentz invariant. Even very simple results, such as the energy of a particle in a potential well seems not to transform according to the Lorentz transformation. How can it be that such simple and basic results from quantum physics are in such striking contradiction with special relativity ? I think there was a short thread on this a while back. The short answer is that quantum mechanics, in the form of quantum field theory, is fully covariant, i.e. compatible with special relativity, and this includes the Heisenbereg relationships. The interesting thing is that there are some issues with quantum mechanics as a single particle theory that disappear when you go to the mulitple particle theory of QFT. I'm not sure exactly where this is discussed, the Wikipedia has a short discussion of some of these aspects, which isn't totally satisfactory and unfortunatley doesn't cite it's sources. The second problem arises when trying to reconcile the Schrödinger equation with special relativity. It is possible to modify the Schrödinger equation to include the rest energy of a particle, resulting in the Klein-Gordon equation or the Dirac equation. However, these equations have many unsatisfactory qualities; for instance, they possess energy eigenvalues which extend to –∞, so that there seems to be no easy definition of a ground state. Such inconsistencies occur because these equations neglect the possibility of dynamically creating or destroying particles, which is a crucial aspect of relativity. Einstein's famous mass-energy relation predicts that sufficiently massive particles can decay into several lighter particles, and sufficiently energetic particles can combine to form massive particles. For example, an electron and a positron can annihilate each other to create photons. Such processes must be accounted for in a truly relativistic quantum theory. This problem brings to the fore the notion that a consistent relativistic quantum theory, even of a single particle, must be a many particle theory. However, this isn't really a "quantum mechanics is incompatible with SR" sort of deal. It should be understood that in the form of QFT , quantum mechanics is fully covariant, i.e. fully compatible with special relativity.
{"url":"http://www.physicsforums.com/showthread.php?t=147160","timestamp":"2014-04-17T04:00:16Z","content_type":null,"content_length":"28855","record_id":"<urn:uuid:5f37003c-4821-4c29-98a1-003f975b3f1a>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry Terms Get down with the lingo Alternate Exterior Angles The pair of angles on the outside of the two lines cut by the transversal and on alternate sides of the transversal. Alternate exterior angles are congruent if and only if the two lines crossed by the transversal are parallel. Alternate Interior Angles The pair of angles in between the two lines cut by the transversal and on alternate sides of the transversal. Alternate interior angles are congruent if and only if the two lines crossed by the transversal are parallel. Consecutive Exterior Angles The pair of angles on the outside of the two lines cut by the transversal and on the same side of the transversal. They're also called same-side exterior angles, for obvious reasons. Consecutive exterior angles are supplementary if and only if the two lines crossed by the transversal are parallel. Consecutive Interior Angles The pair of angles in between the two lines cut by the transversal and on the same side of the transversal. They're also called same-side interior angles, for obvious reasons. Consecutive interior angles are supplementary if and only if the two lines crossed by the transversal are parallel. Corresponding Angles Two angles that are in the same relative place compared to each of the two lines and the transversal that cuts them. Corresponding angles are congruent if and only if the two lines crossed by the transversal are parallel. Two lines that are on the same plane but never intersect. They're always in sight, but never touch…sort of sad, ain't it? Parallel Postulate Euclid's fifth postulate, and it's quite a tricky one compared to the previous four. It states that if two lines are crossed by a third and both angles on the interior and same side add up to less than the sum of two right angles, the two original lines will eventually intersect on that side. What a mouthful, Euclid. Sheesh. Two lines that intersect at exactly 90°, forming four right angles. A closed two-dimensional shape that is made of only straight line segments. No curves allowed. Sorry, Beyoncé. Regular Polygon A shape whose sides are all equal in length and whose angles are all equal in measure. A line that intersects two other lines, forming a total of eight angles. If the other two lines are parallel (and they usually are), then all these angles are special in some way.
{"url":"http://www.shmoop.com/parallel-perpendicular-lines/terms.html","timestamp":"2014-04-16T18:58:22Z","content_type":null,"content_length":"26266","record_id":"<urn:uuid:b60f6184-87b8-4423-98de-8515126f3ec6>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
array and number I need help finding numbers within an interval. I have to follow by this function and im lost as to what to do next. int nrVal = findNrGPA(double array[], int length, double minlimit, double maxlimit) ___while (int i=0; i++; array[] < maxlimit) _______if (minlimit > 1) note the interval is from a gpa of 1.0 to 4.0 assuming the array has already been sorted if someone can help explain step by step that would be much appreciated Last edited on Not sure what you're trying to do. Is this an array of grades, and you're trying to calculate a GPA? Your code has quite a bit of syntax errors, but the logic could be something like this - Pass to the function all the arguments it needs: an array of grades, an array where to store the grades within the interval, the lenght of the array, the lower bound and the upper bound of the interval (there is also another way to do this but it involves dynamic memory allocation, which is a bit harder) - Check every element of the grades array. If they are within the interval copy them to the valid grades array the array is declared in the int main() the array has randomized grades with a size of 10 values // findNrGPARange = find the number of GPA values in the array // that are between a range of values i'm guessing i have to create a range for the values to be found in the range has to be within 1.0 and 4.0 i hope this helps I still have no idea what you're doing. If you're looking for GPA of all values in the array, just iterate through adding them all together, then divide by 4. im telling the array that i need to find certain values within a set range of the array. So i have to create a range for the function to search into to. basically i'll search for a value in the array but i need to create a range for my search function to search into. this range has to be from 1.0 to 4.0. im lost on how to create this range of the array If it's sorted, all you have to pull the numbers from the beginning to the first number > 4.0. Assuming it's sorted lowest to highest. If not, just do the reverse of that. Nothing fancy needs to be If it's not sorted, just iterate through the array and check each number to see if it falls within that range (compound conditional). Topic archived. No new replies allowed.
{"url":"http://www.cplusplus.com/forum/beginner/81418/","timestamp":"2014-04-18T13:27:47Z","content_type":null,"content_length":"12778","record_id":"<urn:uuid:836642e1-dcdc-4deb-ba6e-9683d7b3685a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
4.17 Equation Formatting Unfortunately there no HTML elements for formatting equations. HTML 3 proposed a rudimentary mathematical markup language, but this approach was dropped several years ago -- it is only supported by the experimental Arena browser, and even there to only a limited degree. There is no longer any effort at adding mathematics support to HTML. Instead, the W3C has developed a language called MathML designed specifically for representing mathematical expressions. This language is defined in detail on the World Wide Web Consortium Web site, at http://www.w3.org/Math/. Note that there are a few browsers and browser plugins that can display MathML documents -- links to these are found at the page listed above. A useful option for the mathematically inclined is to use LaTex2HTML to convert LaTex files (containing equations) to gif files, and to then use "giftrans" (see the the above URL) which can convert GIF's into transparent GIF89a format. These can be included within the document as inlined images using the IMG tag. This is far from ideal, but is useful for those who are familiar with TeX/LaTeX. If you want to include greek symbols and simple equations, but don't want the bother of using Latex and latex2html (yes, it can be a real bother!), then have a look at Karen Strom's collection of GIF images. They are really very good, and suprisingly easy to use You will find more information at http://donald.phast.umass.edu/kicons/greek.html.
{"url":"http://www.utoronto.ca/webdocs/HTMLdocs/NewHTML/equations.html","timestamp":"2014-04-17T03:48:51Z","content_type":null,"content_length":"3783","record_id":"<urn:uuid:84b96e7d-fa61-40f4-93cd-7eed3b79d966>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Feasible schedules for rotating transmissions Noga Alon Motivated by a scheduling problem that arises in the study of optical networks we prove the following result, which is a variation of a conjecture of Haxell, Wilfong and Winkler. Let k, n be two integers, let wsj, 1 s n, 1 j k be non-negative reals satisfying j=1 wsj < 1/n for every 1 s n and let dsj be arbitrary non-negative reals. Then there are real numbers x1, x2, . . . , xn so that for every j, 1 j k, the n cyclic closed intervals s = [xs + dsj, xs + dsj + wsj], (1 s n), where the endpoints are reduced modulo 1, are pairwise disjoint on the unit circle. The proof is based on some properties of multivariate polynomials and on the validity of the Dyson Conjecture. 1 Introduction Motivated by the study of information transmission in optical networks, the authors of [3] consid- ered several variants of the following problem. Given n transmitters T1, T2 . . . , Tn and k receivers R1, R2, . . . , Rk, our objective is to design a rotating schedule that will enable the transmitters to transmit information to the receivers. We scale time so that the total length of the period in our
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/479/1705809.html","timestamp":"2014-04-20T01:35:28Z","content_type":null,"content_length":"8333","record_id":"<urn:uuid:502135c6-0350-448a-9314-039341837310>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
Conic Sections 1. Anticipatory Set: In groups, have students slice a cone into the four different conic sections. 2. Power Point presentation: Discuss include real life applications of conics, pictures, and definitions. 3. Folding Conic Sections activity: Have students read directions to create four conic sections by folding wax paper. 4. Casio Graphing Calculator Exploration: Give students the four worksheets (Parabolas, Circles, Ellipses, and Hyperbolas) and a graphing calculator to guide them through their exploration. First, give students a brief overview of the graphing calculator. Then have students complete the four worksheets. 5. Discussion: Ask students what they found about the different components of the equations and how each component affected the graphs. Show students one example of each from the Power Point and have students compare their answers. 6. Conclusions: Review the four conics, their graphs and equations. 7. Flashlight Demonstration: Use a flashlight to create the four different conics by changing the position and direction of the flashlight.
{"url":"http://www.mathinscience.info/public/conic_sections/conic_sections.htm","timestamp":"2014-04-16T07:13:29Z","content_type":null,"content_length":"27106","record_id":"<urn:uuid:33938799-6180-4345-8307-90a51d4a6f0d>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
Taniyama-Shimura theorem Taniyama-Shimura theorem establishes an important connection between elliptic curves, which are objects from algebraic geometry , and modular forms , which are certain periodic holomorphic functions investigated in number theory If p is a prime number and E is an elliptic curve over Q, we can reduce the equation defining E modulo p; for all but finitely many values of p we will get an elliptic curve over the finite field F[p ], with n[p] elements , say. One then considers the sequence a[p] = n[p] - p, which is an important invariant of the elliptic curve E. Every modular form also gives rise to a sequence of numbers, by Fourier transform. An elliptic curve whose sequence agrees with that from a modular form is called modular. The Taniyama-Shimura theorem states: "All elliptic curves over Q are modular." was first conjectured by Yutaka Taniyama in September . With Goro Shimura he improved its rigor until . Taniyama died in . In the it became associated with the Langlands program of unifying conjectures in mathematics, and was a key component thereof. The conjecture was picked up and promoted by André Weil in the , and Weil's name was associated with it in some quarters. Despite the interest, some considered it beyond proving. It attracted considerable interest in the 1980s when Gerhard Frey suggested that the Taniyama-Shimura conjecture (as it was then called) implies Fermat's last theorem. He did this by attempting to show that any counterexample to Fermat's last theorem would give rise to a non-modular elliptic curve. Kenneth Ribet later proved this result. In 1995, Andrew Wiles and Richard Taylor proved a special case of the Taniyama-Shimura theorem (the case of semistable elliptic curves) which was strong enough to yield a proof of Fermat's Last Theorem. The full Taniyama-Shimura theorem was finally proved in 1999 by Breuil, Conrad, Diamond, and Taylor who, building on Wiles' work, incrementally chipped away at the remaining cases until the full result was proved. Several theorems in number theory similar to Fermat's last theorem follow from the Taniyama-Shimura theorem. For example: no cube can be written as a sum of two relatively prime n-th powers, n ≥ 3. (The case n = 3 was already known by Euler.) In March 1996 Wiles shared the Wolf Prize with Robert Langlands. Although neither of them had originated nor finished the proof of the full theorem that had enabled their achievements, they were recognized as having had the decisive influences that led to its finally being proven. • Henri Darmon: A Proof of the Full Shimura-Taniyama-Weil Conjecture Is Announced, Notices of the American Mathematical Society, Vol. 46 (1999), No. 11. Contains a gentle introduction to the theorem and an outline of the proof. • Brian Conrad, Fred Diamond, Richard Taylor: Modularity of certain potentially Barsotti-Tate Galois representations, Journal of the American Mathematical Society 12 (1999), pp. 521–567. Contains the proof. Some notation Q denotes the field of rational numbers. F[p] is also called a Galois field.
{"url":"http://www.fact-index.com/t/ta/taniyama_shimura_theorem.html","timestamp":"2014-04-17T12:29:25Z","content_type":null,"content_length":"7785","record_id":"<urn:uuid:59314259-5680-44c6-9ac3-db304e5ec274>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: how do you graph this function?please show work! g(x)=5|x-4|+2 Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4ee181e3e4b0a50f5c55e3e0","timestamp":"2014-04-16T10:20:20Z","content_type":null,"content_length":"80978","record_id":"<urn:uuid:4548b1f0-f700-4857-9190-38b95375f820>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
Quick reference 5.1: Quick reference Created by: CK-12 SI Units The watt. This SI unit is named after James Watt. As for all SI units whose names are derived from the proper name of a person, the first letter of its symbol is uppercase (W). But when an SI unit is spelled out, it should always be written in lowercase (watt), with the exception of the “degree Celsius.” from wikipedia SI stands for Système Internationale. SI units are the ones that all engineers should use, to avoid losing spacecraft. SI units energy one joule 1J power one watt 1W force one newton 1N length one metre 1m time one second 1s temperature one kelvin 1K prefix kilo mega giga tera peta exa symbol k M G T P E factor $10^3$ $10^6$ $10^9$ $10^{12}$ $10^{15}$ $10^{18}$ prefix centi milli micro nano pico femto symbol c m $\mu$ n p f factor $10^{-2}$ $10^{-3}$ $10^{-6}$ $10^{-9}$ $10^{-12}$ $10^{-15}$ SI units and prefixes My preferred units for energy, power, and transport efficiencies My preferred units, expressed in SI energy one kilowatt-hour 1 kWh 3600000 J power one kilowatt-hour per day 1 kWh/d $\left ( \frac{1000}{24} \right )W \simeq 40W$ force one kilowatt-hour per 100 km 1 kWh/100 km 36N time one hour 1h 3600s one day 1d $24 \times 3600s \simeq 10^5 s$ one year 1y $365.25 \times 24 \times 3600s \simeq \pi \times 10^7 s$ force per mass kilowatt-hour per ton-kilometre 1 kWh/t-km $3.6 \ m/s^2 \ (\simeq 0.37g)$ Additional units and symbols Thing measured unit name symbol value humans person p mass ton t $1 \ t = 1000 \ kg$ gigaton Gt $1 \ Gt = 10^9 \times 1000 \ kg = 1 \ Pg$ transport person-kilometre p-km transport ton-kilometre t-km volume litre l $1 \ l = 0.001 \ m^3$ area square kilometre sq km, $km^2$ $1 \ sq \ km = 10^6 m^2$ hectare ha $1 \ ha = 10^4 m^2$ Wales $1 \ \text{Wales} = 21000 \ km^2$ London (Greater London) $1 \ \text{London} = 1580 \ km^2$ energy Dinorwig $1 \ \text{Dinorwig} = 9 \ GWh$ Billions, millions, and other people’s prefixes Throughout this book “a billion” (1 bn) means a standard American billion, that is, $10^9$$10^{12}$$(10^9)$ In continental Europe, the abbreviations Mio and Mrd denote a million and billion respectively. Mrd is short for milliard, which means $10^9$ The abbreviation m is often used to mean million, but this abbreviation is incompatible with the SI – think of mg (milligram) for example. So I don’t use m to mean million. Where some people use m, I replace it by M. For example, I use Mtoe for million tons of oil equivalent, and $MtCO_2$$CO_2$ Annoying units There’s a whole bunch of commonly used units that are annoying for various reasons. I’ve figured out what some of them mean. I list them here, to help you translate the media stories you read. The “home” is commonly used when describing the power of renewable facilities. For example, “The £300 million Whitelee wind farm’s 140 turbines will generate 322 MW – enough to power 200000 homes.” The “home” is defined by the BritishWind Energy Association to be a power of 4700 kWh per year [www.bwea.com/ukwed/operational.asp]. That’s 0.54 kW, or 13 kWh per day. (A few other organizations use 4000 kWh/y per household.) The “home” annoys me because I worry that people confuse it with the total power consumption of the occupants of a home – but the latter is actually about 24 times bigger. The “home” covers the average domestic electricity consumption of a household, only. Not the household’s home heating. Nor their workplace. Nor their transport. Nor all the energy-consuming things that society does for Incidentally, when they talk of the $CO_2$$CO_2$ Power stations Energy saving ideas are sometimes described in terms of power stations. For example according to a BBC report on putting new everlasting LED lightbulbs in traffic lights, “The power savings would be huge – keeping the UK’s traffic lights running requires the equivalent of two medium-sized power stations.” news.bbc.co.uk/1/low/sci/tech/specials/sheffield_99/449368.stm What is a medium-sized power station? 10 MW? 50 MW? 100 MW? 500 MW? I don’t have a clue. A google search indicates that some people think it’s 30 MW, some 250 MW, some 500 MW (the most common choice), and some 800 MW. What a useless unit! Surely it would be clearer for the article about traffic lights to express what it’s saying as a percentage? “Keeping the UK’s traffic lights running requires 11 MW of electricity, which is 0.03% of the UK’s electricity.” This would reveal how “huge” the power savings are. Figure I.2 shows the powers of the UK’s 19 coal power stations. Figure I.2: Powers of Britain’s coal power stations. I’ve highlighted in blue 8 GW of generating capacity that will close by 2015. 2500 MW, shared across Britain, is the same as 1 kWh per day per Cars taken off the road Some advertisements describe reductions in $CO_2$if Virgin Trains’ Voyager fleet switched to 20% biodiesel – I emphasize the “if” because people like Beardie are always getting media publicity for announcing that they are thinking of doing good things, but some of these fanfared initiatives are later quietly cancelled, such as the idea of towing aircraft around airports to make them greener – sorry, I got distracted again. Richard Branson says that if Virgin Trains’ Voyager fleet switched to 20% biodiesel, then there would be a reduction of 34 500 tons of $CO_2$ $\text{“one car taken off the road†} \longleftrightarrow - 1.5 \ \text{tons per year of} \ CO_2.$ The calorie is annoying because the diet community call a kilocalorie a Calorie. 1 such food Calorie = 1000 calories. $2500 \ kcal = 3 \ kWh = 10000 \ kJ = 10 \ MJ.$ An annoying unit loved by the oil community, along with the ton of oil. Why can’t they stick to one unit? A barrel of oil is 6.1 GJ or 1700 kWh. Barrels are doubly annoying because there are multiple definitions of barrels, all having different volumes. Here’s everything you need to know about barrels of oil. One barrel is 42 U.S. gallons, or 159 litres. One barrel of oil is 0.1364 tons of oil. One barrel of crude oil has an energy of 5.75 GJ. One barrel of oil weighs 136 kg. One ton of crude oil is 7.33 barrels and 42.1 GJ. The carbon-pollution rate of crude oil is 400 kg of $CO_2$$6$$CO_2$ The gallon would be a fine human-friendly unit, except the Yanks messed it up by defining the gallon differently from everyone else, as they did the pint and the quart. The US volumes are all roughly five-sixths of the correct volumes. $1 \ \text{US gal} = 3.785 \ l = 0.83 \ \text{imperial gal}. \ 1 \ \text{imperial gal} = 4.545 \ l.$ Tons are annoying because there are short tons, long tons and metric tons. They are close enough that I don’t bother distinguishing between them. 1 short ton (2000 lb) = 907 kg; 1 long ton (2240 lb) = 1016 kg; 1 metric ton (or tonne) = 1000 kg. BTU and quads British thermal units are annoying because they are neither part of the Système Internationale, nor are they of a useful size. Like the useless joule, they are too small, so you have to roll out silly prefixes like “quadrillion” $(10^{15})$ 1 kJ is 0.947 BTU. 1 kWh is 3409 BTU. A “quad” is 1 quadrillion BTU = 293 TWh. Funny units Cups of tea Is this a way to make solar panels sound good? “Once all the 7000 photovoltaic panels are in place, it is expected that the solar panels will create 180000 units of renewable electricity each year – enough energy to make nine million cups of tea.” This announcement thus equates 1 kWh to 50 cups of tea. As a unit of volume, 1 US cup (half a US pint) is officially 0.24 l; but a cup of tea or coffee is usually about 0.18 l. To raise 50 cups of water, at 0.18 l per cup, from $15^\circ C$$100^\circ C$ So “nine million cups of tea per year” is another way of saying “20 kW.” Double-decker buses, Albert Halls and Wembley stadiums “If everyone in the UK that could, installed cavity wall insulation, we could cut carbon dioxide emissions by a huge 7 million tons. That’s enough carbon dioxide to fill nearly 40 million double-decker buses or fill the new Wembley stadium 900 times!” From which we learn the helpful fact that one Wembley is 44000 double decker buses. Actually, Wembley’s bowl has a volume of $1140000 \ m^3$ “If every household installed just one energy saving light bulb, there would be enough carbon dioxide saved to fill the Royal Albert Hall 1,980 times!” (An Albert Hall is $100000 \ m^3$ Expressing amounts of $CO_2$$CO_2$$CO_2$ mass of $CO_2 \leftrightarrow$ $2 \ kg \ CO_2 \leftrightarrow 1 \ m^3$ $1 \ kg \ CO_2 \leftrightarrow 500 \ litres$ $44g \ CO_2 \leftrightarrow 22 \ litres$ $2g \ CO_2 \leftrightarrow 1 \ litre$ Volume-to-mass conversion More volumes A container is 2.4m wide by 2.6m high by (6.1 or 12.2) metres long (for the TEU and FEU respectively). One TEU is the size of a small 20-foot container – an interior volume of about $33 \ m^3$$67.5 \ m^3$ A swimming pool has a volume of about $3000 \ m^3$ One double decker bus has a volume of $100 \ m^3$ One hot air balloon is $2500 \ m^3$ The great pyramid at Giza has a volume of 2500000 cubic metres. Figure I.4:A twenty-foot container (1 TEU). The area of the earth’s surface is $500 \times 10^6 km^2$$150 \times 10^6 km^2$ My typical British 3-bedroom house has a floor area of $88 \ m^2$$(216 \ m^2)$ hectare = $10^4 \ m^2$ acre = $4050 \ m^2$ square mile = $2.6 \ km^2$ square foot = $0.093 \ m^2$ square yard = $0.84 \ m^2$ If we add the suffix “e” to a power, this means that we’re explicitly talking about electrical power. So, for example, a power station’s output might be 1 GW(e), while it uses chemical power at a rate of 2.5 GW. Similarly the suffix “th” may be added to indicate that a quantity of energy is thermal energy. The same suffixes can be added to amounts of energy. “My house uses 2 kWh(e) of electricity per day.” Land use area per person $(m^2)$ percentage – domestic buildings 30 1.1 – domestic gardens 114 4.3 – other buildings 18 0.66 – roads 60 2.2 – railways 3.6 0.13 – paths 2.9 0.11 – greenspace 2335 87.5 – water 69 2.6 – other land uses 37 1.4 Total 2670 100 Land areas, in England, devoted to different uses. Source: Generalized Land Use Database Statistics for England 2005. [3b7zdf] $1000 \ \text{BTU per hour} & = 0.3 \ kW = 7 \ kWh/d\\1 \ \text{horse power} (1 \ hp \ \text{or} \ 1 \ cv \ \text{or} \ 1 \ ps) & = 0.75 \ kW = 18 \ kWh/d\\& \qquad \ \ 1 \ kW = 24 \ kWh/d$ $1 \ \text{therm} &= 29.31 \ kWh\\1000 \ Btu &= 0.2931 \ kWh\\1 \ MJ &= 0.2778 \ kWh\\1 \ GJ &= 277.8 \ kWh\\1 \ \text{toe (ton of oil equivalent)} &= 11 630 \ kWh\\1 \ kcal &= 1.163 \times 10^{-3} \ $1 \ kWh &= 0.03412 \quad 3412 \quad 3.6 \quad 86 \times 10^{-6} \quad 859.7\\& \quad \ \text{therms \quad Btu \quad MJ \quad \ toe \qquad \quad kcal}$ How other energy and power units relate to the kilowatt-hour and the kilowatt-hour per day. If we add a suffix “p” to a power, this indicates that it’s a “peak” power, or capacity. For example, $10 \ m^2$ $1 \ kWh/d & = \frac{1}{24} \ kW.\\1 \ toe/y & = 1.33 \ kW.$ Petrol comes out of a petrol pump at about half a litre per second. So that’s 5 kWh per second, or 18 MW. The power of a Formula One racing car is 560 kW. UK electricity consumption is 17 kWh per day per person, or 42.5 GW per UK. “One ton” of air-conditioning = 3.5 kW. World power consumption World power consumption is 15 TW. World electricity consumption is 2 TW. Useful conversion factors To change TWh per year to GW, divide by 9. 1 kWh/d per person is the same as 2.5 GW per UK, or 22 TWh/y per UK To change mpg (miles per UK gallon) to km per litre, divide by 3. At room temperature, $1 \ kT = \frac{1}{40}eV$ At room temperature, $1 \ kT$ Meter reading How to convert your gas-meter reading into kilowatt-hours: • If the meter reads 100s of cubic feet, take the number of units used, and multiply by 32.32 to get the number of kWh. • If the meter reads cubic metres, take the number of units used, and multiply by 11.42 to get the number of kWh. Calorific values of fuels Crude oil: 37 MJ/l; 10.3 kWh/l. Natural gas: $38 \ MJ/m^3$$kg/m^3$ 1 ton of coal: 29.3 GJ; 8000 kWh. Fusion energy of ordinary water: 1800 kWh per litre. See also table. inland water 0.083 rail 0.083 truck 0.75 air 2.8 oil pipeline 0.056 gas pipeline 0.47 int’l water container 0.056 int’l water bulk 0.056 int’l water tanker 0.028 Energy intensity of transport modes in the USA. Source: Weber and Matthews (2008). Heat capacities The heat capacity of air is $1 \ kJ/kg/^\circ C$$29 J/mol/^\circ C$$1.2 \ kg/m^3$$1.2 \ kJ/m^3/^\circ C$ Latent heat of vaporization of water: 2257.92 kJ/kg. Water vapour’s heat capacity: $1.87 \ kJ/kg/^\circ C$$4.2 \ kJ/l/^\circ C$ Steam’s density is $0.590 \ kg/m^3$ Atmospheric pressure: $1 \ bar \simeq 10^5 \ Pa$ I assumed the following exchange rates when discussing money: $\epsilon 1 = \1.26; \ £1 = \1.85; \ \1 = \1.12$ Greenhouse gas conversion factors Figure I.9: Carbon intensity of electricity production ($gCO_2$ Fuel type emissions (g$CO_2$ natural gas 190 refinery gas 200 ethane 200 LPG 210 jet kerosene 240 petrol 240 gas/diesel oil 250 heavy fuel oil 260 naptha 260 coking coal 300 coal 300 petroleum coke 340 Emissions associated with fuel combustion. Source: DEFRA’s Environmental Reporting Guidelines for Company Reporting on Greenhouse Gas Emissions. Figure I.11: Greenhouse-gas emissions per capita, versus GDP per capita, in purchasing-power-parity US dollars. Squares show countries having “high human development;” circles, “medium” or “low.” See also figures 30.1 and 18.4. Source: UNDP Human Development Report, 2007. Figure I.12: Greenhouse-gas emissions per capita, versus power consumption per capita. The lines show the emission-intensities of coal and natural gas. Squares show countries having “high human development;” circles, “medium” or “low.” See also figures 30.1 and 18.4. Source: UNDP Human Development Report, 2007. You can only attach files to None which belong to you If you would like to associate files with this None, please make a copy first.
{"url":"http://www.ck12.org/book/Sustainable-Energy-%25E2%2580%2593-Without-the-Hot-Air--/r1/section/5.1/","timestamp":"2014-04-17T08:05:59Z","content_type":null,"content_length":"148717","record_id":"<urn:uuid:f85ffa32-4baf-40fe-bc6c-fee7b9324cc2>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/gg/asked","timestamp":"2014-04-18T23:48:26Z","content_type":null,"content_length":"103318","record_id":"<urn:uuid:f5d5b8e2-807b-401f-bfc9-d4befc4903b8>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
Preprints 2003 The three-letter code attached to the preprint number indicates the scientific programme during which the paper was written. Click on the code to see the programme details. Preprint No. Author(s) Title and publication details NI03001-NST Besser A and De Jeu R The syntomic regulator for K-theory of fields NI03002-CMP Brightwell GR and Winkler P A second threshold for the hard-core model on a Bethe lattice NI03003-CPD Buffa A and Hiptmair R A coercive combined field integral equation for electromagnetic scattering NI03004-CPD C Carstensen An adaptive mesh-refining algorithm allowing for an H^1-stable L^2-projection onto courant finite element spaces NI03005-CPD M Ainsworth Discrete dispersion relation for hp-version finite element approximation at high wave number NI03006-CMP M Jerrum, JB Son, P Tetali and E Vigoda Elementary bounds on Poincaré and log-Sobolev constants for decomposable Markov chains NI03007-CPD T Huttunen, P Monk, F Collino and JP Kaipio The ultra weak variational formulation for elastic wave problems NI03008-CPD M Ainsworth Robust a posteriori error estimation for non-conforming finite element approximation NI03009-CMP B Bollobás and GR Brightwell How many graphs are unions of k-cliques? NI03010-CPD C Carstensen, R Lazarov and S Tomov Explicit and averaging a posteriori error estimates for adaptive finite volume methods NI03011-NPA EF Toro and VA Titarev TVD fluxes for the high-order ADER schemes NI03012-NPA D Serre Hyperbolicity of the non-linear models of Maxwell's equations NI03013-CPD T von Petersdorff and C Schwab Numerical solution of parabolic equations in high dimensions Accepted for publication in RAIRO Anal. Numerique (M2AN) 2003. NI03014-NPA PG LeFloch and M Shearer Nonclassical Riemann solvers with nucleation NI03015-NPA GQ Chen and M Feldman Steady transonic shocks and free boundary problems in infinite cylinders for the Euler equations NI03016-CPD D Boffi and L Gastaldi Stability and geometric conservation laws for ALE formulations NI03017-CPD F Brezzi, LD Marini and A Russo On the choice of a stabilizing subgrid for convection-diffusion problems NI03018-NPA D Drikakis Advances in turbulent flow computations using high-resolution methods. Review paper, accepted for publication in the Journal Progress in Aerospace Sciences (Copyright © 2003 Elsevier Science Ltd.) NI03019-CPD M Costabel, M Dauge and S Nicaise Singularities of eddy current problems NI03020-CPD PJ Davies and DB Duncan Stability and convergence of collocation schemes for retarded potential integral equations NI03021-NPA H Li and P Marcati Existence and asymptotic behavior of multi-dimensional quantum hydrodynamic model for semiconductors NI03022-NPA E Romenski, D Zeidan, A Slaouti and EF Toro Hyperbolic conservative model for compressible two-phase flow NI03023-CPD K Eriksson, C Johnson and A Logg On explicit time-stepping for stiff odes NI03024-NPA PG LeFloch and MD Thanh The Riemann problem for fluid flows in a nozzle with discontinuous cross section NI03025-NPA P Goatin and PG LeFloch The Riemann problem for a class of resonant hyperbolic systems of balance laws NI03026-NPA E Romenski and EF Toro Shock waves in compressible two-phase flows NI03027-CPD K Mikula Computational solution, applications and analysis of some geometrical nonlinear diffusion equations NI03028-CPD AK Pani, JY Yuan and PD Damázio On linearized backward Euler method for the equations of motion arising in the Oldroyd model NI03029-CPD J Sun, F Collino, PB Monk and L Wang An eddy current and micromagnetism model with applications to disk write heads NI03030-CPD B Guo and N Heuer The optimal convergence of the h-p version of the boundary element method with quasiuniform meshes for elliptic problems on polygonal NI03031-NPA VA Titarev Towards fully conservative numerical methods for the nonlinear model Boltzmann equation NI03032-NPD VA Galaktionov On higher-order viscosity approximations of one-dimensional conservation laws NI03033-SFM MH Rosas and BE Sagan Symmetric functions in noncommuting variables NI03034-CPD K Deckelnick and CM Elliott Uniqueness and error analysis for Hamilton-Jacobi equations with discontinuities NI03035-CPD B Guo Best approximation for the p-version of the finite element method in three dimensions in the framework of the Jacobi-weighted Besov NI03036-CPD AK Pani and JY Yuan Semidiscrete finite element Galerkin approximations to the equations of motion arising in the Oldroyd method NI03037-NPA EF Toro Multi-stage predictor-corrector fluxes for hyperbolic equations NI03038-CMP G Grimmett The random-cluster model NI03039-CMP G Grimmett and S Janson On smallest triangles NI03040-CMP G Grimmett and S Winkler Negative correlation of edge events on uniform spanning forests NI03041-CPD O Lakkis and RH Nochetto A posteriori error analysis for the mean curvature flow of graphs NI03042-CPD CM Elliott, D Kay and V Styles Finite element analysis of a current density - electric field formulation of Bean's model for superconductivity NI03043-CPD M Ainsworth Dispersive behaviour of high order discontinuous Galerkin finite element methods NI03044-CPD CM Elliott, B Gawron, S Maier-Paape and ES Van Discrete dynamics for convex and non-convex smoothing functionals in PDE based image restoration NI03045-CPD A Spira and R Kimmel An efficient solution to the Eikonal equation on parametric manifolds NI03046-NPA GQ Chen and EF Toro Centred schemes for nonlinear hyperbolic equations NI03047-CPD BD Bonner, IG Graham and VP Smyshlyaev The computation of conical diffraction coefficients in high-frequency acoustic wave scattering NI03048-CPD A Bermúdez, P Gamallo and R Rodríguez Finite element methods in local active control of sound NI03049-CPD S Langdon and SN Chandler-Wilde A wavenumber independent boundary element method for an acoustic scattering problem NI03050-CPD JM Melenk HP-interpolation of non-smooth functions NI03051-CPD JW Barrett and Robert Nürnberg Finite element approximation of a degenerate Stefan problem with Joule heating NI03052-NPA VA Titarev and EF Toro Eno and Weno schemes based on upwind and centred TVD fluxes NI03053-CPD L Prigozhin and V Sokolovsky AC losses in type-II superconductors induced by non-uniform fluctuations of external magnetic field NI03054-NPA GQ Chen and M Feldman Free boundary problems and transonic shocks for the Euler equations in unbounded problems NI03055-CPD P Monk and GR Richter A discontinuous Galerkin method for linear symmetric hyperbolic systems in inhomogeneous media NI03056-IGS W König and P Mörters Brownian intersection local times: exponential moments and law of large masses NI03057-NPA VA Titarev and EF Toro Finite-volume WENO schemes for three-dimensional conservation laws NI03058-CPD Z Chen and G Ji Sharp L^1 a posteriori error analysis for nonlinear convection diffusion problems NI03059-CPD K Deckelnick, G Dziuk and CM Elliott Fully discrete semi-implicit second order splitting for ansiotropic surface diffusion of graphs NI03060-GPF S Faria and S Kipfstuhl Preferred slip band orientations and bending observed in the Dome Concordia ice core NI03061-CPD JW Barrett and JF Blowey Finite element approximation of a nonlinear cross-diffusion population model NI03062-CPD W Dörfler and M Ainsworth Reliable a Posteriori error control for non-conforming finite element approximation of Stokes flow NI03063-NPA VA Titarev and EF Toro ADER schemes for scalar hyperbolic conservation laws in three space dimensions NI03064-GPF AJ Hogg and D Pritchard The effects of hydraulic resistance on dam-break and other shallow inertial flows NI03065-IGS SA Klokov and AY Veretennikov Mixing and convergence rates for a family of Markov processes approximating SDEs NI03066-IGS AY Veretennikov On ergodic measures for McKean-Vlasov stochastic equations NI03067-CPD A Lasis and E Süli Poincaré-type inequalities for broken Sobolev spaces NI03068-CPD A Lasis and E Süli HP-version discontinuous Galerkin finite element methods for semilinear parabolic problems NI03069-CPD C Bahriawati and C Carstensen Three matlab implementations of the lowest-order Raviart-Thomas MFEM with a posteriori error control NI03070-CPD C Carstensen and D Praetorius Effective stimulation of a macroscopic model for stationary micromagnetics NI03071-GPF PM Reis, T Mullin and G Ehrhardt Segregation phases in a vibrated binary granular layer NI03072-GPF K Hutter, Y Wang and SP Pudasaini The Savage-Hutter avalanche model. How far can it be pushed? NI03073-NPA LF Dinu Shock-turbulence interaction: an exhaustively classifying linearized approach NI03074-GPF SP Pudasaini and K Hutter Rapid motion of free-surface avalanches over natural terrains and their simulations through curved and twisted channels NI03075-IGS EA Perchersky, YM Suhov and ND Vvedenskaya Large deviation in a two-servers system with dynamic routing NI03076-GPF AJ Hogg and D Pritchard Cross-shore suspended sediment transport under tidal currents NI03077-GPF VA Chugunov, JMNT Gray and K Hutter Exact solutions of the Savage-Hutter equations for one-dimensional granular flows NI03078-IGS G Ben Arous, LV Bogachev and SA Molchanov Limit theorems for random exponentials NI03079-IGS AY Veretennikov On ergodic measures for McKean-Vlasov stochastic equations 2 NI03080-GPF JT Jenkins and MA Koenders The incremental response of random aggregates of identical round particles NI03081-GPF JT Jenkins and MA Koenders Hydrodynamic interaction of rough spheres NI03082-GPF M Davis, H-J Köhler, MA Koenders and R Schwab Hydraulic failure and soil-structure deformation due to wave and draw down loading NI03083-GPF K M Hákonardóttir, A J Hogg and J Batey Flying avalanches NI03084-NPA M Feldman, S-Y Ha and M Slemrod A Geometric level-set formulation of a plasma-sheath interface NI03085-IGS B Tóth and B Valkó Perturbation of singular equilibria of hyperbolic two-component systems: a universal hydrodynamic limit NI03086-IGS C Maes On the origin and the use of Fluctuation relations for the entropy NI03087-CPD S Bartels, C Carstensen, K Hackl and U Hoppe Effective relaxation for microstructure simulations: algorithms and applications NI03088-IGS M D Penrose Random Minimal directed spanning trees and Dickman-type distributions NI03089-GPF P M Reis, G Ehrhardt, A Stephenson and T Mullin Gases, liquids and crystals in granular segregation
{"url":"http://www.newton.ac.uk/preprints2003.html","timestamp":"2014-04-16T07:14:11Z","content_type":null,"content_length":"30490","record_id":"<urn:uuid:616079f2-218a-429d-aa6c-f04912f63e1a>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
Effective Lattice Point Counting - J. Algebraic Combin "... Abstract. We present a multivariate generating function for all n×n nonnegative integral matrices with all row and column sums equal to a positive integer t, the so called semi-magic squares. As a consequence we obtain formulas for all coefficients of the Ehrhart polynomial of the polytope Bn of n×n ..." Cited by 6 (1 self) Add to MetaCart Abstract. We present a multivariate generating function for all n×n nonnegative integral matrices with all row and column sums equal to a positive integer t, the so called semi-magic squares. As a consequence we obtain formulas for all coefficients of the Ehrhart polynomial of the polytope Bn of n×n doubly-stochastic matrices, also known as the Birkhoff polytope. In particular we derive formulas for the volumes of Bn and any of its faces. 1. , 2007 "... Abstract. We provide an explicit combinatorial formula for the volume of the polytope of n×n doubly-stochastic matrices, also known as the Birkhoff polytope. We do this through the description of a generating function for all the lattice points of the closely related polytope of n×n real non-negativ ..." Cited by 4 (0 self) Add to MetaCart Abstract. We provide an explicit combinatorial formula for the volume of the polytope of n×n doubly-stochastic matrices, also known as the Birkhoff polytope. We do this through the description of a generating function for all the lattice points of the closely related polytope of n×n real non-negative matrices with all row and column sums equal to an integer t. We can in fact recover similar formulas for all coefficients of the Ehrhart polynomial of the Birkhoff polytope and for all its faces. 1. , 2005 "... 3 4FOREWARD ..." , 2003 "... Representation of a given nonnegative multivariate polynomial in terms of a sum of squares of polynomials has become an essential subject in recent developments of sums of squares optimization and SDP (semidefinite programming) relaxation of polynomial optimization problems. We disscuss effective me ..." Add to MetaCart Representation of a given nonnegative multivariate polynomial in terms of a sum of squares of polynomials has become an essential subject in recent developments of sums of squares optimization and SDP (semidefinite programming) relaxation of polynomial optimization problems. We disscuss effective methods to obtain a simpler representation of a “sparse” polynomial as a sum of squares of sparse polynomials by eliminating redundancy. Key words.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=687809","timestamp":"2014-04-20T01:36:00Z","content_type":null,"content_length":"21533","record_id":"<urn:uuid:fed1c991-6382-428b-99e1-8966ab2da62c>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Video Analysis Tutorial and a cat | Science | WIRED • By Rhett Allain • 04.06.09 | • 6:55 pm | Cats can be entertaining – especially when they are someone else’s cat and that someone made a video. Really, this post is about analyzing video with Logger Pro (in a tutorial type fashion). I just happens that I chose this cat video to analyze. Here is the video: I am going to look at the part where the cat gets on the fan. I will try to step through the analysis so you can do your own. Get the video Actually, the first step is to find a video. YouTube has tons of stuff. Also, you could make your own video with a camera. A couple of things to make your life easier: • A non-zoom video. If the camera is zooming during the interesting part, you will have to do a lot more work. • A non-panning video. Again, if the camera stays stationary this is much easier for you. • Motion of interest is perpendicular to the video camera. Suppose someone was throwing a football. If they threw it right at the camera, the data would be complicated because of the perspective. • Motion not too close to the camera. Even if the motion is perpendicular, but very close, there will still be perspective issues. • It is better if the object in motion is at least rigid so that it can mostly be represented by one point. For instance, a person that moves arms and legs around is much more complicated than a • Finally, a clear shot at the object in motion. If you can’t see it the whole time, you can’t get data from it the whole time. Ok. I will assume you have your video. I have mine. What next? Well, if it is on youtube, you need it as a file. There are several ways to get the video from youtube, but I find KickYouTube.com is the most straight forward. Basically, find your youtube video and add the word “kick” in front of youtube.com in the URL. For the cat video, it would look like this: This new URL will take you to KickYouTube.com where it will give you the options of video formats to download. I would recommend choosing “MP4″ as it is the one that will likely give you the least troubles. Also, MP4 should work fine with both Logger Pro and Tracker Video. Click the download button and save your file. Remember where you parked! For this next part, I made a little screen cast using ScreenToaster. So, from that I have the period of the oscillating fan (at about 1 second). Clearly, there is some error involved here – but I am just getting some rough data. Here is the horizontal position of the cat as a function of time: I am not really sure what is going on with this data. The data table shows several points with the same time, but it doesn’t graph that way. Oh well, good enough. Now for the other piece of data – the angle that the cat swings at. I know that Tracker Video has an angle tool, but Logger Pro does not. I am just going to use a drawing program (I will use KeyNote, but you could use lots of things for this). For this measurement, I simply drew two lines as a reference. Keynote will give you the angle of a line if you rotate it. I rotated until it looked like it was on the string. This gives an angle of about 28 degrees below the horizontal. Great. I now have the period and the angle. But, what do I want to do with it? Maybe I could calculate the tension in the string. Here is a free body diagram for the cat: There are only two forces acting on the cat, the tension and the gravitational force. Clearly these two forces do not make a net force of zero. Since the cat is moving in a circle, there needs to be a net force towards the center of the circle. The acceleration of an object moving in a circle is: Where v is the linear velocity, r is the radius of the circular motion and r-hat is a unit vector pointing outward (thus the negative sign). Perhaps I should post a derivation of this formula sometime. For the case of the cat, I would rather use the angular velocity (although it really doesn’t matter). For an object moving in a circle, the linear velocity is related to the angular Using this, the magnitude of the circular (centripetal) acceleration is: In the video, I measured the period of oscillation (usually denoted with a T). So the angular velocity is: Ok. Now, back to the forces. Newton’s law still works, so I can write the following: Let me call the vertical direction z and the direction towards the center -r. Then I can write Newton’s law in its two components: For the r-direction, the only force is the component of the tension. From above, I will use the angle I found of the rope from the horizontal (call it theta). This gives: To help keep things a little clearer, I am using T for tension and T[period] for the period. Both the force and the acceleration are in the negative r direction (which means towards the center of the circle). Now for the vertical equation. I hope I didn’t confuse you with the vector “g” and the constant g. Oh well. If I want the tension (my original statement) I can use either of these two equations. Both have the mass. What is the mass of a typical cat? According to wikipedia, 2.5 to 7 kg is typical. I will use 5 kg. So, from the vertical equation, I get a tension of: This is quite a bit larger than if the cat were just hanging (and not spinning – which would be 49 N). Now for the other equation. I can either guess the radius and calculate the tension, or I can calculate what the radius should be. Let me just calculate r. This seems kind of small, and it is. The reason is that it is wrong. I made the assumption that the string was connected to the rotation point, but it wasn’t. The string was connected to the end of the fan. Oh well. I am not going to fix it. The purpose was to show how to do video analysis in Logger Pro. I think I accomplished that.
{"url":"http://www.wired.com/2009/04/video-analysis-tutorial-and-a-cat/","timestamp":"2014-04-17T05:09:48Z","content_type":null,"content_length":"109234","record_id":"<urn:uuid:ef9d93e6-bd24-4778-bcbc-063facbfd0ad>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
Lithia Springs Algebra 2 Tutor Find a Lithia Springs Algebra 2 Tutor ...Students will explore the factors that affect the rates of a reaction and apply them to the theory of dynamic equilibrium. They will predict changes in equilibrium based on LeChatelier’s principle, and apply them to 2 acid-base theory. Oxidation-reduction will be applied to real-life situations and energy interactions will be used to explain reaction spontaneity. 14 Subjects: including algebra 2, chemistry, physics, SAT math ...I was consistently the best student. After completing Calculus sequence, I started tutoring the course to junior students at various levels. While Calculus involves a lot of limits, derivatives, integrations, functions, vectors, trigonometry and algebra, it is not limited to only those areas. 36 Subjects: including algebra 2, calculus, geometry, algebra 1 ...While enjoying the classroom again, I also passed 6 actuarial exams covering Calculus (again), Probability, Applied Statistics, Numerical Methods, and Compound Interest. It's this spectrum of mathematics, from high school through post baccalaureate, which I feel most comfortable tutoring. I also became even more proficient with Microsoft Excel, Word, and PowerPoint. 21 Subjects: including algebra 2, calculus, statistics, geometry ...Trigonometry can be intimidating. But when you get the fundamentals down, everything starts to fall into place. I can help you get through the learning curve and make sense of the sometimes confusing language. 32 Subjects: including algebra 2, chemistry, physics, geometry ...I am completing my degree in Information, Science, and Technology at Pennsylvania State University. During my time in high school and college, I did well in my Math (Calculus I-II), Chemistry, and Physics courses and have tutored in all of these subjects. Currently, I co-teach Math 1 and GPS Algebra 1. 13 Subjects: including algebra 2, chemistry, physics, geometry
{"url":"http://www.purplemath.com/Lithia_Springs_Algebra_2_tutors.php","timestamp":"2014-04-17T22:05:53Z","content_type":null,"content_length":"24311","record_id":"<urn:uuid:f17fcf9f-156b-47b4-80b0-5b04624d8a07>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
Performing Calculations using Measured Values that Include Uncertainty In this activity, students practice performing calculations using measured values that include uncertainty. Students measure the mass (using an electronic balance) and volume (by water displacement) of pennies and use the values to calculate the density of the pennies. All measured values include uncertainty, and students practice using the rules for making calculations using numbers that include uncertainty. As the students increase the number of pennies they use, the relative uncertainty of their calculated density decreases. Students will see factors that affect the uncertainty of each measurement and also how the uncertainty of each measurement contributes to the uncertainty of their calculated results. Provided here is a data set for copper pennies (pre-1982) and zinc/copper pennies (post 1982). Students can use the data to identify what these pennies are made of, but only when the uncertainty of their calculated density is lower than the difference between the density of copper and the density of zinc. Learning Goals This activity is a building block in the Measurement and Uncertainty module. This activity is intended to teach students to perform calculations using measurement values that include uncertainty. In addition, students will see an example of how to reduce the uncertainty of a measurement for volume by increasing the number of pennies they measure. Finally, they will practice making conclusions based on the amount of uncertainty in their results. Context for Use This activity in intended for any science course where students are collecting and interpreting numerical data. Specifically, it is intended for use in course where teachers are using the methods described in the module Measurement and Uncertainty Description and Teaching Materials This activity can be done in the lab, as an interactive lecture demonstration, or as a worksheet. The worksheet provided here contains sufficient information to see how the lab would be conducted. worksheet for measuring the density of pennies (Microsoft Word 1MB Aug31 10) If the activity is done in the lab, the worksheet provided here can be used as an assessment for the lab. Teaching Notes and Tips Although the data (photos) provided in the worksheet above are sufficient to perform the calculations, it would be preferable for students to make the measurements themselves. No fancy equipment is needed -- just pennies, graduated cylinders, and some kind of scale or balance. The composition of US pennies changed in 1981-82. Pre-1982 pennies are mostly copper and pennies made after 1982 are mostly zinc. Sort pennies into pre- and post- 1982 piles. Pennies made in 1982 could be either alloy, depending which mint made them. The US mint in Denver continued to make pennies of mostly copper through 1982. The instructor should sort the pennies into pre-1982, post-1982, and, for an extra mystery treat, 1982 D (Denver). This way, students will get different results and reach different conclusions, which makes for interesting class discussions. If this activity is done using actual equipment, then the images included here can be used as a written assessment after the completion of the lab activity. References and Resources Composition of the Cent from the US Mint - Includes a history of the composition of the US cent coin.
{"url":"http://serc.carleton.edu/quantskills/teaching_methods/uncertainty/examples/48732.html","timestamp":"2014-04-18T16:09:56Z","content_type":null,"content_length":"27638","record_id":"<urn:uuid:5731f0da-82e1-4711-b130-9cdd1db6cc9e>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: what is the theme of the poem "i am of the earth"? • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50edef6ce4b0d4a537cd6b5f","timestamp":"2014-04-17T22:11:10Z","content_type":null,"content_length":"72101","record_id":"<urn:uuid:da8ddc3f-99e3-4188-9deb-3ec128906d29>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: PHASE-CONVEX ARCS F. P´erez , C. Abdallah * ETSI Telecomunicacion, Universidad de Vigo, 36200-VIGO, SPAIN EECE Dept., University of New Mexico, Albuquerque, NM 87131, USA. This paper considers the problem of identifying regions in the complex-plane, such that the phase of polynomials having roots in those regions, is bounded by that of a few extreme polynomials. Applications of the results are also 1 Introduction This short paper considers the problem of identifying re- gions in the complex-plane, such that the phase of poly- nomials having roots in those regions, is bounded by that of a few extreme polynomials. More specifically, given a family of polynomials P(z-1
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/062/2081439.html","timestamp":"2014-04-17T22:38:28Z","content_type":null,"content_length":"7840","record_id":"<urn:uuid:aa44b209-66e9-4b52-b486-43886e259f98>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
Working With Data Imagine reading through thousands of pages of data collected from the last census: number of people in the household, age, nationality, household income, address, etc. How can all that data makes sense? This topic leads to fun classroom activities that deal with gathering data, displaying data, and summarizing and organizing data. It also links mathematics to the real world. Welcome to the world of mathematics known as statistics. How is data collected? Surveys, questionnaires, and telephone solicitation are just a few of the ways that data is gathered. From these collections, tallies, graphs, and line plots are created to help reduce the data into meaningful, visual presentations. However, if data is merely collected and displayed, we would be missing the most important feature, analysis of the collected data. This allows us to summarize, organize, and even make predictions for the future. Analyzing data is essential for the growth of children's mathematical understanding. Lead students to realize the need for identifying numbers that can accurately represent the entire data set. These numbers, called measures of central tendency, help to condense data into a few numbers. This process of analysis begins in Grade 2 by analyzing the spread in the numbers, from lowest to highest. This idea is known as the range. Another descriptive concept covered at this level is the one piece of data that occurs most frequently. This is known as the mode. Let's look at an example. The table shows the number of tickets sold at a theater in one week. Keeping Track and Displaying Data: Range and Mode: Notice that the number of tickets sold varied from 123 on Wednesday, to 396 on Saturday. This means that the range of the data is 396 123, or 273 tickets. Also notice that on Friday and Sunday the same number of tickets were sold. This means that 365 is the mode; it occurs most frequently. (If a different number of tickets were sold each day, we would say that there was no mode.) As concepts are developed, help children to connect concepts they learned last year to concepts they are learning this year. Last year, they represented and compared data using tally marks. They also answered questions about a survey. This year they will collect and record data from a picture, a survey, and a tally chart. In addition, they will learn about range and mode.
{"url":"http://www.eduplace.com/math/mathsteps/2/b/index.html","timestamp":"2014-04-18T13:48:56Z","content_type":null,"content_length":"7917","record_id":"<urn:uuid:42a850a1-c42e-4c52-996b-783123166ee3>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
How can I make something like determinants tangible? Are there real life examples where determinants are used? Real life in the sense that the student will find motivating or likely to come across a particular example in physics, engineering, economics - really looking for a stimulating example. Thanks for the quick response. You can think of the vector cross product as a determinant. The vector cross product shows up in electricity and magnetism, in Maxwell's Equations, as well as other places. As a somewhat less spectacular example, you can think of Kramer's Rule for solving linear systems of equations. Kramer's Rule has more theoretical interest than practical, because computing determinants using, say, a Jacobian expansion is computationally intensive. For large problems, you'd probably try to diagonalize, or at least upper-triangularize (if that's a word) a matrix in order to compute its determinant as the product of the elements along the main diagonal.
{"url":"http://mathhelpforum.com/advanced-algebra/182765-determinants.html","timestamp":"2014-04-18T15:56:37Z","content_type":null,"content_length":"38386","record_id":"<urn:uuid:d0dd52aa-0e8e-4b1f-a545-0e704a0326df>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
what is321 base 4 + 123 base 4 equal? what is321 base 4 + 123 base 4 equal? what is321 base 4 + 123 base 4 equal? confused wrote:what is321 base 4 + 123 base 4 equal? You can either add the base-4 numbers, remembering the "carry" any time you have a value of "4" or more (just as you "carry" in regular decimal arithmetic when you have a value of "10" or more), or else you can convert the numbers to decimal form (that is, to regular base-10 numbers), do the addition, and then convert back. I don't know what you've learned, or what your preference might be. Please reply showing what you've tried so far. If you're needing a review first, please study this number-bases lesson Thank you! Re: what is321 base 4 + 123 base 4 equal? so 321 base 4 is 1*1+4*2+16*3=1+8+48=57 and 123 base 4 is 3*1+2*4+1*16=3+8+16=27 then 57+27=84 this is 64+16+4=1*64+1*16+1*4+0*1=1110 base 4 is that rite?
{"url":"http://www.purplemath.com/learning/viewtopic.php?f=5&t=169","timestamp":"2014-04-21T15:19:56Z","content_type":null,"content_length":"21480","record_id":"<urn:uuid:e3ef12fd-9a8b-4ddc-b2f2-401993b44556>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: July 1999 [00243] [Date Index] [Thread Index] [Author Index] Re: Rationalizing the denominator • To: mathgroup at smc.vnet.net • Subject: [mg18671] Re: [mg18633] Rationalizing the denominator • From: "Tomas Garza" <tgarza at mail.internet.com.mx> • Date: Thu, 15 Jul 1999 01:45:55 -0400 • Sender: owner-wri-mathgroup at wolfram.com Drago Ganic [drago.ganic at in2.hr] wrote: > How can I get > Sqrt[2]/2 > instead of > 1/Sqrt[2] > as a result for Sin[Pi/4]. > When it comes to complex numbers Mathematica never returns 1/I - > she always > returns -I. > Why is the behaviour for irrationals different ? Hi, Drago! As often has been the advise in this group, look at FullForm: 1/Sqrt[2] // FullForm Power[2, Rational[-1, 2]] You can't expect Mathematica to go back from this to the "rational" form Sqrt[x]/x. In fact, if you write Sqrt[x]/x you'll get 1/Sqrt[x]: Of course, if you still want to "rationalize" 1/Sqrt[x] you may use a transformation rule together with HoldForm: Sin[Pi/4] /. Power[x_, Rational[-1, 2]] -> HoldForm[Power[x, Rational[1, 2]]*Power[x, -1]] which, from the point of view of Mathematica, is a waste of time since this last expression, if released, will be always return 1/Sqrt[2] as shown in In[2] above: On the other hand, 1/I // FullForm Complex[0, -1] which explains why 1/I returns -I. The behavior is consistent: internally, Mathematica has no division. Tomas Garza Mexico City
{"url":"http://forums.wolfram.com/mathgroup/archive/1999/Jul/msg00243.html","timestamp":"2014-04-18T03:09:18Z","content_type":null,"content_length":"35577","record_id":"<urn:uuid:b8ec2800-ab7a-4243-9966-da03a5c0a4bf>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
D-Wave makes HPCwire's top 10 best stories of 2011 D-Wave makes HPCwire’s top 10 best stories of 2011 Hit: Quantum Computing Goes Commercial In May, D-Wave Systems sold the world’s first quantum computer. The buyer was Lockheed Martin Corporation, who did not disclose how they intend to use the machine. The system, named D-Wave One, employs a 128-qubit chip, called Rainier, and uses superconducting technology to generate “adiabatic quantum computing” (that some claim is not true quantum computing). The cost of the system was not disclosed, but undoubtedly this is one of those cases in which if you have to ask, you probably can’t afford it. It still bothers me (marginally) that the claim that AQC is not “true quantum computing” still festers in the collective conscious. One of the things I’ve learned over the past ten years is that dogma is extremely difficult to dislodge. Opinions and beliefs have tremendous inertia — even ones that are wrong and/or harmful. I think the gate model of quantum computing set back the field of actually building real quantum computers by 20 years or so. I can imagine a parallel universe where the ideas of experimental condensed matter physicists drove the underlying theory of quantum computation, instead of theoretical computer scientists and mathematicians. In this parallel universe, by now we’d likely have dozens of real working quantum computers of all sorts of types. The main problem with the gate model is that, while it is beautiful for theoretical computer scientists, it is astronomically horrible from the implementation side. Somehow we got into a situation where experimental physicists (ie implementers) bought the story that the gate model was “real” quantum computing and other ideas Someone (I think maybe it was Eric) had a classic line that I sometimes think about when this subject comes up. When questioned about whether what we’ve built was a “real” quantum computer, he said “How about we race our 25,000 Josephson junction superconducting adiabatic quantum computer against your powerpoint deck [editor's comment: powerpoint deck == most advanced gate model quantum computer ever built] and see who wins.” The point is that no gate model quantum computer has ever been built. I have speculated for some time that no useful gate model quantum computer will *ever* be built, because of a long list of inter-related challenges that no-one — even though lots of smart people have tried — even has the faintest notion of how to solve. 9 thoughts on “D-Wave makes HPCwire’s top 10 best stories of 2011” 1. So do you think we’ll have dozens of different types of quantum computers in the next 20 years? (or will the dwave architecture become the “IBM-PC” of quantum computing?) □ @nn: Our objective is to provide to our customers and partners the fastest and most efficient computing systems on earth. If other computing systems (quantum or not) are developed, we’ll do our best to make sure that our gear obliterates them. 2. It is true that the article was in error to cast doubt on adiabatic quantum computing as “true quantum computing”; there is no doubt in the theoretical computer science community that adiabatic quantum computing (just like topological quantum computing using universal anyons, or measurement on cluster states, or others) is universal for BQP and therefore fully equivalent to the gate model and deserving to be called true quantum computation. There is a far more serious error in the article, however, when it states that the D-Wave One accomplishes adiabatic quantum computing, which it does not. D-Wave may yet build a true (i.e. scalable and universal) adiabatic quantum computer if it is able to implement tunable ZZ (or YY) coupling in addition to XX, and is able to improve decoherence times enough to remain in the exact ground state avoiding thermal transitions between states while evolving slowly enough to avoid Landau-Zener transitions. The first group to build a true adiabatic quantum computer, whoever it is, will be able to translate and run any quantum algorithm from any model of quantum computation, including for example Shor’s algorithm to break RSA. Whether the hardware implements the gate model or some other universal quantum architecture such as adiabatic quantum computing is irrelevant; any true quantum computer can factor large composite numbers, and any machine that cannot factor large composite numbers is not a quantum computer. Recent progress has convinced me that the D-Wave’s machines do indeed employ quantum and not just classical effects. But that does not make any of them a “true quantum computer” any more than a bank of rectifiers is made a “true computer” from the fact that it depends vitally on electronic components (diodes) with a non-linear response, which is also the key to the operation of transistors. The accurate statement would be something like: “The D-Wave One may not be a quantum computer, but it is a powerful optimization engine, of a type never built before, that depends crucially on quantum tunneling for its correct operation.” Or, as a soundbite: “It’s not a quantum computer, but it is a new kind of computer that exploits quantum effects.” I’d also accept “quantum annealing machine” and might even go so far as to countenance “special-purpose adiabatic quantum calculator”. The progress in superconducting circuits made by your group and others has been very impressive, to the point that if I had to guess I would now predict that the first scalable architecture for universal quantum computation that ever exists will consist of a network of Josephson junctions. (My favored scenario is a sort of quantum metamaterial: a repeating pattern of loops whose interactions define a local Hamiltonion, with a spectal gap wider than the operating temperature, whose elementary exitations have the statistics of Fibonacci anyons.) Whatever progress has been made, though, the day of the true quantum computer is not yet, and it does the field no service to claim it prematurely to the public media. 3. I’ve argued that computers built to run a specific quantum algorithm (like ours) should be referred to as quantum computers. Ultimately this just boils down to what you define as a quantum computer — the definition I prefer is related to the previous point. If a machine can run a quantum algorithm then it deserves to be called a quantum computer. While universal quantum computers are quite interesting for a variety of reasons, practically there isn’t a whole lot of reason to try to implement one right now. There is an extreme gap between the difficulty of building a practical quantum computer to implement quantum annealing for optimization and a practical computer to do, say, generic quantum simulation (which seems to be the only commercially useful application of a universal QC so far). By the way it may be possible to implement an efficient factoring algorithm using our system (not Shor’s — a different algorithm for factoring using quantum annealing) — we’ll see. We do know how to implement XZ couplers into our processor architecture, and have in fact designed and built some, but there is no compelling reason to try to build a universal AQC currently — there are simply no practically useful algorithms for one (except for optimization, and you don’t need XZ for that — at least not yet). If we had a uAQC now, the only thing we’d know what to do with it would be optimization. While in principle you can map gate model algorithms into universal AQC, the overhead makes doing so impractical. In order for uAQC to be worth doing, algorithms solving valuable problems would need to be developed. Another point to consider is that the T=0 version of quantum annealing is not as computationally powerful as the finite T version. Adding thermal transitions can substantially increase the success probability of quantum annealing algorithms. 4. I personally enjoy the comparisons of the D-Wave One System to the Altair 8080 and the general QC community climate to circa 1970s before the personal computing boom. Though I believe we are still far far away from operating systems or quantum mobile apps, the enthusiasm in the field is just as genuine as when Paul Allen flew to New Mexico to test Altair BASIC. Naysayers and proponents of the gate model will come and go but I love to see that D-Wave has a clear mission and a plan to continue developing higher-level qubit processors. Like the lead horse in a race, keep running in stride, history is never made by those living in the past. Best wishes – Nashid 5. Doesn’t it bother your Geordie, that you yourself don’t know how strong is decoherence in your quantum computer? Because even small decoherence rising possiblity of wrong result exponentionaly. Say, somebody think, that if your computer is classic, then it can give almost as good answer as classical anealing, as they speak. Why so many counting on this classical anealing? Analog computer with 100-1000 qubbits/adders if worthless. Say, need add 128 real numbers with double precision. It for desktop computer is very very easy task and takes only about 0.0000001 s. Because, seems your computer can’t do anything more if it is just some superanalog computer. Also latency to read answer, because of speed of light, because intel chip is not very near that cool stuff, which you call quantum chip (rainer processor), so additionally waist of time for inputing and reading answer. If you would have 10^12 qubits and it sum up numbers, then it could be as fast as 1000 desktop computers to sum up some 10^12 (trilion) real numbers. So how you don’t figure it out yet, if your computer is classic or quantum or superanalog (classical anealing, which I heard supose to blind from real answer about if it is simple analog bullshit or real quantum)? So do you admit, that in case, you computer is classic (analog or not), that it still is fast or in that case it mast be very slow and worthless (if it is classic and not a little bit real quantum computer)? 6. Keep in mind that intel i7 CPU waisting all transistors on cache and thus it have only about 4 cores and each core have about forth 32 bit precision blank spaces. Maybe with new AVX 256bit (instead 128 bit) intel instructions this is 8 blank spaces for numbers, which will be added or multiplied. So even I don’t count cores, because many programing programs (free pascal etc) using only single core, so only one place for 128 bit precision number (yes, it is the best what can do CPU with all that SSE-SSE4 power). So 3GHz and it is about 10^9 – 10^10 real numbers addition operations per second or multiplication operations. Maybe with single precision up to 10^11, say, addition operations per second. With your computer setting of qubit how much it can be +1 (from 0 to +1, like 0.2) and how much it can be -1 (from 0 to -1) and what is field on qubit, so another multiplication operation (from -1 to +1 range). So it don’t seems as a lot can be computing power from some classic tricks with magnetic field on ‘qubits’; it must be still something like analog computer, like I said about summing up real numbers (of course it seems, that with trillion real numbers analog addition should be very unprecise, but as you say approximate answer may do a trick and who knows maybe analog can sum up quite precisely even trillion numbers). But, 128 ‘qubits’ for some analog classical computer, excuse me, that can not even compare to desktop computer, because is very unpowerfull. So why you don’t just stop using, that fraze, that you computer is very fast (and computationally powerfull and faster than desktop computer) without quantum computer effects. Or you still can tell me, that 128 qubits Rainer without quantum computer effects is faster than desktop computer (say intel Core i7)? 7. Geordie could you reveal what the next d-wave processor is going to be after the 512q-bit is it a 1024 or 2048? do you expect d-wave to be able to follow the doubling of q-bits every year or will there be a slow down? □ Hi Kasper, the historical trend has been doubling the number every year. This has held now for about 8 years.
{"url":"http://dwave.wordpress.com/2011/12/23/d-wave-makes-hpcwires-top-10-best-stories-of-2011/","timestamp":"2014-04-17T12:33:09Z","content_type":null,"content_length":"69804","record_id":"<urn:uuid:df04828e-bbc5-4377-bfd6-4699f684bcc4>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
Open Source Math CmdrTaco posted more than 6 years ago | from the wouldn't-it-be-nice dept. An anonymous reader writes "The American Mathematical society has an opinion piece about open source software vs propietary software used in mathematics. From the article : "Increasingly, proprietary software and the algorithms used are an essential part of mathematical proofs. To quote J. Neubüser, 'with this situation two of the most basic rules of conduct in mathematics are violated: In mathematics information is passed on free of charge and everything is laid open for checking.'"" cancel × Lol (5, Funny) Matt867 (1184557) | more than 6 years ago | (#21398025) Thanks for the article, now some crazed company is going to try to copyright math. Re:Lol (1) Roager (1188827) | more than 6 years ago | (#21398077) 10 bucks on Microsoft! Those fucking porpriartaryers can learn from riaa (-1, Troll) | more than 6 years ago | (#21398295) That data wants to be free including binaryies. It obivous to me and I am a expert all around this feilds Re:Those fucking porpriartaryers can learn from ri (0) | more than 6 years ago | (#21398319) How is it I can tell you're a Linux user? Re:Lol (5, Funny) | more than 6 years ago | (#21398181) I am going to copyright 0 = 1. Any software that contains i = i+1 must license my math. Re:Lol (4, Funny) Dunbal (464142) | more than 6 years ago | (#21398441) Sorry, but I've already patented the systematic use and manipulation of abstract symbols representing real world quantities in order to derive relationships. Re:Lol (4, Funny) Plutonite (999141) | more than 6 years ago | (#21398957) Sorry, but I've already patented the systematic use and manipulation of abstract symbols representing real world quantities in order to derive relationships. And I've copyrighted proverbial hand-waving. Together, we hold the scientific community hostage! Maths...... (5, Funny) Seoulstriker (748895) | more than 6 years ago | (#21398695) Look around you. Look around you! [youtube.com] That's how I learned maths in high school. I don't know what to say (-1, Offtopic) | more than 6 years ago | (#21398051) I'm just so tired. Re:I don't know what to say (1) fbjon (692006) | more than 6 years ago | (#21398433) I don't know what to say. I'm just so tired. Like this [xkcd.com]. Re:I don't know what to say (1) Wonko the Sane (25252) | more than 6 years ago | (#21398963) I am convinced that this is the best xkcd [xkcd.com] ever. It's all... (2, Insightful) Shikaku (1129753) | more than 6 years ago | (#21398057) about the money. Python is part of the answer (5, Insightful) Ckwop (707653) | more than 6 years ago | (#21398061) I am no a mathematician but surely if you're going to submit a computer aided proof you must submit a full copy of the program. The are all manor of subtle mistakes that can be made in a program that could cause serious problems with a proof. Suppose you inspect the source and find it to be faultless, how can you trust [cryptome.org] the compiler. And if you hand compile the compiler, how can you trust the CPU [wikipedia.org]? Surely it's turtles all the way down. In many ways, establishing the correctness of a computer-aided proof is very much like security engineering. You want to verify that the whole software stack is operating correctly before you can trust the result. Having the source-code is a pre-requisite to this exercise. Changing to topic slightly, I was particularly heartened to see that the open-source mathematics framework being developed one of the authors of the article involves the use of Python. My immediate thought when seeing the title to the article was "Python is the answer." When some problem or algorithm intrigues me the first thing that happens is that I reach for the Python Python seems to deftly marry precision with looseness. When code is laid out in Python I find it is easier to see what it's trying to do than other languages. It's aesthetic qualities aside, it supports a number of features out of the box which I imagine would be ideal of mathematicians. To list a few, it's treating of lists and tuples as first class objects, support for large integers, complex numbers, it's ability to integrate with C for high-performance work. I often think of Python as "basic done right" and it's ideal for mathematicians (or anybody) who don't want to think about programming but the problem at hand. Re:Python is part of the answer (5, Interesting) snarkh (118018) | more than 6 years ago | (#21398313) I have seen from personal experience, how a compiler error (some sort of incorrect optimization) led to a subtle difference in the results of a simple classification task. The insidious thing about that particular result was that it looked very similar to the correct. In fact the difference would not have been found if two people did not run different versions of code independently (and more or less coincidentally) arriving to slightly different error rates. Re:Python is part of the answer (3, Informative) jelle (14827) | more than 6 years ago | (#21398737) From your description, it sound as if you found that the code returned different results at different optimization settings for the compiler, but did not pinpoint what instruction sequence exactly caused the difference. Unless you were using an experimental compiler, that usually means a bug in the code, not a bug in the compiler. Run the code with valgrind, you'll probably find out-of-bound addressing, or uninitialized reads (the signs of the problem being in the code, not the compiler)... Or if you use threads, it can also be in your locks... The reason for that is that such code bugs often result in different code execution at different compiler optimization settings. Re:Python is part of the answer (5, Informative) nwbvt (768631) | more than 6 years ago | (#21398323) I used Python fairly extensively in my number theory course back in college, it did the job fairly well. Its support for large integers was especially important for that class. And the fact that it was very familiar to me (I was a double major in CS and math), it was very easy for me to crank out an algorithm in it. However, most of the book's examples were in Mathematica, which I ended up getting as well. It was a neat tool, but now that my student license has expired and I don't feel like spending a few grand on another license, everything I wrote in that is useless. However I can still pull out my old Python programs and see what it was I was doing. Re:Python is part of the answer (5, Informative) El_Isma (979791) | more than 6 years ago | (#21398549) Let me recommmend you Maxima http://maxima.sourceforge.net/ [sourceforge.net] It's a GPL Computer Algebra System and it's in active development. I use it all the time. Re:Python is part of the answer (1) argiedot (1035754) | more than 6 years ago | (#21398789) With a little work you may be able to do something with Octave, it's partly compatible with Mathematica code. Re:Python is part of the answer (3, Informative) jrminter (1123885) | more than 6 years ago | (#21398827) In addition to octave and maxima, there is sage. [sagemath.org] I have been impressed. Re:Python is part of the answer (0) noidentity (188756) | more than 6 years ago | (#21398365) In many ways, establishing the correctness of a computer-aided proof is very much like security engineering. You want to verify that the whole software stack is operating correctly before you can trust the result. Having the source-code is a pre-requisite to this exercise. I never thought about it until now, but I'd say that math "proofs" done by a computer shouldn't be given as solid a status as those done by humans. It's too easy for the computer to have a glaring bug. Maybe if more than one independently developed proof checking program were run over it (simulating more than one fallible human going over a proof), but how will that happen with patented, proprietary math programs? Re:Python is part of the answer (1) ndevice (304743) | more than 6 years ago | (#21398373) ironically, it's all a house of cards Re:Python is part of the answer (3, Insightful) Dunbal (464142) | more than 6 years ago | (#21398415) The are all manor of subtle mistakes that can be made in a program that could cause serious problems with a proof. No mistakes. After all, the Ultimate Answer really is 42. My program proves it! #define MYANSWER "42" int main() printf("The result is: %s.", MYANSWER); No, you CAN'T have the source code... but look, my program proves it! LOOK AT THE PROGRAM! Re:Python is part of the answer (1) otomo_1001 (22925) | more than 6 years ago | (#21398423) Not to troll or anything, but every one of your reasons for using Python is why I use Ruby. Some *very* recent others that make me like it: * I can now use versions of Ruby that work with dtrace on Leopard and Solaris/Opensolaris (haven't tried FreeBSD yet). * Ruby on Rails, yes despite the hype I like it. Though there are annoyances. * I can also build Ruby (and Python) programs in osx without Coacoa/Objective C. Supported too, yay. * (Not recent, but the reason I prefer Ruby to Python) Whitespace is optional, as are parentheses. I am looking at Perl right now. Faults in 1.8 I don't like: * longjmp/setjmp threading versus native threads in the interpreter. Sort of annoying to have to restrict certain things to the main "thread". * Some functional aspects end up using insane amounts of memory if used. In either case use what works for you, I use Ruby since it lets me work on the solution to the problem. If Python does that more power to you. Back to the topic, shouldn't the math community be promoting a specific language then if they want to develop proofs with computers? Something like Haskell version XYZ should be used for all submitted proofs to verify everything? If we distrust every component of the computing stack we might as well throw them away as being useless. Although if we have a test framework/harness to verify proper operation we can leave most of this up to the interpreter/compiler. I am sure I will get proven wrong on all this so be gentle! Re:Python is part of the answer (1) aldheorte (162967) | more than 6 years ago | (#21398531) Second that on Ruby. I think Ruby is where the brain share and community is going, nothing against Python per se. You have to be careful with Python and Ruby though. For example, I wrote a symbolic math interpreter for simplifying algebraic equations in Ruby. I then realized that I had reinvented LISP. I do not actually program LISP, but in the end, LISP rules all as a programming language, especially when pure math is considered. Re:Python is part of the answer (4, Insightful) poopdeville (841677) | more than 6 years ago | (#21398495) I am no a mathematician but surely if you're going to submit a computer aided proof you must submit a full copy of the program. The are all manor of subtle mistakes that can be made in a program that could cause serious problems with a proof. I am a mathematician. Your referees might ask to inspect the source code. This is akin to a biologist being asked to produce her raw data. But it's pointless anyway. Because... In many ways, establishing the correctness of a computer-aided proof is very much like security engineering. You want to verify that the whole software stack is operating correctly before you can trust the result. Having the source-code is a pre-requisite to this exercise. The AMS isn't worried about the correctness of these "proofs." They aren't proofs. It is logically possible for one of these programs to return the wrong answer, even if the program is correctly implemented. Ergo, it is not a proof. Computing, in mathematics, is a source of fresh problems and a vehicle to explore and gain insight about mathematical structures. The AMS is far more concerned about good exploratory algorithms getting swept up by Wolfram Inc., and Mathworks, and the like, and never being seen by mathematicians again. Regarding which language is approriate for mathematics, the answer is whichever clearly expresses the idea you're trying to write. Lexical scoping is familiar to us. I know I prefer it, since it lessens my cognitive load. I prefer dynamically typed languages. I need the ability to construct anonymous functions efficiently. And I would prefer automatic memoization. Development time is always an issue. Most languages don't come with extensive mathematical algorithm libraries. So you'll either have to write them yourself (time consuming; boring, unless you're into that stuff) or find some. I've used Perl, Ruby, Scheme, and C. Re:Python is part of the answer (2, Interesting) Anonymous Brave Guy (457657) | more than 6 years ago | (#21398605) I fear you and/or the AMS are giving too much credit to the big names in mathematical software. Sure, they have some bright people and they do some useful research in their own right, but they're still only human. They make mistakes, their software has bugs, and they don't know lots of deep secrets that the rest of academia don't. In fact, the development practices at certain high profile mathematical software companies leave a lot to be desired; they tend to hire PhD types, who know a lot about mathematics but may or may not know jack about how to write good software. I rather doubt they're about to kidnap all the leading edge research and make it disappear from everyone not working for them. Disclosure: I work for a mathematical software firm well known in its industry, and I've encountered some of the others in a professional context. I am speaking personally and not on behalf of anyone else here. Re:Python is part of the answer (2, Interesting) poopdeville (841677) | more than 6 years ago | (#21398881) I fear you and/or the AMS are giving too much credit to the big names in mathematical software. I can see why you might think that, but my point had little to do with commercial software houses. My main point was that computer-assisted "proofs" are not proofs in the mathematical sense. They're "results" that rest "scientifically" on the software and hardware and real world. It really doesn't matter whether I use my implementation of Newton's Method or Mathematica's. Neither should be trusted in a proof. I forget who it was (Wiles maybe?), but a famous mathematician once described doing mathematical research as groping around a dark cave, trying to find an exit. A computer program is like a flashlight. Not an exit, but a helpful tool for finding it. Re:Python is part of the answer (1) rucs_hack (784150) | more than 6 years ago | (#21398719) The AMS isn't worried about the correctness of these "proofs." They aren't proofs. It is logically possible for one of these programs to return the wrong answer, even if the program is correctly implemented. Ergo, it is not a proof. I might be wrong, but it occurs to me that a program which 'proves' a mathematical hypothesis can only, on inspection, be shown to be a proof of the program itself, not the initial hypothesis. The problem with software is that it can be made to do anything. Want to model colliding galaxies that mimic observed or hypothesised behaviour? Easy, jut twealk till you get the right result. The result however will not be a 'proof' of the true mechanisms underlying the event in the real universe. The issue then is what you are trying to prove. If it's something that is outside of the domain of the computer, then you can't use it as a proof, since you almost certainly cannot reproduce enough of the influencing factors, in most cases you need to simplify to model in silico. If, on the other hand you aim is solely to produce a model that looks the same, but is not said to be trying to prove the mechanism of galaxy collision, then you can say the software is the 'proof' of your simulation being able to poduce something that looks the same. If the problem being demonstrated is itself solely in the domain of software, then the software can be the proof, in way, albeit not the conventional meaning of the word proof as used in mathematics. Consider machine learning as applied to pattern recognition. You design your classification data structure/algorithm, then construct software that optimises it to perform that pattern recognition as well as can be cheived. In that case the result of the software can be considered the the evidence that supports the hypothesised performance, and the souce code would, in effect, be the 'proof' in loose terms, as it would be the means by which the aproach is shown to be valid. In most cases, it would just be the result that mattered, but unless you can provide a description of the software, or the software itself, so the method can be independantly implemented and verified as producing the results you show, you wouldn't get taken seriously. In that way the software performs the same function as a mathematical proof. It can be independantly checked. Re:Python is part of the answer (1, Funny) | more than 6 years ago | (#21398973) I am a mathematician. Yeah right. I don't see any publications by you on MathSciNet, Mr. poopdeville. Ruby could be the answer as well (4, Interesting) Gadzinka (256729) | more than 6 years ago | (#21398547) Python seems to deftly marry precision with looseness. When code is laid out in Python I find it is easier to see what it's trying to do than other languages. It's aesthetic qualities aside, it supports a number of features out of the box which I imagine would be ideal of mathematicians. To list a few, it's treating of lists and tuples as first class objects, support for large integers, complex numbers, it's ability to integrate with C for high-performance work. I often think of Python as "basic done right" and it's ideal for mathematicians (or anybody) who don't want to think about programming but the problem at hand. I could also recommend Ruby for the job. It has all the features you recommend, and more. If you could forget for a moment about the monstrosity that is Rails (I don't know, lobotomy might do the trick), the language in itself is quite beautiful. There is one special feature of Ruby, that I miss in every single programming language I used since: iterator methods. Any time I want to iterate over elements of an array or hash I just do: myhash.each_pair do |key,val| puts "#{key}: #{val}" That's it, instant "anonymous function" given as a parameter in estetically pleasing syntax. In fact, "for" loop in Ruby is just obfuscated way of calling method #each on an object. But the madness doesn't stop here: File::open("somefile.txt") do |fh| fh.each do |line| puts line It's a pity that so many people disregard Ruby as a "platform for Rails". It is a feature complete countepart to Python, and as my company high volume systems can attest, can handle anything other languages can handle. Re:Python is part of the answer (0) | more than 6 years ago | (#21398575) "The are all manor of subtle mistakes" Or not so subtle spelling mistakes. A manor is a large house. Manner. The word you want is manner. Coq is another interesting tool (3, Informative) DrYak (748999) | more than 6 years ago | (#21398637) We may also mention Coq [wikipedia.org], a proof assistant wich is available under LGPL and runs on OCaml (which in turn is also open sourced and available on Linux). This is a tool that can help mathematician prove their theorems. It was notably being used in the proof of the four color theorem [wikipedia.org], as mentioned on /. [slashdot.org] (article about machine assisted proofs). Re:Python is part of the answer (1) Have Brain Will Rent (1031664) | more than 6 years ago | (#21398923) It seems to me that if you were looking for a language for mathematicians that it would be something that is syntactically very close to mathematical notation... APL was/is such a language, and with all the interactiveness of Basic... but mathematicians aren't using it in any significant number and never really did. Re:Python is part of the answer (2, Insightful) Dare nMc (468959) | more than 6 years ago | (#21398937) You want to verify that the whole software stack is operating correctly before you can trust the result. Having the source-code is a pre-requisite to this exercise. I disagree, it is certainly possible to prove to a reasonable certainty what a black box is doing. It may be easier, or more though to prove looking into the box. As you say, for all practicality no one is going to be able to confirm the entire software stack, by looking at the code for any proof. unless your running the final step on a basic stamp. But if you re-run the program multiple times with the same result, and you run multiple iterations of very similar problems that you know the results of, and they all agree, you can build a reasonable proof. Re:Python is part of the answer (1) Bert64 (520050) | more than 6 years ago | (#21398965) Well, you're only really responsible for the correctness of your own code. As to the compiler and CPU, so long as you use a combination that have been verified as correct by other mathematicians you should be fine. speaking of proprietary (3, Insightful) larry bagina (561269) | more than 6 years ago | (#21398069) The article (which is actually a PDF, thanks for the warning) uses proprietary fonts (LucidaBright). While it was typeset with TeX (open), only the PDF (closed and uneditable) is provided. Re:speaking of proprietary (5, Funny) Main Gauche (881147) | more than 6 years ago | (#21398289) "While it was typeset with TeX (open), only the PDF (closed and uneditable) is provided." Indeed. Now we are left wondering whether the TeX code is buggy. Like maybe an extra character accidentally slipped into the file. therefore mathematics software should %not be open source! Now we'll never know. PDF rant. (4, Insightful) serviscope_minor (664417) | more than 6 years ago | (#21398347) Why does this keep coming up on ./? What is wrong with PDF? It's undeitable, sure, that's kind of the point. However, the spec is accessible, and there are plenty of open readers, e.g. xpdf and Really, what is wrong with PDFs and why should they require a warning? By the way, all scientific papers are disseminated by PDF. Re:PDF rant. (1) Tango42 (662363) | more than 6 years ago | (#21398405) "By the way, all scientific papers are disseminated by PDF." Actually, most scientific papers I see are disseminated as PostScript (often with a PDF option for people without ghostscript or similar installed - basically, non-academics). Re:PDF rant. (1) serviscope_minor (664417) | more than 6 years ago | (#21398445) Actually, most scientific papers I see are disseminated as PostScript (often with a PDF option for people without ghostscript or similar installed - basically, non-academics). Not in my experience. PS is opten an option, but not always. LNCS (Springer?) for instance only offer as PDF. I think Elsevier and the IEEE are like that as well. Re:PDF rant. (1) saforrest (184929) | more than 6 years ago | (#21398497) Actually, most scientific papers I see are disseminated as PostScript (often with a PDF option for people without ghostscript or similar installed - basically, non-academics). Perhaps it depends on the field. In my experience, in computer science all recent papers are provided either as PDF alone or PDF + PostScript, and in my (very limited) experience with refereed publications, PDF is the accepted standard. PDF has a lot of advantages over PostScript, the most obvious of which is internal hyperlinks. Re:PDF rant. (1) Tango42 (662363) | more than 6 years ago | (#21398657) It may well depend on the field - my experience is with Maths papers. Also, I'm thinking of pre-prints rather than papers from journals - journals are more commonly PDF, now I think about it. But my point stands - PDF is far from universal. Re:PDF rant. (1) lahvak (69490) | more than 6 years ago | (#21398977) I think lot of preprints used to be postscript because people simply ran TeX and dvips. With pdftex becoming more popular, I expect that is probably soon going to change. Re:PDF rant. (2) visualight (468005) | more than 6 years ago | (#21398561) I would like a warning because I usually don't click on links to PDFs unless I really need the info. Not because it's proprietary or whatever, they just take a long time to load, and if it's a big one, my browser hangs while it's rendering. Re:PDF rant. (2, Informative) serviscope_minor (664417) | more than 6 years ago | (#21398649) I would like a warning because I usually don't click on links to PDFs unless I really need the info. Not because it's proprietary or whatever, they just take a long time to load, and if it's a big one, my browser hangs while it's rendering. Then get a better PDF reader. Even on a very slow computer, xpdf or ghostview have subsecond load times. If you use mozilla related browsers, then plugger will let you "embed" decent PDF readers. In fact if you install mozplugger under Ubuntu, it uses evince by default. If you don't use mozilla, then set it up to use $viewer as an external helper application. My guess is that your bias against PDF comes from the awful Adobe viewer. Re:PDF rant. (1) Eivind Eklund (5161) | more than 6 years ago | (#21398857) PDFs sucks in the default reader, and it often requires external shitty setup. This makes the format suck (on the web) for many/most people. Thus, it is courteous to give a warning. Whether a warning is unnecessary for YOU doesn't matter - it's courteous because the format is annoying for a large enough fraction to matter. For me, I find it particularly annoying because the default Adobe PDF plugin on Windows sometimes crash my browser. I think that's true for many others, too, though I don't know that for sure. Re:PDF rant. (1) serviscope_minor (664417) | more than 6 years ago | (#21398969) PDFs sucks in the default reader, and it often requires external shitty setup. This makes the format suck (on the web) for many/most people. Thus, it is courteous to give a warning. Whether a warning is unnecessary for YOU doesn't matter - it's courteous because the format is annoying for a large enough fraction to matter. For me, I find it particularly annoying because the default Adobe PDF plugin on Windows sometimes crash my browser. I think that's true for many others, too, though I don't know that for sure. Really? The default reader under Ubuntu seems OK. It sounds like you're using Windows, where there isn't a default reader. Sounds like you chose to install a bad reader. An interesting choice given that you don't like it and it crashes your browser. Do you think that a website aimed at techs should give a warning because some of its users are unable to install software that they like? Perhaps slashdot should stop using HTML, since many people use internet explorer which is a sucky browser. Or perhaps you should try browsing under a good Linux distro. It sounds like a much more pleasant experience. Re:PDF rant. (1) mfnickster (182520) | more than 6 years ago | (#21398673) > Really, what is wrong with PDFs and why should they require a warning? Well, for one thing, if you use unusual fonts or special symbols, you can never be 100% sure that the reader on the other end will see them properly. PDF should include an option for graphically rendering fonts which the user doesn't have installed. After all, I've never taken a piece of paper to another location and suddenly seen the writing on it turn to gobbledygook - something I can't say for PDF. Re:speaking of proprietary (0) | more than 6 years ago | (#21398381) The PDF from the 1srt version is a proprietary but open format at the same time and only the DRM part is closed, even if it have patents they are royalty free and only used for preventing incompatible implementation, for their history visit: http://www.acrobatusers.com/blogs/leonardr/history-of-pdf-openness/ [acrobatusers.com] Re:speaking of proprietary (3, Informative) StormReaver (59959) | more than 6 years ago | (#21398401) "While it was typeset with TeX (open), only the PDF (closed and uneditable) is provided." PDF is neither closed nor uneditable. Adobe publishes the complete PDF format for anyone to use free of charge. It may not be FSF Free (since Adobe requires that implementers adhere to certain rules that violate the principle of Free), but it's definitely not closed. Also, KWord will import it for further editing, text and images, so it's not uneditable (even if it's not ideal). I agree with your main point, but let's cut PDF some slack. Re:speaking of proprietary (2, Informative) 1u3hr (530656) | more than 6 years ago | (#21398507) The article (which is actually a PDF, thanks for the warning) uses proprietary fonts (LucidaBright). While it was typeset with TeX (open), only the PDF (closed and uneditable) is provided. I think (hope) you're joking, but several people who responded seem to be taking this at face value. It's wrong in several ways. PDF is an open format, and if you look at the file info, you see that this particular PDF was generated with Ghostscript. And it's quite simple to edit PDFs. Not as easy as, say HTML, but much easier than if it were, say, a TIFF file. I personally use Adobe Acrobat, but a great many free and commercial apps can read, write, and manipulate PDF files. That's why the format was created, for use in DTP, not a locked document format as some business people seem to seriously, wtf? (4, Informative) tetromino (807969) | more than 6 years ago | (#21398929) The article (which is actually a PDF, thanks for the warning) uses proprietary fonts (LucidaBright). While it was typeset with TeX (open), only the PDF (closed and uneditable) is provided. Oh, where to begin... 1. The only reason you would need a "PDF warning" is that you use an operating system with poor support for the format (i.e. Windows). Switching to a real OS, among other benefits, will make reading math papers (which are almost always in PDF format) a pleasure. 2. PDF is an open standard [adobe.com], which has been implemented by many different parties: Adobe and Apple have closed-source implementations; freedesktop.org's poppler and cairo libraries are Free software. 3. The fontface chosen by AMS is orthogonal to the content of the paper - you can easily copy-paste the text and use Computer Modern, Dejavu, Liberation or any other open-source font of your choice. Why would a proprietary font embedded in a PDF file bother you any more than the proprietary fontface of a book? 4. First of all, PDF is editable [petricek.net]. And second, why would you want to edit this particular document? Remember, it's copyrighted by AMS - if you can't prove fair use, you do not have the right to distribute a modified version. quote is out of context (1) mr.Peabody (56428) | more than 6 years ago | (#21398083) The J.Neubuser quote is in reference to using proprietary, closed source software for proofs. The point being that without seeing the guts of the software it is hard to tell if the proof is correct, or dependent on a flaw in the software. Re:quote is out of context (0) | more than 6 years ago | (#21398725) This is highly misleading. A proof can be checked without knowing how it was obtained. (As a matter of fact, in some sense that's the whole point of a proof.) Proof checking is far easier than generating a proof in the first place. If a proof is generated by closed-source software, it can still be checked for errors quite easily. Access to the proof generator itself is not necessary. The idea of "proof carrying code" (PCC) is fundamentally based on this observation. Of course, a result of the form "software XYZ says it is true" is not a proof in the above sense. Openness is Fundamental to Mathematics (2, Interesting) aproposofwhat (1019098) | more than 6 years ago | (#21398087) The article is a very well argued opinion piece, and is correct in that only open-source software should ever be used in a proof. It is fundamental to mathematics that other mathematicians in the same field can check a proof, and the use of closed source software makes that logically impossible, for without access to the source of the application, it is not possible to guarantee that any particular operation has been implemented correctly. He's also plugging his own open source project, SAGE [sagemath.org] - I might have to download it and see if the rusty old brain cells can figure out how to play with it ;) Re:Openness is Fundamental to Mathematics (1) tcgroat (666085) | more than 6 years ago | (#21398335) If software is used in a formal mathmetical proof, then the software itself must be subjected to rigorous mathematical proof. Every step must be justified based on accepted postulates and previously proven theorems, or else the work isn't rigorous and doesn't qualify as mathemetically "proven". As I repeatedly tell my daughter about her alegbra, you must show your work: it isn't just coming up with the "right answer", it's about how you know it's the right answer. Opaque software isn't mathematic proof, it's saying "Trust me!". That line doesn't relieve the doubt, it only confirms suspicion that the proof is incomplete. Re:Openness is Fundamental to Mathematics (0) | more than 6 years ago | (#21398623) If software is used in a formal mathmetical proof, then the software itself must be subjected to rigorous mathematical proof. You phrased that awkwardly, almost as if you know what all the words mean. I doubt you know what you're talking about, though I hope I'm not insulting you as I say it. A program cannot ever be verified for correctness enough to make results returned from it "proved." For example, consider the four-color theorem. It was "proved" in the 70s by enumerating all possible 5 element graph colorings and verifying that they could be turned into 4 color graphs. The source code has been poured over many times. And yet, it is not a proof. It is logically possible that the program produced an error despite having been implemented correctly. Proofs do not share this property. Re:Openness is Fundamental to Mathematics (0) | more than 6 years ago | (#21398843) The four-color-theorem has meanwhile be re-proved using the Coq theorem proving assistant. While I agree that the 70s program was not a proof, the Coq proof certainly is. Re:Openness is Fundamental to Mathematics (3, Insightful) s20451 (410424) | more than 6 years ago | (#21398571) Well, don't get your panties in a big bunch over this. Humans make mistakes in proofs all the time, many of which are not caught before publication (and many not even for some time afterward). Also, although it's not in the field of theorem-proving, the mathematical package I use the most -- MATLAB -- is a million times better than the open source equivalent, Octave. I'm not going to use Octave simply because I can inspect the code, because who does that? An error in a software proof would be pretty obvious if it were checked with another independently written piece of software. With MATLAB, I can write my own alternative algorithm using C if I need to, though with significantly more effort and annoyance. Furthermore, mathematicians are smart people who are fully aware of the implications of their assumptions, probably moreso than any other group of people I have encountered. Reading the set of comments accompanying this article, saying what mathematicians should and should not consider a proof, is like watching monkeys trying to use a can opener. Re:Openness is Fundamental to Mathematics (0) | more than 6 years ago | (#21398811) I don't have MATLAB. If you showed me a proof that included "proof by MATLAB", I wouldn't believe you. You can't get away with that shit in most mathematics journals I read. Re:Openness is Fundamental to Mathematics (1) moosesocks (264553) | more than 6 years ago | (#21398909) I'm not 100% sure, but I'm pretty sure that the source for many of MATLAB's functions (albeit copywrighted) is available for inspection. Should journals reject such proofs? (3, Insightful) davidwr (791652) | more than 6 years ago | (#21398107) Algorithms cannot be protected by copyright, only by patents and trade secrets. If the algorithm is a trade secret, it has no place in a mathematical proof because it cannot be shared with the world and verified or refuted by anyone interested in doing so. If the algorithm is part of a patented device or piece of software, its use in a mathematical proof is not subject to the patent on the grounds that pure math cannot be patented. If journals and academic societies refused to publish proofs based on trade secrets and insisted on a covenant not to enforce the patent against researchers doing purely mathematical research or those who publish the research, the problem would mostly go away. An alternative to the covenant is congressional action or a court ruling that says with absolute clarity that mathematical research is exempt from math-related patents directly related to the research. Personally, I'm against all such patents but I'm not holding out hope that Congress or the Courts will agree with me. Re:Should journals reject such proofs? (1, Informative) | more than 6 years ago | (#21398435) An alternative to the covenant is congressional action or a court ruling that says with absolute clarity that mathematical research is exempt from math-related patents directly related to the Research is already exempt [wikipedia.org] from patent infringement. The "OMG, yuo cant do research because of teh patents!!!!" stuff you read here is pure fearmongering. Not Proven (3, Insightful) nagora (177841) | more than 6 years ago | (#21398141) If a "proof" is published with some steps or information excluded then it's not a proof, it's just an assertion. I agree... (1) TheSHAD0W (258774) | more than 6 years ago | (#21398199) ...and I don't think journals should accept papers which don't include proofs verifiable only with closed-source software. Re:I agree... (1) TheSHAD0W (258774) | more than 6 years ago | (#21398213) Er, delete the second "don't". Sorry. X_X Re:Not Proven (4, Informative) ciaohound (118419) | more than 6 years ago | (#21398251) As a high school math teacher, I am familiar with some of the details of Thomas Hales' proof of Kepler's "Cannonball" Conjecture, concerning the most efficient way to stack spheres. When he first published his proof in 1996, he included the source code for the programs that were used to do the calculations for the thousands of possible sphere configurations. I think most of the code was actually written by his graduate assistant. At first that struck me as cheating -- "... and then this program runs. Q.E.D." -- but then I realized that if anyone else was to verify his results, they would need the programs. There are just too many calculations to perform without software, which is why the conjecture went unproven for four hundred years. But without the source code, it would smack of charlatanism. Propriatary Software (4, Funny) calebt3 (1098475) | more than 6 years ago | (#21398185) Increasingly, proprietary software and the algorithms used are an essential part of mathematical proofs Like Excel's 65,535-equals-100,000 formula? Re:Propriatary Software (1) calebt3 (1098475) | more than 6 years ago | (#21398205) Misspelled Proprietary. file under self-serving nonsense (0) | more than 6 years ago | (#21398187) One of the authors acknowledges being the founder of an OSS project that provides... guess what... a comprehensive toolkit for mathematical research! So if research journals require computationally based proofs to use OSS, guess who's company will be the big beneficiary? And guess who personally will be in big demand to travel to conferences around the world explaining the concepts and methods of this suddenly necessary piece of software. Bottom line is that computational techniques should be explained so that they can be duplicated or refuted using any software of the colleague's choosing. The aim should not be to turn pure mathematicians into maintenance programmers, having to gain proficiency with the 10 or so different languages and platforms listed by the authors at the piece. Proof exchange format (2, Interesting) David Deharbe (1150399) | more than 6 years ago | (#21398301) IMHO, it would be an important contribution to establish an open proof exchange format that make it possible to... exchange proofs between different tools: theorem prover, proof checker, etc. Possibly this format would have a translator to a human-readable format (e.g. based on TeX) that would also make it possible for humans to review the proof process. Re:file under self-serving nonsense (0) | more than 6 years ago | (#21398659) Why couldn't the author be right even though he stands to benefit from being so? If you perceive a need for more use of OSS in a field, isn't it natural to make it more feasible by founding an OSS project to make a readily available comprehensive toolkit? Why would using open source instead of closed source products magically turn 'pure mathematicians' into maintenance programmers? Most matematicians I know are bright people, and have learned some pretty arcane computer stuff to use in their work, e.g. TeX/LaTeX and Fortran. Welcome to the world of modern research ... (4, Interesting) MacTO (1161105) | more than 6 years ago | (#21398209) This problem goes beyond mathematics, and reaches into many of the sciences. Mathematicians and scientists often place undue trust in complex software systems, simply as a matter of getting the work done faster rather than producing higher quality research. Sometimes it is a case of handling large volumes of data, in which case human intelligence and discretion is a bottleneck. Sometimes it is a matter of finding numerical solutions where analytic ones are difficult (if not impossible) to find at present. And, in the case of mathematics, I'm guessing that they are using it as a shortcut for those difficult analytic solutions. Then again, I must really ask if the mathematician in question understands what they are doing if they are using software as a shortcut for difficult analytic solutions. After all, if they don't understand the algorithms well enough to do the work themselves, who is going to say that they understand the limitations of the rules that they are asking the computer to apply. Re:Welcome to the world of modern research ... (2, Insightful) jhfry (829244) | more than 6 years ago | (#21398321) I thought the same thing... shouldn't mathematic proofs be independent of outside influence, shouldn't they stand on their own and make as few assumptions as possible. I figured that a proof, properly done, would be a large step by step solution to the problem. Then I realized that many proofs aren't concerned with single-input single-output situations, but instead may require thousands of iterations based upon large sets of inputs. You can't do that by I am certain, that because computers/software are being used we will eventually find an accepted proof that is scrapped because it exploited (inadvertently) a bug/limitation of the software used to test it. Unfortunately there is nothing to be done! Re:Welcome to the world of modern research ... (2, Interesting) mathcam (937122) | more than 6 years ago | (#21398835) And, in the case of mathematics, I'm guessing that they are using it as a shortcut for those difficult analytic solutions. This is certainly one application, but the use of computers in the more "pure" aspects of mathematics is nothing to sneeze at either. Programs like GAP for group theory, PARI for number theory, and Macaulay for commutative algebra and algebraic geometry play a significant role in the development of their respective subjects. For example, there's very little you can say about the Monster group [wikipedia.org] without the aid of computer calculations -- it's not that researchers don't understand the algorithms involved, it's that it's physically impossible (given reasonable time constraints) to say anything non-trivial without computer aid. To address the other concerns, unlike the numerical solutions, there are frequently completely independent algorithms for checking the results of your first algorithm, so that trusting the original algorithm is less of an issue. From the flip side... (1) 3seas (184403) | more than 6 years ago | (#21398231) ...is this saying the American Mathematical Society is accepting proprietary software used in proofs? Seems the only problem here is one of the position of the AMS regarding what is acceptable. Re:From the flip side... (1) poopdeville (841677) | more than 6 years ago | (#21398687) No, they aren't. Computer-assisted "proofs" are not proofs. They're "results". Subtly different. Proofs have the force of logic behind them. "Results" aren't guaranteed to. A computer-assisted proof cannot be a mathematical proof because it is logically possible for a correctly implemented program to return a false result. This is true whether the source is available or not. But the proof steps are known, right? (1) DrEasy (559739) | more than 6 years ago | (#21398305) I don't understand: don't these automatic theorem provers provide the steps they took to prove the theorem? As long as those steps are provided and can be verified, I don't see why we care how the proof was obtained. We don't always know how proofs obtained by humans were obtained either; they don't tell us what they had for breakfast that day or what inspired them. There's probably not much insight that can be obtained by the source code of the theorem prover, you can always just assume that it was brute force with some optimization tweaks. As long as you don't just take the proof at face value and that you verify the proof you should be fine, no? And if you used another software tool to do the verification itself, then verify the verification manually. And so on. Verifying the proof of a theorem should always be easier than coming up with the proof, so this is not a hopeless process. Re:But the proof steps are known, right? (2, Informative) flajann (658201) | more than 6 years ago | (#21398509) The advantage of having the source code is that, in a lengthy proof that involves thousands of steps that may be hard to follow, one may have an easier go at proving that the software did the steps correctly. At least, if a bug were found that would save you many hours over sweating over the actual proof!!! Open Formats (2, Insightful) iamacat (583406) | more than 6 years ago | (#21398357) Proprietary math software is not a problem as long as the end result can be exported into a fully documented format and can be then verified by open software, including human mathematicians. openmodelica.... (2, Interesting) dohmp (13306) | more than 6 years ago | (#21398409) not entirely on-topic, but i figured the slashdot community might be interested in this tool. OpenModelica [ida.liu.se] a very nice modelling package that can help you with practical mathematics issues like mathematica might. Not necessarily bad in all cases... (4, Insightful) Ardeaem (625311) | more than 6 years ago | (#21398459) There are some programs which can aid proofs that are closed source. This doesn't HAVE to mean that steps of the proof are omitted. Take, for example, Mathematica for the Web [calc101.com]. It can spit out a result, including all the steps (try a derivative). Or check out a sample Otter proof [anl.gov]. Mathematica is closed source, Otter is open source. However, even if both of these were closed source, all the steps would be laid bare for all to see. In other cases, like the proof of the four color theorem, it seems like the source code is important to see, but not essential. Pseudocode should suffice. Providing pseudocode is akin to saying things like "Simplifying expression (1) yields..."; we don't have to provide EVERY step, but with pseudocode you have enough to determine whether the algorithm itself will work. Checking the source code beyond that is akin to checking someone's algebra. Just because we don't know how the program arrived at the steps it did doesn't mean that we shouldn't use it; we can usually check the steps. After all, the human brain has been a closed-source proof machine for thousands of years, and no one has complained about that :) Just require pseudocode in computer aided proofs, and it should be sufficient. Re:Not necessarily bad in all cases... (2, Insightful) mopslik (688435) | more than 6 years ago | (#21398683) In other cases, like the proof of the four color theorem, it seems like the source code is important to see, but not essential. Pseudocode should suffice. Providing pseudocode is akin to saying things like "Simplifying expression (1) yields..."; we don't have to provide EVERY step, but with pseudocode you have enough to determine whether the algorithm itself will work. Checking the source code beyond that is akin to checking someone's algebra. Perhaps I'm being too pessimistic, but shouldn't the source code have to be provided alongside the pseudocode? If the pseudocode is 100% spot-on, then there would really be no need for the computer-assisted proof in the first place --- you will have provided a proof in the form of verifiable instructions. But the FCT was proved by some amount of brute-force, IIRC. Who is to say that the coder who translated from pseudocode to source code didn't mess something up? I mean, if my pseudocode reads INCREMENT current value by ONE OUTPUT result of long computations and my source code is entered as value += value++; then even if the pseudocode is verified, the program may still be producing an erroneous result. In other words, you're assuming that IF the pseudocode is correct THEN the program itself is also correct, which may not be the case. Re:Not necessarily bad in all cases... (1) Ardeaem (625311) | more than 6 years ago | (#21398887) But with the pseudocode, you can write your own program in whatever language you like to verify the results. In my opinion, proving something doesn't obligate you to show every single step. We all omit things in proofs, especially steps which can be verified easily by others, like algebraic simplification, etc. The pseudocode is the minimum acceptable transparency in a computer-aided proof. Re:Not necessarily bad in all cases... (1) HiThere (15173) | more than 6 years ago | (#21398975) I've encountered too many programs where the source code doesn't match the documentation. For some of your arguments, that's not fatal. If the entire proofs are made explicit, then you can argue that it's like not being able to peer inside the skull of the mathematician. In the cases, however, where you are depending on the results of computational steps (as in the four color proof), those steps need to be made open and explicit. Pseudocode is not sufficient. You don't know that it actually reflects the code that was used. first: prove the correctness of your software (1) petes_PoV (912422) | more than 6 years ago | (#21398515) If you are using a software tool/package, then it must have been subject to mathematically rigourous tests to demonstrate it's own correctness. If not, then the foundation of any proofs that use it must be in doubt. So, if you use a closed product, how can that have been proved corect (independently of the supplier, of course) without recourse to the source code? What about hardware? (3, Insightful) LM741N (258038) | more than 6 years ago | (#21398525) I would think that hardware errors would be an even worse problem, like the old Pentium bug, since they are so insidious. Why I don't trust Python (1) Nuwdle (1190721) | more than 6 years ago | (#21398679) Python 2.5.1 (current)... Command Line: >>> 1.00 - 0.01 I hope I'm not the only one that thought of this one. Re:Why I don't trust Python (1) William Stein (259724) | more than 6 years ago | (#21398753) The program Sage http://sagemath.org/ [sagemath.org] mentioned in the article uses Python extensively, but with a few changes when used interactively. In particular, all floating point literals are created as Python objects that wrap MPFR C-library objects http://www.mpfr.org/ [mpfr.org] which have better semantics. In particular, your example above in Sage becomes: sage: 1.00 - 0.01 Likewise, in Python one has the confusing (to a mathematician): >>> 1/3 In Sage integer literals wrap GMP integers http://gmplib.org/ [gmplib.org] (which are vastly faster than Python's large integers), and one has: sage: 1/3 -- William, http://wstein.org/ [wstein.org] (author of the article being discussed) Re:Why I don't trust Python (3, Informative) Just Some Guy (3352) | more than 6 years ago | (#21398841) >>> 1.00 - 0.01 I'm too lazy to see if that's the IEEE 754 result or not (but I suspect it is). But three things in Python's defense: 1. Floats can only store exact values for the fractional part when the denominator is a power of 2. The "100" in "1/100" isn't a power of two, so IEEE 754 cannot represent it perfectly. 2. .999999999... == 1, so the answer is still correct. 3. If you must have exact answers, use the Decimal type: >>> 1 - decimal.Decimal(".01") Re:Why I don't trust Python (4, Informative) fredrikj (629833) | more than 6 years ago | (#21398895) Python calculated exactly what its documentation says it will do: ((1 minus the IEEE-754 double closest to 1/100) rounded to the nearest IEEE-754 double). It's not Python's fault if you don't know the basics of floating-point arithmetic. Mathematicians who use or write numerical software do. I recommend reading What Every Computer Scientist Should Know About Floating-Point Arithmetic [sun.com]. bad analysis, bad results (2, Insightful) fermion (181285) | more than 6 years ago | (#21398699) This seems to fall under the realm of researchers using tools they do not understand. Black box science does not work. As has been mentioned, the results cannot be shown to be valid. A recall a few recent incidents in which papers had to be retracted because the machine did not do what the researchers thought it did. I have personal experience in which the spectroscopy generated by the computer did not reflect reality. If the researcher does not know how to use a tool, then he or she does not know when that tool is being misused. I am not sure something like mathematica is the issue. Wolfram seems to use standard standard well known algorithm. Almost every academic institution has a license, so, given the data, any number of people can rerun the analysis. Likewise the algorithms can be tested with simpler data sets to understand how they work and breakdown. I would be more worried about homegrown software. Re:bad analysis, bad results (-1, Offtopic) | more than 6 years ago | (#21398949) Blacks do not work. And neither will Mexicans once they president Hillary and the democrats make them eligible for welfare. look at who's speaking... (2, Interesting) legrimpeur (594896) | more than 6 years ago | (#21398705) ... try to read a paper from their journal (JAMS http://www.ams.org/jams/2003-16-03/S0894-0347-03-00422-3/S0894-0347-03-00422-3.pdf [ams.org]) and you will be asked for... money. Well that's their interpretation of "... In mathematics information is passed on free of charge..." cheers Re:look at who's speaking... (3, Informative) William Stein (259724) | more than 6 years ago | (#21398837) The AMS did not write that article. I wrote the article as an opinion piece and the AMS published it. They do not necessarily agree with the points made in the article. By the way, the article is not about formal automated proofs. It is about what is now standard procedure in mathematical research, namely proofs that look like this: [Formal mathematical argument] ... and (using [Mathematica|Magma|...]) we deduce that [...]. It's incredibly common right now when reading published mathematical papers to see random citations to using closed source software to do key steps of calculations. Usually even the code used to get the closed source program to yield the result isn't given. The way many mathematicians read proofs is that they often basically skim the argument to get a general idea of what it is about. Then they decide they want to prove something similar or related, and they "dive" into the most refined details of some key part of the argument. When a part of the argument is "... using Mathematica we deduce ..." this gets very very frustrating, since one just hits a brick wall. And, in practice, reimplementing -- with enough optimization to make it useful for research -- just one or two key functions from Mathematica or Magma, can take literally years of work (in fact, that's exactly what I've been doing the last few years with http://sagemath.org/ [sagemath.org]). And sometimes exactly that is necessary to go beyond what has already been done, i.e., to do research. -- William Stein Re:look at who's speaking... (0) | more than 6 years ago | (#21398865) A (really free) version is posted at http://sage.math.washington.edu/home/wdj/research/index.html [washington.edu] Norman Megill's Meta-Math for proof verification (3, Informative) ClarkEvans (102211) | more than 6 years ago | (#21398745) http://metamath.org/ [metamath.org] has been around for 15 years or so; it has a very nice text-based proof expression, a huge library of existing proofs and a graphical visualization tool greed rears its ugly head (-1, Offtopic) | more than 6 years ago | (#21398803) what the 'fck' is it about the privileged that enough is never enough even when others suffer death from deprivation imho, affluent people are evil Sage (3, Informative) | more than 6 years ago | (#21398859) Sage( http://www.sagemath.org/ [sagemath.org] ) is currently the most full=featured open-source computer algebra system. It is being developed by the two authors of the AMS opinion piece (and many others including myself). Our goal is to provide a free, viable, open-source alternative to Mathematica, Maple, MATLAB, and Magma. Some nice features of Sage include: * It uses Python as its programming language so that you can use any existing Python modules with your Sage programs. * Sage also includes Cython ( http://www.cython.org/ [cython.org] ) which is based on Pyrex and allows one to easily compile Python code down to C for speed. * Sage's notebook interface with also interface with pretty much every existing computer algebra system, open-source or not. * Sage includes Maxima, GAP, Scipy, Numpy, and many other open source math packages. * A very active developer community. If there is something that you need Sage to do, chances are that there will be a number of developers willing to help you out. For some screenshots, see http://www.sagemath.org/screen_shots/ [sagemath.org] . One of the things that Sage needs most now is more users. So, if you have an interest in open source math software, definitely check out Sage. not really true (1) superwiz (655733) | more than 6 years ago | (#21398871) Mathematics goes in an out of the phases of being secretive and open. Pythagoreans were very secretive. So were statisticians in the 19th century. I am pretty sure investment bankers do a great deal of math that they don't want anyone to ever see because it gives them an edge in the market. It's sort of like gun powder. When first discovered, the secret is tightly controlled because it would gives advantage over the competition. Then the competition realizes that it is being consistently beaten and tries to emulate/steal the results. After a while, everyone knows what the results are. And then the "philosophers" come in. That is the people who ponder the implications of the results discovered out of necessity. Since these people are not interested in any immediate payback, they insist that everyone shares the results so that more can be discovered by all. They try to convince everyone that this is the "natural" way of things. But what is "natural"? Without the push of necessity the original results would have never been found. And without the contemplative phase of shared discovery, progress would not have been made to the point when the new era of rapid discovery done to assist in competition would come about. These phases of going in and out of secret (of math, science, heck of all knowledge that is used to maintain society) drive each other. So arguing for one or another is just another flame war. Load More Comments
{"url":"http://beta.slashdot.org/story/93401","timestamp":"2014-04-20T01:28:25Z","content_type":null,"content_length":"300782","record_id":"<urn:uuid:0741988b-2cbc-4854-8d7d-6160563f862c>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Modeling the concentration dependence of diffusion in zeolites. II. Kinetic Monte Carlo simulations of benzene in Na-Y Chandra Saravanan Department of Chemistry, University of Massachusetts, Amherst, MA 01003 Scott M. Auerbacha) Departments of Chemistry and Chemical Engineering, University of Massachusetts, Amherst, Massachusetts 01003 Received 12 June 1997; accepted 14 August 1997 We have performed kinetic Monte Carlo simulations of benzene diffusion in Na-Y at finite loadings for various temperatures to test the analytical theory presented in Paper I, immediately preceding this paper. Our theory and simulations assume that benzene molecules jump among SII and W sites, located near Na ions in 6-rings and in 12-ring windows, respectively. The theory exploits the fact that supercages are identical on average, yielding D 1 6k a2 /6 1 1 Keq(12) , where k is the cage-to-cage rate coefficient, Keq(12) is the WSII equilibrium coefficient, 1 is the mean W site residence time, and is the transmission coefficient for cage-to-cage motion. The simulations use fundamental rate coefficients calculated at infinite dilution for consistency with the theory in Paper I. Our theory for k , Keq(12) and 1 agrees quantitatively with simulation for
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/677/1661022.html","timestamp":"2014-04-21T05:39:56Z","content_type":null,"content_length":"8402","record_id":"<urn:uuid:ff4a12bf-ec6a-40a2-9859-50304fff0267>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
Spheres over rational numbers and other fields up vote 6 down vote favorite Let K be an ordered field. Define the n-sphere: $$S^n(K) := \{ (x_1,x_2,\dots,x_n+1) \in K^{n+1} \mid \sum_{i=1}^{n+1} x_i^2 = 1 \}$$ A set of vectors $v_1, v_2, \dots, v_r \in S^n(K)$ is orthonormal if the dot product of any two of them is zero. An orthonormal basis is an orthonormal set of cardinality $n + 1$. 1. Is every vector in $S^n(K)$ a member of an orthonormal basis? If not, what is the largest r such that very vector is a member of an orthonormal set of size r? 2. More generally, given n and s, what is the largest r such that every orthonormal set in $S^n(K)$ of size s is contained in an orthonormal set of size r? What's known: 1. For $n = 1$, every vector in $S^n(K)$ is a member of an orthonormal basis, regardless of K. 2. If K is Pythagorean (i.e., a sum of squares is a square) every orthonormal set completes to an orthonormal basis (use Gram-Schmidt). Can more be said? I'm most interested in the case of K the field of rational numbers or a real number field, and the case $n = 2$. ADDED LATER: I am assuming that K is an ordered field here. Otherwise, we need to modify our definition of orthonormal set to also include the condition that the vectors are linearly independent, which is automatically true for ordered fields. Observations for fields that are not ordered (such as non-real number fields or fields of positive characteristic) would also be much appreciated. MODIFIED: As Bjorn Poonen points out below, linear independence turns out to follow automatically in this case. (Though in general, over non-ordered fields, there can exist orthogonal vectors that are linearly dependent, our condition that the vectors be "normal" rules this out). You don't need to modify your definition of orthonormal set: the vectors are automatically linearly independent. (See my answer below.) – Bjorn Poonen Feb 17 '10 at 2:40 add comment 2 Answers active oldest votes Any orthonormal set extends to an orthonormal basis, over any field of characteristic not $2$. This is a special case of Witt's theorem. EDIT: In response to Vipul's comment: The proof of Witt's theorem is constructive, and leads to the following recursive algorithm for extending an orthonormal set $\lbrace v_1,\ ldots,v_r \rbrace$ to an orthonormal basis. Let $e_1,\ldots,e_n$ be the standard basis of $K^n$, where $e_i$ has $1$ in the $i^{\operatorname{th}}$ coordinate and $0$ elsewhere. It suffices to find a sequence of reflections defined over $K$ whose composition maps $v_i$ to $e_i$ for $i=1,\ldots,r$, since then the inverse sequence maps $e_1,\ldots,e_n$ to an orthonormal basis extending $v_1,\ldots,v_r$. In up vote 13 fact, it suffices to find such a sequence mapping just $v_1$ to $e_1$, since after that we are reduced to an $(n-1)$-dimensional problem in $e_1^\perp$, and can use recursion. down vote accepted Case 1: $q(v_1-e_1) \ne 0$, where $q$ is the quadratic form. Then reflection in the hyperplane $(v_1-e_1)^\perp$ maps $v_1$ to $e_1$. Case 2: $q(v_1+e_1) \ne 0$. Then reflection in $(v_1+e_1)^\perp$ maps $v_1$ to $-e_1$, so follow this with reflection in the coordinate hyperplane $e_1^\perp$. Case 3: $q(v_1-e_1)=q(v_1+e_1)=0$. Summing yields $0=2q(v_1)+2q(e_1)=2+2=4$, a contradiction, so this case does not actually arise. This says that the answer to your question 1 is YES, and that the answer to your question 2 is always n+1, over any field of characteristic not 2, and for any n. – Bjorn Poonen Feb 17 '10 at 2:46 2 Thanks, that seems to settle it! Is the theorem constructive? i.e., is there an algorithm that works, say over the rational numbers or over finite fields? – Vipul Naik Feb 17 '10 at In the three-dimensional case, with the standard dot product, the other two vectors for a vector $(a,b,c)$ on the sphere (with $a^2 + b^2 + c^2 = 1$) are: $b, (c^2 - ab^2)/(b^2 + c^ 2), -bc(1 + a)/(b^2 + c^2))$ and $(c, -bc(1 + a)/(b^2 + c^2), (b^2 - ac^2)/(b^2 + c^2))$ – Vipul Naik Feb 24 '10 at 23:16 add comment For the case of the rational numbers and the unit 2-sphere in 3-space there is a paper by Anthony Osborne and Hans Liebeck in the American Math Monthly v.96 (1989) that gives a construction (near the end of the short paper) which seems to show that every rational unit vector extends to a rational orthonormal basis. I haven't gone through the details of the up vote 7 construction, but it looks fairly elementary, as one would expect for this journal. down vote add comment Not the answer you're looking for? Browse other questions tagged linear-algebra fields geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/15496/spheres-over-rational-numbers-and-other-fields/15540","timestamp":"2014-04-17T04:18:52Z","content_type":null,"content_length":"61614","record_id":"<urn:uuid:0827b4bd-04ec-48bc-9f2c-0867165d9045>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Coin Fountains with a Given Base A fountain is an arrangement of coins where each coin above the bottom row rests on two coins on the row below. The numbers of coin fountains with a base of size —that is, with coins in the bottom row—are the Catalan numbers . This can be seen by mapping a given coin fountain to a corresponding lattice path. Snapshots 1, 2: For a given base size , the minimum number of coins in a fountain is —no coins above the base—and the maximum number is the triangular number . Between these extremes, there are multiple fountains with a base size and between and coins. (There is only one fountain with base and coins: a triangle missing its top.) Snapshot 3: Each coin fountain corresponds to a lattice path, and this mapping illustrates the equivalence between coin fountains and Catalan numbers. [1] R. P. Stanley, Enumerative Combinatorics , Vol. 2, Cambridge: Cambridge University Press, 1999 p. 228.
{"url":"http://demonstrations.wolfram.com/CoinFountainsWithAGivenBase/","timestamp":"2014-04-19T12:00:46Z","content_type":null,"content_length":"43381","record_id":"<urn:uuid:324780c3-5d24-4e02-a57e-66b9672aea86>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
Source code files accompanying article are located on MacTech CD-ROM or source code disks. If we could count the stars, we should not weep before them. George Santayana In LISP, numbers of type fixnum are the analog to type int in conventional languages. In MacScheme, a fixnum is represented in 30 bits as a two's complement number. (The top 2 bits are used for tag information, leaving 30 bits to play with.) Thus, the highest number representable as a fixnum is 536,870,911. If bigger numbers are needed (i.e., to calculate the national debt), bignums must be Bignums are integers that can be arbitrarily large, being limited only by available memory. Herein, bignums will be represented as dynamic lists. Scheme implementations are not required to support bignums [Rees et al, 1986], however MacScheme provides them. Despite that, we will implement them herein as an exploration of algorithms and LISP coding, and, to implement them in a more general fashion. Our specific goal will be to compute the factorial of 120. Beyond that, the rest is left to the reader. Here is what the desired example looks like using MacScheme s bignums (which one gets automatically when a number of type fixnum exceeds its size boundary). MacScheme Top Level >>> (define fact (lambda (n) (if (zero? n) 1 (* n (fact (1- n)))))) fact >>> (fact 120) An aside: a function called time-it will be used later to determine a function s execution time. In the case of fact (above), it took about 0.1 seconds on a Macintosh IIcx with 8 megabytes of RAM in MacScheme+Toolsmith version 2.0. Since our bignums will be dynamic lists, and dynamic lists come for free in LISP, we ll just use LISP lists. For example, to represent 123 we ll do it like this: (1 2 3). At least as far as the user level digit (power) ordering is concerned. But is that the best digit ordering for internal purposes? LISP is good at dealing with the head of lists, but not so good at dealing with the ends of lists, and since numbers can be any length, when adding (1 2 3) to (9 1 3 5 2 1 5 3 6) what internal digit ordering would be most desirable? How about having the car (head) of each list represent the lowest digit (digit times 10 to the 0th power)? That way, the car of any two lists, regardless of the sizes involved, would always represent digit positions of the same power, and by cdring (cdr is a function that returns the rest of a list after the first element has been excluded) through them we could get to the higher digit places in perfect synchrony. (Note also that this choice of digit ordering obviates the need to normalize bignums before and after arithmetic operations.) So, the first thing needed is a routine to convert from the user s digit ordering to our chosen internal digit ordering: normalize. To convert back to the user digit ordering: unnormalize. (Note that the definition of normalize and unnormalize will turn out to be nothing more than Scheme s reverse function, which reverses the elements of a list.) We will be building up the routines we need by building up substrates. Let s start on the first substrate and see how many will be needed after making the first substrate. This will be called the big substrate . It might prove a good tack to start by implementing only unsigned bignums, because even after other substrates are made, it could be that some functions consuming these substrates will want simply unsigned bignums, thus they can call directly into the lower, and presumably faster, first substrate. More importantly, it seems theoretically pleasing since many formal systems, such as primitive recursion, have only non-negative integers. In pure l-calculus there are no numbers whatsoever, however they can be bootstrapped, and when they are it s the nonnegative integers that get derived first. So, let s make the first substrate handle only unsigned bignums. Let s say we want to be able to represent numbers in bases other than 10. Let s say base 16,384 (why that s such a magical number will be explained later). The highest digit we can have in base 16,384 is 16,383! Yikes! We need a way to delimit digits which require more than one token. I chose angle brackets. Below is the factorial of 120 in base 16,384 (excerpted from the test suite). Note how the trailing zeros (which represent 8 digit positions) are not in angle brackets because they don t need to be. In the case of base 16, I kept with the convention of using ASCII characters: 10 = a, 11 = b, ..., 15 = f. This scheme is continued until the digits get so large that the set of reasonable ASCII characters we have to draw on gets exhausted. The interesting thing about representing numbers in different bases is that the higher the base, the faster the computations can be done. However, to convert from a higher base to a lower one (as it is done in this article s code) takes time, so there are tradeoffs. Also, if these substrates were used to make big floats (arbitrarily accurate floating point numbers) we could make good use of different bases-it is sometimes desirable to represent certain numbers in one base versus another to avoid infinite division problems (i.e., 1/3 when floated gives a repeating digit in base 10 but not in base 3). The routine to make bignums in the first substrate is make-big which normalizes its argument, and optionally converts it from any base to any other base. Conversely, bignums can be displayed in human readable form using show-big (as in above example). A note on style: In keeping with Scheme tradition, all pure predicates (functions which answer a question by returning canonical true: #t or canonical false: #f) end in an interrogative mark (i.e., no-digits?). Impure predicates (functions which answer a question with other than #t or #f) I chose to end in double interrogatives (i.e., base-10??). The program s data structuring was done abstractly. Abstract data structuring is a technique for making the reading, writing, and modifying of code easier, especially in big, complicated software systems. Reading one s own code is like a dog eating its own vomit. Reading someone else s code is like a dog eating another dog s vomit (grodymax!). Consider a program to keep track of widgets. Let s say you keep track of widgets by putting them in an array. If every time you access a widget, you make an explicit reference to the array that houses them, modifying and understanding the code could be difficult later, because the understanding and conceptualization of data structures is spaghetti-coded in with the rest of the program. That means if you wanted to change data structures, you d have to do major surgery on the program and have a more intimate understanding of the code than you d probably want to. To ameliorate such difficulties, one can (here s that ever so important word again) abstract out the data structuring from the rest of the program code! For instance, one could write a function make-widget-data-structure, and create accessors for it like: get-widget. If all references to data structures go through such abstract functions, modifying the data structuring is much easier. Not only that, but writing code in the first place is easier as well. For a good discussion on abstract data structuring, see [Abelson et. al., 1985]. No macros were used because I think macros represent weaknesses in current languages that need to be fixed, and they cause problems: If you update a macro, you must recompile all consumers of that macro to get the effect of the change-that introduces order matters issues that I find unpalatable. Also, macros aren t first class (which means you can t return them, nor pass them as arguments, which of course means you can t map them, etc.). Furthermore, they don t show up in the debugger (since they are gone after compilation). Nonetheless, if you convert the data constructors and accessors (etc.) to macros, the timings will show goodly speed improvements. Speed improvements could also be had in the code by getting rid of defaulted parameters at the big and bignum substrates. These are redundant and are only in the code to allow users to more conveniently explore the different substrates. Also, remove-leading-zeros gets called redundantly-perhaps there is some way to weed out some redundancy, perhaps not (left to the Department of Redundancy Department-I opted for correctness over efficiency). A note on running the code of this article. The code of the article resides in three files: bignums-big.sch , bignums-bignum.sch , and bignums-object.sch . The file bignums.sch contains a loader for the code that is akin to a sort of primitive defsystem (MacScheme doesn t provide a defsystem feature, but on systems that do provide it, it s analogous to Unix s make feature). In the file bignums.sch tweak the argument to the function load-bignum-system to the pathname where you placed the three files of the code on your Macintosh (i.e., what folder you placed it in). For me, this Then, evaluate all the forms in your tweaked version of file bignums.sch . Next, you will need to open the file bignum-tests.sch , be sure Copy to transcript on the Command menu is checked, then select Eval Window off the Command window, quickly selecting the transcript window with the mouse pointer in order to watch the test suites run. To perform addition is simple: Just add the car of one bignum to the car of another, then repeat the process on the cdrs of the respective lists. The only glitch is handling carries. But what exactly is a carry? When two digits are added, if the result is greater than or equal to 10, we have to pass along that overage when we call our add routine on the cdrs of the lists so that the next digits added can add it in. How do we find out what the overage (carry) is? The overage for us will look like: yx where y represents the tens digit and x represents the ones digit. More formally: y * 101 + x * 100, or more simply stated: y * 10 + x . With the result so represented, our job is clear: We want to peel off y (since y is the overage). How do we do it? A digression that will answer that question is in order here. Since we are talking about base conversions, we may as well look at the algorithm for converting bases. Basically we just divide by the base continually, collecting all the remainders. Here s how 15 would be converted to base 3. The quotient from the first division becomes the dividend of the next division. When we reach a dividend that is less than the base, we stop. 15 in base 3 is 120. If we cons each remainder, as we get it, into the recursive calls of the base conversion routine, we would collect: (0 2 1) which is nothing more than our internal representation for 120 (how convenient!). So, you can see that a base conversion routine could allow us to break out digit positions exactly as we need. We simply call such a routine with the result from the current digit position. The routine used for this purpose is fix-base-10->big-base-n. (Question for the reader: Could we instead employ a routine that treats numbers as token strings, and simply pull off the overage using string or symbol manipulation rather than arithmetic? Could we employ such a scheme for base 10? How about other bases?) To get the result that should go in the current digit position, we apply car to the result from fix-base-10->big-base-n, and pass the cdr of it to the next recursive call of the digit adder routine. (Note: if you pick a base bigger than *biggest-base*, you ll be using MacScheme s bignums! (And what do you suppose would happen if MacScheme didn t have bignums of its own?) *biggest-base* was chosen to ensure that all digit by digit intermediate operations done to implement add, mul, div, and sub have closure within the realm of MacScheme fixnums (i.e. produce numbers which are small enough be to representable as MacScheme fixnums). The routine find-biggest-base determines what the biggest base you can use is. To express numbers in bases bigger than *biggest-base*, you would need to make a meta substrate which consumes the three substrates herein to do its intermediate operations!) The numerical arguments to big-add might be lists of different lengths. I chose nested if tests to test for the end of first list, then the other list because that most accurately reflects my thinking while coding the algorithm. A concern: Note that big-add makes the assumption that there will never be more overage than one digit s worth when two digits are added. In other words, the following assumption is implicit: (<=? (length new-rem) 2) . Is this a valid assumption for all cases in all bases? It would seem okay for base 10: 9 + 9 = 18 which requires only two digits. Still, it would be nice to prove this assumption for the general case. Basically we need to show that b - 1 (which is the biggest digit we can get in any base) + b - 1 <= (b - 1) * b + b - 1 (which is the biggest quantity we can get in two digits worth in any base. b - 1 + b - 1 <=? (b - 1) * b + b - 1 2*b - 2 <=? b^2 - b + b - 1 2*b - 2 <=? b^2 - 1 2*b <=? b^2 + 1 b^2 - 2*b + 1 >=? 0 (b - 1) * (b - 1) >=? 0 (b - 1) * (b - 1) >= 0 for any integer b (QED) Indeed, the statement is true for any integer b. Since we re only interested in positive, non-zero bases, and since any non-zero positive integer makes the equation true, we re thoroughly safe. In other words, we can rest assured that any two digits added in any base will give only results that can be represented in two digits. While we re at it, let s do the same proof for multiplication, wherein we need to prove that (b - 1) * (b - 1) <= (b - 1) * b + b - 1. This seems likely for base 10 since 9 * 9 = 81 which is well less than 99 (99 being the biggest number we can represent in two digits in base 10). (b - 1) * (b - 1) <=? (b - 1) * b + b - 1 b^2 - 2*b + 1 <=? b^2 - 1 -2*b + 1 <=? -1 -2*b <=? -2 b >=? 1 b >= 1 for any counting number (QED) So for multiplication, as long as the base is greater than or equal to one, we re okay! (Exercise for Zippy: Explain base one!) Now we consider subtraction, which you might expect to be much harder. We implemented addition in big-add the same way we do it by hand. Let s look at how we humans do subtraction, then see if we can code a computer algorithm the same way. When the minuend digit is greater than or equal to the subtrahend digit, the difference is trivial to compute because there are no borrows to worry about. When the minuend digit is less than the subtrahend digit, we have to borrow. This is where coding an algorithm could get hairy. Consider 1000000 - 1. Most people borrow through all the zero digits of the minuend. Yuk! This is contrary to LISP s strength at dealing with the head of a list because it might require us to look ahead quite a ways in a given list if we need to do a borrow. Let s look at an example in human readable (as opposed to our internal) form. In the case of 1000000 - 1 we would be wanting to convert (1 0 0 0 0 0 0) to (0 9 9 9 9 9 10). But if you think about it, a borrow from the minuend is the same as a carry added to the subtrahend. So, the conversion from (1 0 0 0 0 0 0) to (0 9 9 9 9 9 9 10) can be simulated, with the same effect, by converting the minuend to this instead: (1 0 0 0 0 0 10) as long as we convert the subtrahend from (1) to (1 1)! However, after the first digit subtraction, we have to borrow again. No problem, we just repeat the process every time we need to borrow. The advantage to doing the borrow in this way is that it fits the LISP style of car and cdr, and it allows us to get everything done in one trip through the list with no look aheads whatsoever. Furthermore, it s much simpler-not much more difficult than the carry was for the add algorithm. In fact, not only is it easier for LISP-it s also easier for humans! Not only do you not have to look ahead more than one digit for a borrow, but you re never borrowing anything other than 10. (In the customary method, when borrowing, you are sometimes borrowing 10, and sometimes 9). These schemes are mathematically equivalent. In the below, consider c to be the minuend digit from which a borrow was taken, and d to be the subtrahend digit. (c - 1) - d (Given. i.e., how most people do it.) c + (-1 - d) (Associative Law.) c + (-d - 1) (Commutative Law.) c - (d + 1) (Distributive Law.) I was introduced to this trick by a CU professor [Haddon, 1982] who presented it as a trick for humans doing subtraction, especially in other bases such as 16, 8, and 2). He also presented a second optimization, and although it doesn t matter for our implementation of bignums herein, I will show it anyway. After a borrow , subtract from 10 before adding. More tutorially: After a borrow, most people add the borrowed quantity to the minuend digit that needed the borrow. Then the subtrahend digit is subtracted from the aforementioned sum. Let s say that c is the minuend digit that needed the borrow, and d is the subtrahend digit. We just borrowed a 10. Normally, people do this: (10 + c) - d , and I m suggesting this instead: (10 - d) + c . These are mathematically the same. (10 + c) - d (Given.) 10 + (c - d) (Associative Law.) 10 + (-d + c) (Commutative Law.) (10 - d) + c (Associative Law.) It is customary to double check one s answer by adding the difference one obtains with the subtrahend to see if their sum is equal to the minuend of the subtraction problem. Note what happens when we try that after doing subtraction in the optimized style: The tick marks from the borrows (of the subtraction) turn out to be right where the carries need to be! In fact one can do the double check in one s head without actually writing anything! Another nice creature feature: no digits got stroked through! In the usual style of subtraction, if a digit gets borrowed from, it gets stroked through and one less than the digit gets penciled in above it. Above: The same subtraction problem done using the customary method. Note that in the optimized form of subtraction, both the minuend and the subtrahend could get numbers penciled in above digits, but the only penciled in digits possible are 10 and 1. In the customary method, only the minuend can get penciled in numbers above a digit, and possible penciled in numbers can span a range of different values (Question to the reader: What is the possible range?). Worse yet, the penciled in numbers can get modified, as in the case of the 3 in the minuend which first became 2 then 12. Rationale for subtracting via the optimized method: (10 - d) + c is much easier for humans to compute than (10 + c) - d . This is the case in any base, but most especially in higher numbered bases like base 16 and beyond. Here s why: One has to memorize fewer subtractions. Let s compute exactly how many operations are required in base 10 for both methods of subtraction. The first case is where no borrow is required. Let s omit the cases: n - 0 = n and n - n = 0 since they are degenerate cases which can be computed trivially by viewing them as a special case. If we have a 9 in the minuend digit, we have to know how to subtract 8, 7, ..., 2, 1 from it. That s a total of 8 subtractions. If we have an 8 we have to know how to subtract 7, 6, ..., 2, 1 from it. That s a total of 7 subtractions. In the case of 2, we can only subtract 1 from it, and in the case of 1, we ve arrived at the degenerate case we re not counting. As you can see, the sum of all these subtractions we have to memorize is 8 + 7 + ... + 1 . Reversing that we are just summing the first n counting numbers wherein n = 8. There is a formula for computing this: n * (n + 1) / 2 (The derivation of the formula is quite cute, but that would be too much of a diversion.) So, for n = 8, we have: 8 * 9 / 2 = 36 , which means we d have to memorize 36 cases when subtracting with no borrows. When there is a borrow, using the usual style of subtraction: 9 never causes a borrow, 8 will if the subtrahend digit is 9 (thus the subtraction needed here is 18 - 9 = 9), 7 will if the subtrahend digit is 8 or 9, 0 will if the subtrahend digit is 1 through 9. The pattern is a forward summing of the integers from 1 to 9, so 9 * 10 / 2 = 45. When there is a borrow using the optimized style of subtraction, one need only know how to subtract from 10: 10 - 9, 10 - 8, ..., 10 - 1. A total of 9 subtractions. After doing the subtraction from 10, one has to do a trivial addition wherein c + d <= 10 - 1. This is truly a trivial addition when compared with what one must know for addition: c + d <= (10 -1) + (10 - 1) = 2 * 10 - 2 , which is certainly more additions than is required in c + d <= 10 - 1. To compare: The conventional style requires 36 + 45 = 81 subtractions one must memorize, whereas the optimized style requires only 36 + 8 = 45. 81 versus 45? 45 + 45 * x = 81 ; 45 * x = 81 - 45 ; x = (81 - 45) / 45 = .8 which means the conventional style requires one to memorize 80 percent more subtractions! Holy cow, no wonder Zippy can t subtract! Note that leading zeros can arise during subtraction. These could be ignored but accumulating too many could be inefficient, besides which everyone likes to see bignums without leading zeros. Therefore function remove-leading-zeros is used. Removing leading zeros forces another interesting issue: what is the representation for zero? The logical choice is: ( ) which is the empty list-this makes things work out nicely when recursing. By the way, list numbers can be used in conjunction with these bignums to allow numberless bignums (in which case bignums would be lists of lists but showing that is beyond my game plan for this There are different ways of doing subtraction, based on different representations of negative numbers. The most popular of these are the complement schemes. I rejected the use of complement schemes herein because they rely on static limits, which would have added some complexities-specifically, numbers of different lengths would have had to have been sign extended in some manner. Let me motivate two complement systems by showing how they are similar to the optimized form of subtraction shown herein. WIBNI (Wouldn t It Be Nice If) we had greater consistency in our subtraction algorithm? If you recall, our subtraction algorithm goes digit by digit and distinguishes two cases for x - y (where x and y represent digits); Case one: the subtrahend (y) is greater than the minuend (x), Case two: the subtrahend is less than or equal to the minuend. Case one requires a borrow, and the optimized method of subtraction performs x - y as though it was (10 - y) + x. Well, we can t get rid of case one, but can we massage case two into an instance of case one? Absolutely! If we have the equation x - y = z (again, where x and y represent digits) there is no reason why that equation can t be massaged into something else, as long as we do the same thing to both sides. Since case one does x - y as though it is (10 - y) + x, our goal is clear: we want to massage the LHS (Left Hand Side) of x - y = z into (10 - y) + x, then see what the resulting RHS (Right Hand Side) looks like. x - y = z [Given.] -y + x = z [Commutative Law.] 10 - y + x = z + 10 [Addition Theorem of Equality.] (10 - y) + x = z + 10 [Associative Law.] So, we can do both case one and case two the same way: (10 - y) + x, as long as we subtract 10 from the result. Let s try it by running through case one and case two with specific examples. Case one: 2 - 6. x = 2, y = 6. (10 - y) + x = z + 10; (10 - 6) + 2 = z + 10; 4 + 2 = z + 10; 6 = z + 10; z = 6 - 10; z = -4. Note that when we had z = 6 - 10 we couldn t simply peel off the tens digit because it wasn t there (or we could consider the tens digit as being 0). This means we had a deficit. Case two: 6 - 3. x = 6, y = 3. (10 - y) + x = z + 10; (10 - 3) + 6 = z + 10; 7 + 6 = z + 10; 13 = z + 10; z = 13 - 10; z = 3. Note how easy 13 - 10 is to perform-it amounts to simply throwing away the tens digit position. There is a correlation between the result in the tens digit and the sign of the result. And it would be nice to represent sign information, especially since we can create complements that look like positive quantities-it would be nice if we could encapsulate sign information into each number so we don t have to mentally keep track of what s negative and positive. Perhaps the result in the tens digit can be used as a sign. However, it seems backward-we d really like to have a 0 in the tens digit on results that are positive-that way positive numbers don t have to be altered for sign information, just negative numbers need be altered. So, let s say a 0 in the tens digit means positive. Now, the question becomes what to do for negatives. How about appending a 1 for negative numbers? That means adding 10 for a sign (because we want the sign in the tens digit place), after doing a complement. Let s try it. Case one: 2 - 6. 10 + (10 - 6) + 2 = z + 10 + 10; 10 + 6 = z + 20; 16 = z + 20; z = -4. Case two: 6 - 3. 10 + (10 - 3) + 6 = z + 10 + 10; 10 + 13 = z + 20; 23 = z + 20; 3 = z. So, as you can see, at the z + 20 = ? stage, we can tell the sign. If the result at that point has a 1 in the tens digit, we know the result is negative-if a 2, we know it s positive. But that s not exactly what we want. We d rather have a positive result be positive without any special effort-so we want a 0 in the tens digit. How can we achieve that? Well, there is no positive number we can add to 1 to get 0. However, 1 + 9 = 10, so if we use 9 as the negative sign that s much closer to what we want. That means (in our case) we want to add 90 to the result of the complement process. Let s try it. Case one: 6 - 3. 90 + (10 - 3) + 6 = z + 10 + 90; 97 + 6 = z + 100; 103 = z + 100; 03 = z + 00; z = 3. Case two: 2 - 6. 90 + (10 - 6) + 2 = z + 10 + 90; 94 + 2 = z + 100; 96 = z + 100; z = -4. Aha! That does the trick. At the stage where we had a result on the LHS, all we had to do was look at the tens digit to tell the sign. For the positive result, we had a 0 there, and a 9 for the negative result. That s what we want. In the case of the positive result, we had a digit beyond the tens digit-we can simply ignore that (the justification for ignoring it is clear-ignoring it is the equivalent of subtracting the 100 from each side-this is just a short cut allowed us when we use 9 as the negative sign! Nifty! Note one requirement of this system: since we prepended sign information to the number itself, we had to decide on static limits that the number can take on! Now, we want to consider examples of multiple digit numbers and try this sign system on them. To do that I will digress and show a different system, then tie the new system into the previous system. Instead of subtracting 10 - digit, we could do something even simpler: 9 - digit. That s even less subtractions that we have to memorize, and, it allows us to handle multiple digits quite easily. We simply would need a 9 for each digit. For instance, 4398 would require 9999. So, we could do this: 9999 - 4398 = 5601. Going back to our original equation we used to justify complement systems, it would look like this: x - y = z; nines - y + x = z + nines. So, if we had 67062 - 4398 = z, then: 9999 - 4398 + 67062 = z + 9999; 5601 + 67062 = z + 9999; 72,663 = z + 9999; z = 72,663 - 9999; z = 62,664. This is nine's complement! It would be nice to find a shortcut for the step wherein we subtract out the 9999 from the result to get the final z. Since 9999 = 10000 - 1, we could do this: z = 72,663 - (10000 - 1); z = 72,663 - 10000 + 1; z = 62,663 + 1 = 62,664. We could go a step beyond the above however: x - y = z; (nines + 1) - y + x = z + (nines + 1); ((nines - y) + 1) = z + (nines + 1) [Commutative and Associative Laws]. Note that if you have a bunch of nines, and you add 1 to it, you wind up with 10 to some power. This is ten's complement! A alternative way of computing a tens complement, is to do a nine's complement, then add 1. (This brings us around full circle. At first, we started out wanting to simplify our optimized form of subtraction to merge the 2 separate cases it uses to handle digits into one unifying case. We did that for digits by essentially coming up with ten''s complement for digits, then we did nines complement, and showed how nines complement plus one gives us ten's complement for the general case of arbitrary numbers of digits. There are other ways of motivating and arriving at these conclusions, but this is how I chose to do it.) Here s yet another shortcut. Before, we figured out the sign and dealt with it as a separate entity, and did complements based on the number of actual digits (excluding the sign digit). Instead of doing that, we can just pretend the sign digit is one more digit to be complemented. Let s go back to an old example: 33 - 54. Let s actually put the positive sign on: 033 - 054. Now, go from there mechanically, without thinking of the sign as being any different than any other digit: 033 + ((999 - 054) + 1) = z + (999 + 1); 033 + (945 + 1) = z + (999 + 1); 033 + 946 = z + (999 + 1); 979 = z + (999 + 1); z = 999 - 979 + 1; z = 20 + 1; z = 21. If we have a complemented number, and want to add it to a number that has more digits, we need only sign extend the complemented number to match the number of digits in the number we want to add it to. In other words: -03 = 97; -003 = 997; -0003 = 9997; in general -03 = 9...97 etc.. (Convince yourself as to why this is so.) Note that in base two this process degenerates into simple operations that computers can do quickly and easily. Instead of nine's complement, we do a one's complement, which is cheap and easy. Most computers have two's complement operations wired in. We use the highest order bit for the sign bit. One thing that our complement system must consider is closure. For instance, if we have -03 - 08, wherein the leading zero represents the sign digit, that operation goes like this: -03 - 08 = -03 + -08 = 99 - 03 + 1 + 99 - 08 + 1 = 96 + 1 + 91 + 1 = 97 + 92 = 189. Then we throw away the digit beyond the sign bit, leaving 89. What s that? Something s wrong! The sign digit changed. We know that a negative plus a negative must yield a negative, and it didn t. This condition is underflow-the result could not be represented within the static limits we represented the operands in. We can detect overflow/underflow by checking the signs of the operands, and checking the result s sign. If the signs of the operands are different, we know that the result is closed under addition (i.e., we can t have overflow/underflow). If the signs are the same, we don t have closure under addition, and must check to be sure that the sign of the result is the same as the sign of the operands. If not, an overflow/underflow error should be raised. Of course, if we can represent numbers dynamically, overflow is not a problem for positives. Question for the reader: can we solve this problem for negatives by using dynamic lists and adding on a 0 digit beyond the sign bit before complementing, then throwing away the highest digit of the result? What kind of a range can be gotten from ten's complement? The truth table can be written out by enumerating the range of the numerals first with a positive sign, then with a negative sign. After doing so, the negative numbers can be complemented to see what they represent. Notice that 0 has a sign (since its representation has a 0 in the sign digit), but in reality 0 is neither positive nor As can be seen above, the number of values represented in n numerical digits, is 2 * 10n because there are 10n representable values with a positive sign, and that same number of values is representable with a negative sign. Notice however that 10n - 1 of those are positive while 10n are negative! Note that nine's complement gives two representations for 0! Note also how that changes the range (left to the reader). Are bignums simpler when implemented as they are herein? Or would they be simpler if implemented using the ten's complement scheme? Faster? Slower? Note: I didn t include the implementation of bignums using ten's complement because I didn t want to bother. Er...uh....I mean...I wanted to leave that as an exercise for the reader ( Yeah, yeah, that s the ticket! ). Despite all the mathematical knowledge we have, the human method of multiplying is still used in computer algorithms: Multiply, shift, add. (Open ended question for the Überprogrammer: Is there a better way? What do you think of Booth s algorithm?) Notice that in base 2, this process degenerates into shift and add. I used two routines to aid multiplication. First, I wrote a routine to multiply a bignum by a digit. Shifting is easy-since the lower positions are at the head of the list, we can shift it over by consing ( pushing ) zeros into it! The bignum add routine was already written, so all that was needed after the support functions were written, was the routine to call them all and organize them. One other entirely optional feature I added was a check to reorder the arguments if necessary in order to assure that the multiplicand has more digits in it than the multiplier. I performed no tests to see if this is actually an optimization or not-it could be that the overhead for determining which argument has more digits is more than the potential gains from such an optimization (if any). However, I consider it a conceptual optimization (even if the optimization doesn t pan out when implemented on a computer) because it mimics the heuristic a human would use. The test used to compare the number of digits is less-digits? which doesn t care about digit positions numerically (only symbolically) and does short-circuit (lazy-like) logic-as soon as the end of either number is reached a conclusion can be arrived at regarding whether the first argument to less-digits? has less digits than the second argument. If Scheme s length function had been used, both arguments would have to be completely traversed. The Scheme code herein for big-div, is by far the hairiest of the arithmetic operations. Basically, the division algorithm used herein is akin to that done by a human, including having to guess at each digit of the quotient. The guessing algorithm is the most interesting part of the code. For each quotient digit, an intermediate dividend must be selected. This is done by pushing the first digit of the master dividend into any remainder from subtracting the last quotient digit times the divisor from the last intermediate dividend. If the divisor is greater than the current intermediate dividend, then the quotient digit for that intermediate dividend is 0 and the process continues by recursing on the rest of the master dividend s digits. If the divisor is not greater, then there are two possible cases for the intermediate dividend; 1) The divisor and intermediate dividend has the same number of digits (leading zeros excluded). In this case, we are guaranteed to get a quotient digit guess that is greater or equal to what the current quotient digit should be by simply finding the quotient of the first digit of the intermediate dividend divided by the first digit of the divisor. The reason why is that any digits in the positions not being looked at in the divisor can only serve to increase the value of guess times divisor which in turn can only serve to increase the likelihood that the guess will be too big rather than too small. The digits not being considered in the intermediate dividend can t possibly matter enough to ever make our guess too small-indeed, for those digits to be of enough consequence to give us an underestimate in our guess, they would have had to have been big enough to force the digits being considered to be bumped up by 1, but obviously they weren t! (Sounds rhetorical, until you think about it.) 2) The number of digits in the intermediate dividend is one greater than the number of digits in the divisor. In this case we can t make a reasonable guess by looking at first digits. We need to find the quotient of the first two digits of the intermediate dividend and the first digit of the divisor. Just as above, this will give a quotient digit guess that is greater than or equal to the correct digit. Since the bandwidth of a fixnum is enough to accommodate two of our bignum digits (the value of *biggest-base* was picked deliberately to assure this), we can stuff two digits into one fixnum and use MacScheme s quotient function just as we did in case two above. If the result is greater than 10, the guess should be 9 because that s the biggest number we can represent (in base 10) in one digit, otherwise, we just take the guess as it is. Let me state a more formal proof of why the above cases will always give quotient digit guesses that are greater than or equal to the correct quotient digits. Given qa / xp wherein q, a, x, and p stand for variables that occupy digit positions, and b is the base value that variables q and x reside at. So in other words, qa is worth q * b + a, and xp is worth x*b + p. The quotient digit is picked by quotient(q, x). Let s consider the case of p = 0. Basically, the claim that the quotient digit guesses are greater than or equal to the correct digits is true, can be expressed like this: remainder(q, x) * b + a < x * b. Let s consider two cases of remainders. One wherein the largest possible remainder is generated, and the other wherein the smallest possible remainder is generated. Case one: (x - 1) * b + a < x * b; x * b - b + a < x * b; a < b. Case two: 0 * b + a < x * b; a < x * b. As can be seen, by starting with a symbolic expression of the claim, and massaging it, we arrive at true statements for both case one and case two. This reasoning works wherein a and p stand for no digits or any number of digits. Also, by allowing q to stand for two digits considered as one conglomerate digit, this reasoning can be applied to both cases of picking quotient digits; the case wherein the current dividend has the same number of digits as the divisor, and the case wherein the current dividend has one more digit than the divisor (and again, those are the only cases that are possible wherein the divisor is less than or equal to the dividend, leading 0s excluded). After a quotient digit is arrived at, it is multiplied by the divisor and compared to the current dividend to see if it is greater. If it is, then 1 is subtracted from the quotient digit, and the new quotient is checked. When the correct quotient digit is found, it is multiplied by the divisor, and that result is subtracted from the current dividend to give a remainder which will be used by the next recursive call. Note that it s good that the quotient digit guesser never guesses too low. Low guesses are more expensive to deal with because you have to multiply the quotient digit by the divisor and subtract it from the current dividend, then the resulting remainder has to be compared to the divisor, whereas if it is known that the quotient digit guesses are too big, we can skip the subtraction! That saves a lot of time. When I first had a guesser that could go high or low, the times for division were appreciably slower. Note that the recursive process of big-div stops only when the dividend is eq to the symbol done. Rationale: just because the dividend is empty (i.e. is big-zero) does not mean we re done-we still have a quotient digit from a previous call that needs to be pushed into the result, even in the case where the master dividend started off being big-zero (in which case we ve got a result of 0 and a remainder of whatever the divisor was). The purpose of the second substrate is to provide signed bignums. The sign is attached by consing it into the head of each list representing a bignum. This only need be done for negatives (that way the numbers created by the first substrate are compatible with this substrate-they don t need to be modified. After the sign frobbing is done by substrate two, substrate two calls out to substrate one functions to do the rest. Simple! And, it doesn t alter any numbers created by substrate one! The attaching of a sign does not side effect the original number. The purpose of the third substrate is to allow for numbers in different bases to be participants in the same arithmetic operation. This requires two things; 1) The ability to tag base information into a number, and 2) The ability to coerce numbers from one base to another. In the case of base coercion, the participant with the highest base is the most contagious. This is done because higher bases result in faster computation times. Note that 0 and 1 are the same in every base! Therefore, these two special numbers get assigned a special base called all . In the case of base information, that could be attached as one more piece of information like a sign. However, that seems like a bit of a kludge. A more appealing approach is to resort to object oriented programming. Viewing numbers as objects nicely meets the demands created by devising truly general numbers. Here s the original example done in the highest substrate (taken from the test suites). >>> ; ;;; bignum-object tests. ; (define foo (time-it (bignum-object-fact (make-bignum-object (1 2 0))))) available space: 80152 time: 131.15 foo >>> ; (foo show-number) These substrates, in true Hollywood style, could have sequels. Substrate n could have arithmetic functions that can take 0, 1, or many arguments, and do type overloading. By type overloading I mean that the arithmetic operations could be called simply; add, mul, div, sub; and such operators could accept arguments of any type (thus the type slot in the substrate three objects). This would make the coercion functions quite tricky, if say, one had a truly generic math system that had type complex, type polynomial, type bignum, type rational, etc.. The objects could become more sophisticated too: a class hierarchy could be introduced: the functions that make objects could make use of inheritance and slot defaults. For instance, a rational number could inherit characteristics from the integer/bignum class. To generalize all the discussions in this article for the general case of base n, simply swap 10 for base and 9 for base - 1 . Thanks to John Koerber for allowing ideas to be bounced off of him ( and we all know how painful that can be... ); and for suggesting many changes to earlier drafts of this article, with the interests of readability, beginners, and nonLISPers at heart. Thanks to Henry Baker for making comments on an earlier draft. [Abelson et al, 1985] Harold Abelson and Gerald Jay Sussman with Julie Sussman. Structure and Interpretation of Computer Programs. MIT Press, Cambridge, Massachusetts, USA, 1985. [Auvil, 1979] Daniel L. Auvil. Intermediate Algebra. Addison-Wesley Publishing Company, 1979. [Haddon, 1982] Bruce K. Haddon. Course: CS 453 Assembly Language Programming. Colorado University at Boulder, Spring 1982. [Osborne, 1976] Adam Osborne. An Introduction to Microcomputers. Adam Osborne & Associates, Incorporated, Berkeley, California, 1979. [Rees et al, 1986] Jonathan Rees and William Clinger (editors). Revised3 Report on the Algorithmic Language Scheme; AI Memo 848a. MIT Artificial Intelligence Laboratory, Cambridge, Massachusetts, USA, September 1986. [Roth, 1979] Charles H. Roth, Jr.. Fundamentals of Logic Design second edition. West Publishing Company, 1979. MacScheme is put out by Lightship Software, P.O. Box 1636, Beaverton, OR 97075 USA. Phone: (415) 694-7799. The Code ; (define print (lambda (n) (display n) (newline))) ; (define base-10?? (lambda (base) (if (null? base) 10 (car base)))) ; (define to-base?? (lambda (bases) (if (or (null? bases) (null? (cdr bases))) 10 (cadr bases)))) ; (define normalize reverse) ; (define unnormalize reverse) ; (define bigify-digits list) ; (define no-digits? null?) ; (define first-digit car) ; (define rest-digits cdr) ; (define last-digit (lambda (n) (first-digit (last-pair n)))) ; (define push-digit cons) ; (define big-zero ()) ; (define big-zero? (lambda (n) (cond ((no-digits? n) #t) ((and (number? (first-digit n)) (zero? (first-digit n))) (big-zero? (rest-digits n))) (else #f)))) ; (define big-one '(1)) ; ; detokenize and normalize could be combined to save one ; trip through the argument list. I prefer the ; generality of not having them combined--tokenization ; is not a time critical routine. ; (define detokenize (lambda (x) (if (no-digits? x) big-zero (push-digit (if (number? (first-digit x)) (first-digit x) (- (char->integer (string-ref (symbol->string (first-digit x)) 0)) 87)) (detokenize (rest-digits x)))))) ; (define show-big (lambda (big) (if (big-zero? big) (print "0") (let ((first-thing (first-digit big))) (if (or (null? first-thing) (pair? first-thing)) (begin (show-big first-thing) (display "remainder ") (show-big (rest-digits big))) (let ((start-value (- (char-> integer #\a) 10))) (begin (for-each (lambda (x) (cond ((<? x 10) ; a (display x)) ((<? x 36) ; z (display (make-string 1 (integer->char (+ start-value x))))) (else ; out of tokens (display "<") (display x) (display ">")))) (unnormalize big)) (newline)))))))) ; (define find-biggest-base (lambda () (letrec ((iter (lambda (base bits-per-digit) (let ((base-minus-one (- base 1))) (if (not (fixnum? (* base-minus-one base-minus-one))) (if (odd? bits-per-digit) (quotient base 2) (quotient base 4)) (iter (* 2 base) (+ bits-per-digit 1))))))) (iter 2 1)))) ; (define *biggest-base* (find-biggest-base)) ; (define fix-base-10->big-base-n (lambda (n . base) (let ((base (base-10?? base))) (letrec ((recur (lambda (n) (if (<? n base) (bigify-digits n) (push-digit (remainder n base) (recur (quotient n base))))))) (recur n))))) ; (define compare-digits (lambda (first second) (if (no-digits? first) (if (no-digits? second) '= '<) (if (no-digits? second) '> (compare-digits (rest-digits first) (rest-digits second)))))) ; (define less-digits? (lambda (first second) (eq? (compare-digits first second) '<))) ; (define more-digits? (lambda (first second) (eq? (compare-digits first second) '>))) ; (define same-number-of-digits? (lambda (first second) (eq? (compare-digits first second) '=))) ; (define big-add (lambda (addend augend . base) (let ((base (base-10?? base))) (letrec ((recur (lambda (addend augend rem) (if (no-digits? addend) (if (no-digits? augend) rem (recur augend addend rem)) (if (no-digits? augend) (if (no-digits? rem) addend (recur addend rem ())) (let ((new-rem (fix-base-10->big-base-n (+ (first-digit addend) (first-digit augend) (if (no-digits? rem) 0 (first-digit rem))) base))) (push-digit (first-digit new-rem) (recur (rest-digits addend) (rest-digits augend) (rest-digits new-rem))))))))) (recur addend augend ()))))) ; (define big-sub (lambda (minuend subtrahend . base) (let ((base (base-10?? base))) (letrec ((recur (lambda (minuend subtrahend borrow) (if (no-digits? minuend) (if (no-digits? subtrahend) (if (=? borrow 1) (error "needed to borrow more than I had") ()) (error "(>? subtrahend minuend) => #t" minuend subtrahend)) (if (no-digits? subtrahend) (if (zero? borrow) minuend (recur minuend (bigify-digits borrow) 0)) (let ((dig1 (first-digit minuend)) (dig2 (+ (first-digit subtrahend) borrow))) (if (<? dig1 dig2) (push-digit (+ (- base dig2) dig1) (recur (rest-digits minuend) (rest-digits subtrahend) 1)) (push-digit (- dig1 dig2) (recur (rest-digits minuend) (rest-digits subtrahend) 0))))))))) (remove-leading-zeros (recur minuend subtrahend 0)))))) ; (define remove-leading-zeros (lambda (n) (if (or (no-digits? n) (not (zero? (last-digit n)))) n (letrec ((peel-off-zeros (lambda (n) (cond ((no-digits? n) big-zero) ((zero? (first-digit n)) (peel-off-zeros (rest-digits n))) (else n)))) (recur (lambda (n user-n) (if (no-digits? user-n) big-zero (push-digit (first-digit n) (recur (rest-digits n) (rest-digits user-n))))))) (recur n (peel-off-zeros (unnormalize n))))))) ; (define big-*-digit (lambda (big digit . base) (let ((base (base-10?? base))) (cond ((zero? digit) big-zero) ((=? digit 1) big) (else (letrec ((recur (lambda (big rem) (if (no-digits? big) rem (let ((new-rem (fix-base-10->big-base-n (+ (* digit (first-digit big)) (if (no-digits? rem) 0 (first-digit rem))) base))) (push-digit (if (no-digits? new-rem) 0 (first-digit new-rem)) (recur (rest-digits big) (rest-digits new-rem)))))))) (recur big '(0)))))))) ; (define big-mul (lambda (multiplicand multiplier . base) (let ((base (base-10?? base))) (if (less-digits? multiplier multiplicand) (big-mul multiplier multiplicand base) (letrec ((recur (lambda (multiplicand multiplier shift-amount) (if (no-digits? multiplicand) big-zero (big-add (big-right-shift (big-*-digit multiplier (first-digit multiplicand) base) shift-amount) (recur (rest-digits multiplicand) multiplier (1+ shift-amount)) base))))) (recur multiplicand multiplier 0)))))) ; (define big-right-shift (lambda (big n) (if (zero? n) big (push-digit 0 (big-right-shift big (1- n)))))) ; (define big-fact (lambda (n . base) (let ((base (base-10?? base))) (letrec ((recur (lambda (n) (if (big-zero? n) big-one (big-mul n (recur (big-sub n big-one base)) base))))) (recur n))))) ; (define last-digit? (lambda (big) (no-digits? (rest-digits big)))) ; (define big-compare? (lambda (first second predicate?) (letrec ((iter (lambda (first second) (if (no-digits? first) (if (no-digits? second) #f #t) (if (no-digits? second) #f (if (last-digit? first) (if (last-digit? second) (predicate? (first-digit first) (first-digit second)) #t) (iter (rest-digits first) (rest-digits second)))))))) (iter first second)))) ; (define equal-digit-big-compare? (lambda (first second predicate?) (letrec ((iter (lambda (first second) (if (no-digits? first) #f (if (= (first-digit first) (first-digit second)) (iter (rest-digits first) (rest-digits second)) (if (predicate? (first-digit first) (first-digit second)) #t #f)))))) (iter (unnormalize first) (unnormalize second))))) ; (define big-<? (lambda (first second) (let ((first (remove-leading-zeros first)) (second (remove-leading-zeros second))) (case (compare-digits first second) ((<) #t) ((>) #f) ((=) (equal-digit-big-compare? first second <?)))))) ; (define big->? (lambda (first second) (let ((first (remove-leading-zeros first)) (second (remove-leading-zeros second))) (case (compare-digits first second) ((>) #t) ((<) #f) ((=) (equal-digit-big-compare? first second >?)))))) ; (define big-=? (lambda (first second) (let ((first (remove-leading-zeros first)) (second (remove-leading-zeros second))) (case (compare-digits first second) ((>) #f) ((<) #f) ((=) (equal-digit-big-compare? first second =?)))))) ; (define big-<=? (lambda (first second) (not (big->? first second)))) ; (define big->=? (lambda (first second) (not (big-<? first second)))) ; (define big-div (lambda (dividend divisor . base) (let ((base (base-10?? base)) (dividend (unnormalize dividend)) (divisor (remove-leading-zeros divisor))) (let ((user-divisor (unnormalize divisor))) (letrec ((find-next-quotient-digit (lambda (dividend) (let ((dividend (remove-leading-zeros dividend))) (if (big->? divisor dividend) (push-digit 0 dividend) (letrec ((guess-next-quotient-digit (lambda () (let ((dividend (unnormalize dividend))) (let ((dig1-divisor (first-digit user-divisor)) (dig1-dividend (first-digit dividend))) (if (and (same-number-of-digits? dividend user-divisor) (<=? dig1-divisor dig1-dividend)) (quotient dig1-dividend dig1-divisor) (let ((guess (quotient (+ (* base dig1-dividend) (first-digit (rest-digits dividend))) dig1-divisor))) (if (>=? guess base) (- base 1) guess))))))) (iter (lambda (guess) (let ((subtrahend (big-*-digit divisor guess base))) (if (big->? subtrahend dividend) (iter (- guess 1)) (push-digit guess (big-sub dividend subtrahend base))))))) (iter (guess-next-quotient-digit))))))) (iter (lambda (dividend rem result) (if (eq? dividend 'done) (push-digit (remove-leading-zeros result) (remove-leading-zeros rem)) (let* ((new-quotient-digit (find-next-quotient-digit rem)) (new-rem (cdr new-quotient-digit))) (if (no-digits? dividend) (iter 'done new-rem (push-digit (first-digit new-quotient-digit) result)) (iter (rest-digits dividend) (push-digit (first-digit dividend) new-rem) (push-digit (first-digit new-quotient-digit) result)))))))) (iter (rest-digits dividend) (bigify-digits (first-digit dividend)) ())))))) ; 'done Living Planet - Tiny Planet Videos and Photos 1.0 Device: iOS Universal Category: Photography Price: $.99, Version: 1.0 (iTunes) Description: 50% OFF LAUNCH SPECIAL! BUY NOW BEFORE THE PRICE GOES UP... | Read more » Made by: Livescribe Price: $149.99 for Livescribe 3 Hardware/iOS Integration Rating: 4.5 out of 5 stars Usability Rating: 4 out of 5 stars Reuse Value Rating: 4.75 out of 5 stars Build Quality Rating: 4.5 out of 5 stars Overall Rating: 4.44 out of... | Read more » Unpossible Review By Carter Dotson on April 17th, 2014 Our Rating: :: RALPH WIGGUM APPROVESUniversal App - Designed for iPhone and iPad Unpossible is much better than its English! | Read more » Hitman GO Review By Carter Dotson on April 17th, 2014 Our Rating: :: GO HITMAN, GO!Universal App - Designed for iPhone and iPad Hitman GO is not the obvious way to do a mobile version of the Hitman series, but it’s an incredibly... | Read more » Monster High Ghouls and Jewels is a Freaky Fashion-Forward Match-3 Puzzler Posted by Rob Rich on April 17th, 2014 [ permalink ] | Read more » Dinosaur Train A to Z Review By Amy Solomon on April 17th, 2014 Our Rating: :: DINO DETAILSUniversal App - Designed for iPhone and iPad Dinosaur Train A to Z is an educational app about dinosaurs that includes In-App Purchases... | Read more » Easter Comes to Junk Jack X – Bringing New Crafts, Chemistry, and More Posted by Rob Rich on April 17th, 2014 [ permalink ] Universal App - Designed for iPhone and iPad | Read more » Call of Cookie Review By Jordan Minor on April 17th, 2014 Our Rating: :: COOKIE CRUMBLESUniversal App - Designed for iPhone and iPad Call of Cookie proves that plants aren’t the only fighting foods out there. | Read more » Corel Launches Video Editing App Pinnacle Studio for iPhone, Updates iPad Version for iOS 7 Posted by Tre Lawrence on April 17th, 2014 [ | Read more » Bad Vamp Review By Jennifer Allen on April 17th, 2014 Our Rating: :: BASIC VAMPIRIC ADVENTURESUniversal App - Designed for iPhone and iPad Run or destroy the vampires in this simple, single-screen game that lacks real bite. | Read more »
{"url":"http://www.mactech.com/articles/mactech/Vol.08/08.03/BigNums/index.html","timestamp":"2014-04-18T05:48:53Z","content_type":null,"content_length":"135346","record_id":"<urn:uuid:2b6e4c1c-4498-4f73-9c01-6357edf4d754>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
Can't raise x to the power of y This is a discussion on Can't raise x to the power of y within the C Programming forums, part of the General Programming Boards category; Hi, is there a way to raise x to the power of y (x^y), without using the math.h library and ... Hi, is there a way to raise x to the power of y (x^y), without using the math.h library and the pow(x,y) function? Teacher: "You connect with Internet Explorer, but what is your browser? You know, Yahoo, Webcrawler...?" It's great to see the educational system moving in the right direction Write pow() yourself. I don't think you want to do that. Just use pow() if you can. Use a for loop. Or use a for loop that squares and loops through the bits of y. There are 10 types of people in this world, those who cringed when reading the beginning of this sentence and those who salivated to how superior they are for understanding something as simple as Hi I'm trying to use the for loop, but i'm having trouble figuring out how to use it. Do you know what i'm doing wrong? Code: int n; int output[60]; int index = 0; int condition; int quotient; int size = 0; int nsize; int converted_number=0; int power; int i; printf("Enter base(b) between [1,10], and a number(n), such that digits of n is between[0,b-1] in this format 'n b': "); scanf("&#37;d %d", &n, &b); printf("n is: %d\n", n); //Test printf("b is: %d\n", b); //TEST nsize = n; if ((b < 1) || (b > 10)) { printf("Your base is not between 1 and 10"); return 0; } if (n < 0) { printf("n must be positive"); return 0; } while (nsize != 0) //counting the total number of digits { nsize = nsize/10; ++size; } printf("size of n is: %d \n", size); //testing to see the size while (size > 0) { output[index] = n % 10; n = n/10; printf("number to convert: %d\n", output[index]); for (i = 1; i <=index; i++) //FOR LOOP IN HERE!!! { power = output[index]*; //NOT SURE ON WHAT TO DO HERE converted_number += (power*output[index]); } printf("%d * %d^%d %d\n", output[index], b, index, converted_number); --size; index++; } printf("converted number is: %d", converted_number); return 0; } int n; int output[60]; int index = 0; int condition; int quotient; int size = 0; int nsize; int converted_number=0; int power; int i; printf("Enter base(b) between [1,10], and a number(n), such that digits of n is between[0,b-1] in this format 'n b': "); scanf("&#37;d %d", &n, &b); printf("n is: %d\n", n); //Test printf("b is: %d\n", b); //TEST nsize = n; if ((b < 1) || (b > 10)) { printf("Your base is not between 1 and 10"); return 0; } if (n < 0) { printf("n must be positive"); return 0; } while (nsize != 0) //counting the total number of digits { nsize = nsize/10; ++size; } printf("size of n is: %d \n", size); //testing to see the size while (size > 0) { output[index] = n % 10; n = n/10; printf("number to convert: %d\n", output[index]); for (i = 1; i <=index; i++) //FOR LOOP IN HERE!!! { power = output[index]*; //NOT SURE ON WHAT TO DO HERE converted_number += (power*output[index]); } printf("%d * %d^%d %d\n", output[index], b, index, converted_number); --size; index++; } printf("converted number is: %d", converted_number); return 0; } Code: int power(int base, int ext) { int i; int res=1; for(i=1; i <= ext; i++) res = res * base; return res; } here is a basic pow function. It can handle just int. You will have to work around to make it to work for all datatype. ssharish2005 int power(int base, int ext) { int i; int res=1; for(i=1; i <= ext; i++) res = res * base; return res; } x^y simply means, mutliply x by itself y times. Such that, 5^10 = 5 * 5 * 5 * 5 *5 * 5 * 5 * 5 * 5 *5 Quite easy to put in a loop. Last edited by zacs7; 08-30-2007 at 08:16 PM. Reason: Giving the answer This sounds like a homework problem, but I'll give you the hint that pow(x, y) can be rewritten in terms of other functions from <math.h>, so that's one way, if you're allowed to use those. Otherwise, if y is an integer, you can implement it in terms of a loop as described above. Oh, you mean it only has to work for integer values of y? Well I wont bother to post my solution then. My homepage Advice: Take only as directed - If symptoms persist, please see your debugger Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"
{"url":"http://cboard.cprogramming.com/c-programming/93115-can%27t-raise-x-power-y.html","timestamp":"2014-04-16T11:18:00Z","content_type":null,"content_length":"77323","record_id":"<urn:uuid:226f00a4-f438-4227-a1de-c66985fa7965>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
Brevet US6049762 - Standardizing a spectrometric instrument This invention relates to spectrometric instruments, and particularly to the standardization of spectral information generated by such instruments. Spectrometric instruments are used for a variety of applications usually associated with analyses of materials. A spectrum is generated in interaction with a sample material to effect a spectral beam that is characteristic of the sample and impinged on a photodetector. Modern instruments include a computer that is receptive of spectral data from the detector to generate and compare spectral information associated with the materials. The spectrum may be generated, for example, by a dispersion element such as a prism or a holographic grating that spectrally disperses light passed by a sample or received from a plasma or other excitation source containing sample material. Another type of instrument incorporates a time varying optical interference system, in which an interference pattern of light is produced and passed through a sample material that modifies the pattern. In such an instrument Fourier transform computations are applied to the detector signals to transform the modified light pattern into spectral data. The Fourier transform instrument is most commonly operated in the infrared range, in which case it is known as an "FTIR" instrument. With improvements in optics, detectors and computerization, there has evolved an ability to perform very precise measurements. Examples are an absorption spectrophotometer, a polychromator or an FTIR instrument that use chemometric mathematical analysis to measure octane number in gasolines. Differences in octane number are associated with subtle differences in near infrared (IR) absorption spectra. The very small changes in spectral characteristics cannot effectively be detected directly by personnel, and computerized automation is a necessity. It also is desirable for such spectral measurements to be effected continuously on line. Thus there is an interest in utilizing advanced spectrometry methods for analytical chemistry. A problem with high precision measurements is that instruments vary from each other, and each instrument varies or drifts with time. One aspect of the problem is achieving and maintaining wavelength calibration. A more subtle aspect is that the instruments have intrinsic characteristics that are associated with spectral profiles and are individual to each instrument and may vary with time. Intrinsic characteristics of the instrument distort the data, rendering comparisons inaccurate. In an instrument such as a polychromator with a dispersion grating, an intrinsic characteristic is typified by the profile of spectral data representing a very narrow, sharp spectral line. Such a profile has an intrinsic shape and line width wider than the actual line, due to the fundamental optical design as well as diffraction effects and other imperfections in the optics and (to a lesser extent) electronics in the instrument. An actual intrinsic profile may not be symmetrical. In a grating polychromator and similar instruments, the instrument profile from a narrow line source is often similar to a Gaussian profile. For other instruments such as FTIR, the intrinsic profile attributable to aperture size at the limit of resolution is more rectangular. U.S. Pat. No. 5,303,165 (Ganz et al) of the present assignee discloses a method and apparatus for standardizing a spectrometric instrument having a characteristic intrinsic profile of spectral line shape for a hypothetically thin spectral line in a selected spectral range. The instrument includes a line source of at least one narrow spectral line that has an associated line width substantially narrower than the width of the intrinsic profile. A target profile is specified having a spectral line shape for a hypothetically sharp spectral line, for example a Gaussian profile of width similar to that of the intrinsic width. The instrument is operated initially with the line source to produce a set of profile data for the line such that the data is representative of the intrinsic profile. A transformation filter is computed for transforming the profile data to a corresponding target profile, and is saved. The instrument then is operated normally with a sample source to produce sample data representative of a sample spectrum. The transformation filter is applied to the sample data to generate standardized data representative of the sample. Such standardized data is substantially the same as that obtained from the same sample material with any similar instrument, and repeatedly with the same instrument over time. Standardization according to the foregoing patent is utilized particularly with an instrument having the capability to utilize a source of one or more spectral lines, such as a Fabry-Perot etalon placed in the beam from the light source in place of a sample, so as to pass the spectral line to the grating or other dispersion element. In the case of certain other instruments including FTIR, it is possible but cumbersome to utilize such a line source for such a standardization technique. Conventional FTIR instruments are taught in textbooks such as "Fourier Transform Infrared Spectrometry" by P. R. Griffiths and J. A. de Haseth (Wiley, 1986). In these instruments, an interference pattern of light is produced with a Michaelson or similar interferometer comprising a beam splitter which is a partial reflector that splits white light into two beams. These beams are reflected back and recombined at the beam splitter. The path length of one of the beams is varied with time to produce a time-varied interference pattern. This light pattern is directed through an angle-selecting aperture and thence through a sample material that modifies the pattern. Fourier transform computations transform the modified pattern into spectral data representing intensity vs. wavenumber. (Wavenumber is reciprocal of wavelength and proportional to frequency.) The aperture generally should be as small as practical to minimize distortion of the spectral beam due to finite size of the aperture and the size and configuration of the light source, and other instrument features. The distortion has several aspects: ordinary broadening which is predictable but not generally corrected; wavelength shift; and pattern shape change due to reflections, alignment, flatness of mirrors, light source geometry, and the like. Distortions related to wavelength shift and shape change are addressed by the present invention. A very small aperture may sufficiently minimize distortion, but passes the light at too low an intensity, thereby requiring long term operations for sufficient spectral data. Therefore, normal operations are made with a larger aperture that introduces more distortion. A further characteristic of FTIR is that the limit of resolution (minimum line width) attributable to the aperture is a function of the spectral wavenumber, in particular being proportional to the wavenumber, viz. greater line width at higher wavenumber. To apply the transformation of the aforementioned U.S. Pat. No. 5,303,165 would require defining and storing a separate target profile for many increments in the wavenumber scale in the selected spectral range, and operating the instrument repeatedly or with a source of many lines to obtain the corresponding intrinsic profiles that would be applied individually to test data. This could be cumbersome for frequent restandardizations, and may substantially lengthen the computation times for every analysis with the instrument. An object of the invention is to provide a spectrometric instrument with a novel means for effecting standardized spectral information. Another object is to provide a novel method for standardizing spectral information from spectrometric instruments that intrinsically distort the data. Other objects are to provide a novel method and a novel means for transforming spectral data of the instrument so that spectral information is idealized for comparison with that of the same instrument at other times, or with other similar instruments. A further object is to provide such standardizing for an instrument where distortion of data is dependent on spectral wavenumber. Yet another object is to provide a computer readable storage medium with means for effecting standardized spectral information in instruments that incorporate computers. A particular object is to provide such standardizing for an interferometer instrument that incorporates Fourier transform. The forgoing and other objects are achieved, at least in part, by a method of, and means for, standardizing spectral information for a sample in a spectrometric instrument that effects an intrinsic distortion into spectral data. The instrument includes an optical train with spectral means for effecting a spectral beam responsively to a sample such that the spectral beam is characteristic of the sample, detector means for detecting the spectral beam to effect signal data representative thereof, computing means receptive of the signal data for computing corresponding spectral information representative of the sample, and display means for displaying the spectral information. The optical train includes an optical component that selectively has a standardizing condition or an operational condition, such condition having the intrinsic distortion associated therewith. The sample is selectable from a sample set that includes a test sample and one or more standard samples formed of a substance having true spectral data. An idealized function of spectral line shape is specified for a hypothetically sharp spectral line. Standard spectral data for a standard sample with the standardizing condition are obtained, and a standard function that relates the standard spectral data to the true spectral data is established. These functions and data are stored, advantageously at the factory, for future application to test spectral data to effect standardized spectral data. In operational situations, operational spectral data are obtained for the same or a similar standard sample with the operational condition, and this data is also stored for future application to test spectral data. The idealized function, the standard function, the standard spectral data and the operational spectral data are related with a transformation function. Test spectral data for one or more test samples are then obtained with the operational condition. Standardized spectral information for the test sample, corrected for the intrinsic distortion, is computed by application of the transformation function to the test spectral data. The standard function may be established theoretically or, more accurately, by use of another, basic sample having predetermined fundamental spectral data. In the latter case, the instrument is operated with the standardizing condition to obtain basic spectral data for the basic sample with the standardizing condition. The standard function is determined by a relationship with the basic spectral data and the fundamental spectral data. In another embodiment, operations with the standardizing condition are omitted, and the standard sample is formed of a substance having fundamental spectral data with a predetermined profile. An idealized function for spectral line shape is specified and stored with the fundamental spectral data for future application to spectral data. Operational spectral data for a standard sample are obtained in an operational situation, and stored. Without changing instrument conditions to change intrinsic distortion, test spectral data for a test sample are obtained. The idealized function, the fundamental spectral data and the standard spectral data are related with a transformation function. Standardized spectral information for the test sample, corrected for the intrinsic distortion, is computed by application of the transformation function to the test spectral data. The invention is particularly suitable for an instrument in which the spectral means comprises an interferometer for effecting a time-scanned interference beam passed through the sample to effect the spectral beam, and the spectral data is obtained by applying a Fourier transform computation to corresponding signal data. For such an instrument, the idealized function has a profile with a width proportional to wavenumber, so it is advantageous to specify the idealized function in logarithmic space independently of wavenumber. Similarly, the standard function is established in logarithmic space independently of wavenumber. Sample data are obtained by application of the Fourier transform to corresponding signal data to effect preliminary data, and computation of a logarithm of the corresponding preliminary data to effect corresponding sample data in the logarithmic space. The transformation filter is thus defined in the logarithmic space. The standardized spectral information is effected by computation of a logarithmic form of the test spectral data, multiplication of the logarithmic form by the transformation filter to effect a transformed form of the test spectral data, and computation of an anti-logarithm of the transformed form to effect the standardized spectral information. Objects are also achieved with a computer readable storage medium for utilization in standardizing spectral information for a spectrometric instrument of a type described above. The storage medium has data code and program code embodied therein so as to be readable by the computing means of the instrument. The data code includes an idealized function for spectral line shape, and standard spectral data obtained for a standard sample with the standardizing condition. The program code includes means for establishing a standard function that relates the standard spectral data to the true spectral data, means for storing operational spectral data obtained for a standard sample with the operational condition, and means for relating the idealized function, the standard function, the standard spectral data and the operational spectral data with a transformation function. The program code further includes means for computing standardized spectral information for the test sample corrected for the intrinsic distortion by application of the transformation function to test spectral data obtained for a test sample with the operational condition. Objects are further achieved with a computer readable storage medium for utilization in standardizing spectral information for a spectrometric instrument of a type described above, the storage medium having data code embodied therein so as to be readable by the computing means of the instrument. The data code comprises an idealized function for spectral line shape associated with the standardizing condition, and standard spectral data obtained for a standard sample with the standardizing condition. The idealized function and the standard spectral data have a cooperative relationship for application to test spectral data obtained for a test sample with the operational condition. In another embodiment the data code comprises hypothetical spectral information derived from true spectral data for the standard sample by application of an idealized function for spectral line shape, and a baseline for spectral data. The program code comprises means for effecting converted spectral data by application of the baseline to measured spectral data for a standard sample with the operational condition. The converted spectral information is available for computation of standardized spectral information therefrom by the computing means, and the hypothetical spectral information is available for comparison with the standardized spectral information. FIG. 1 is a schematic drawing of a spectrometric instrument used for the invention. FIG. 2 is an optical diagram of a component utilized in the instrument of FIG. 1 for generating an interference spectrum. FIG. 3 illustrates scales utilized for data in the invention. FIG. 4 shows a shape of a factor utilized in computations of the invention. FIG. 5 is a flow chart for a first embodiment of means and steps for computational aspects in the instrument of FIG. 1. FIG. 6 is a flow chart for a second embodiment of means and steps for computational aspects in the instrument of FIG. 1. FIG. 7 is a flow chart for a third embodiment of means and steps for computational aspects in the instrument of FIG. 1. FIGS. 8a, 8b and 8c are schematic diagrams of matrices representing filters for operating on spectral data in the instrument of FIG. 1. FIG. 9 is a portion of the diagram of FIG. 8b. FIG. 10 is a flow chart for determination of a component of spectral data in the charts of FIGS. 5-7. FIG. 11 is a flow chart for a further embodiment of the invention. FIG. 1 schematically shows a spectrometric instrument 10 utilized for the invention, the instrument generally being conventional except as described herein. An optical train 12 includes a spectrum generator or spectral means 14 that effects a spectral pattern 16 of some form in the range of infra-red, visible and/or ultraviolet light. The spectral means may be, for example, a dispersion element such as a prism or a holographic grating that spectrally disperses light received from a plasma or other excitation source containing a sample material, or from a sample material transmitting or reflecting light. In a preferable embodiment of the present invention, an FTIR instrument is utilized in which the spectral means 14 consists of an optical interference system in combination with means for applying Fourier transform computations to transform an interference pattern into spectral data. The spectrum (which herein is defined broadly to include an ordinary spectrum as well as a time-varied interference pattern) is further associated with a sample 18 to effect a spectral beam 20 that is responsive to the sample so that the beam is spectrally characteristic of the sample. Other optical components including focusing means such as a concave mirror or lens 23 generally are disposed in the optical train. For the present purpose, one such component introduces an intrinsic distortion into the spectral beam as explained below. The distortion component may be, for example, an aperture stop 22 (as shown), a shaped light source, an imperfect lens or reflector, misalignment of a optical components, or any combination of these. In FTIR, the distortion is primarily representative of finite size and shape of the light source as manifested through the aperture stop 22. The tandem order of optical elements is characteristic of the instrument but not important to this invention. For example, a variable distortion component (such as a variable aperture stop) may be disposed anywhere in the optical train, or the sample may be integral with the light source such as a sample injected into a plasma source. Also, the spectrum may be reflected by the sample to effect the spectral beam. Thus, as used herein, the term "transmitted by a sample" more generally includes reflection as an alternative. A detector 24 receives the spectral beam to effect signal data on an electrical line 26, the data being representative of the beam spectrum 20 as modified by the sample. The detector may be a conventional photomultiplier tube or solid state photodetector. A computer 28 is receptive of the data signals to compute corresponding spectral information representative of the sample. A display for the computer such as a monitor 33 and/or a printer displays the spectral information. The computer 28 may be conventional, such as a Digital model DEC PC 590, usually incorporated into the instrument by the manufacturer thereof. The computer should include a central processing unit 30 (CPU) with a analog/digital (A/D) converter 32 from the detector (the term "signal data" herein referring to data after such conversion). Sections of computer memory 34 typically include RAM, an internal hard disk and a portable storage medium 35 such as a floppy disk, CD-ROM, and/or a tape with relevant code embedded therein. A keyboard 36 is generally provided for operator input. The computer also may provide signals via a digital/analog (D/A) converter 38 to control the spectrum generator. One or more additional dedicated chip processing units (not shown), may be utilized for certain steps. For example in FTIR, a separate chip is used for the Fourier transform computations, and another for controlling alignment and the like. The present invention is implemented conveniently with the main CPU, utilizing data and program codes representing steps and means for carrying out the invention. Such codes advantageously are provided in a computer-readable storage medium such as the hard disk of the computer or a floppy disk that may be utilized with the computer of an otherwise conventional instrument. Programming is conventional such as with "C++" which generally is incorporated into the computer by the manufacturer of the computer or the instrument for conventional operations. Adaptations of the programming are made for the present invention. Programming from flow diagrams and descriptions herein is conventional and readily effected by one skilled in the art. The details of such programming are not important to this invention. As the computer computations may involve considerable data and therefore be extensive, a high performance processor such as an Intel Pentium™ of at least 100 MHz is recommended, although a 486 processor should be sufficient. The invention is suited particularly for incorporation into a Fourier transform infra red (FTIR) type of spectrometric instrument (FIG. 2) such as a Perkin-Elmer model Paragon 1000. Such an instrument normally is used for the range of 400 to 15,000 cm.sup.-1 (wavenumber) (25 to 0.7 μm wavelength range). In the spectrum generator 14, white light from a source 40, such as an electrically heated nichrome wire acting as a black body radiation source, is transmitted in the optical train through a first aperture 41 which becomes an effective source of light for the remainder of the optical train. The light continues through a collimator, such as a lens 42 or a mirror, and a combination of reflectors that constitute a conventional Michaelson interferometer. In this combination, the incoming white light beam 43 is split by a semi-reflective mirror 44 that reflects a first half 46 of the light beam and transmits the second half 48. The first beam 46 is reflected by a fixed mirror 50 back through the semi-reflector 44. The second beam 48 also is reflected back. This beam has a variable path length which may be accomplished in a simple system (not shown) by a second mirror 52 to reflect back the semi-reflector, the second mirror being movable on the light axis. For better precision and alignment, the second mirror 52 is fixed but offset, and a pair of angled reflectors 54, 56 is interposed to reflect the second beam to and back from the offset mirror 52. The angled reflectors are mounted in parallel and nominally about 45 main light axis 53 on a platform 55 that is rotatable about an axis 57 centered midway between the mounted reflectors. The platform is connected directly or through its axle to a motor 58 that rotationally oscillates the orientation of the parallel reflectors over a range such as about 10 under computer control. The number of oscillations in one data run is selected to provide sufficient spectral data, for example 16 cycles. The rotation varies the total path length of the second beam. The precise change in path length may be determined conventionally by a laser beam (not shown) interposed into the interferometer, or into another interferometer using the same pair of parallel reflectors, and counting nulls detected in the laser interference pattern (automatically by the computer if desired). Path length generally is changed up to about 10 mm in each oscillation, A portion of the first beam 46 passes through the semi-reflector 44. A portion of the second beam 48 is reflected by the semi-reflector to combine with the first beam and thereby effect a time-scanned interference form of the spectral pattern 16. The spectral beam may be folded if desired by one or more additional mirrors (not shown). The spectral pattern or interference beam 16 is passed through a lens 23 which focuses the beam at the aperture stop 22 and an adjacent sample 18 which may be a standard sample or an unknown test sample, for example an organic fluid such as gasoline. Due to spectral absorption by the sample, the spectral beam transmitted from the sample is characteristic of the sample. This beam is passed through a further lens 25 (or pair of lenses) that is disposed to focus the sample onto the detector 24. The focused beam is thus incident on the detector 24 which effects signals to the computer 28 (FIG. 1) in proportion to the beam intensity which varies according to the sample with the oscillation of the pair of reflectors 54, 56. One or more of the lenses may be replaced by concave mirrors with equivalent functions. The interference pattern, and thereby the spectral beam from the sample, is formed of a continuum of spectral wavenumbers that the computer digitizes into wavenumber increments. In a conventional FTIR instrument, the computer is programmed for Fourier transformation computations to sort the signal data into ordinary type spectral data representing a plot of intensity vs wavenumber. This data is processed further into corresponding spectral information representative of the sample. In computer computations, the spectral data generally is treated by matrix operations, in which the signal data is a vector and matrix filters are applied for the transformation. A typical computation system for Fourier transform is taught in the aforementioned textbook by P. R. Griffiths and J. A. de Haseth, particularly pages 81-120, incorporated herein by reference. Conventional wavenumber calibration is carried out separately, for example with the spectral line of a built-in He--Ne laser validated with a known sample (such as polystyrene) and is not part of the present invention. This calibration generally is incorporated into the transformation computations. Spectral data is designated herein by S which mathematically is a vector. For this data and for associated matrix factors and functions, a subscript "0" is for basic ("true") data for zero aperture stop size (not directly attainable), "M" is for measured data for any sample, "1" is for a standardizing (smaller) aperture stop, "2" is for an operational (larger) aperture stop, and "I" denotes idealized. A superscript "B" denotes a basic sample (for which fundamental spectral data is known), "C" denotes a standard sample, and "T" denotes a test sample which may be of unknown composition. Computer operations for multiplication and division steps may comprise direct multiplication or division of vector and matrix elements, or may involve rapid computation techniques such as convolution, deconvolution or related procedures, which are available in commercial programs such as MATLAB™ sold by Mathworks Inc., Natick, Mass. As used herein and in the claims, the term "multiplication" and its corresponding symbol "*" for matrix operations means either direct multiplication or a related procedure such as convolution. Similarly the terms "ratio", "division" and the corresponding symbol "/" for matrix operations means either direct division or a related procedure such as deconvolution. The intensity associated with any one wavelength in the interference beam from the interferometer is in the form a time-dependent sine wave representing the varying path length. The ends of the sine wave are truncated by the limits of the path change in the interferometer, e.g. the reflector rotation. Therefore, for better interpretation, data in current FTIR instruments are treated by multiplying the data vector by a matrix correction factor A, known as an "apodization" factor, representing a weighting function. The filter A is preselected by theoretical considerations as a mathematical function, for example in the form of a matrix representing a modified rectangle ("box car") in interferogram (time) space. The rectangle has a calculated width associated with the cutoff of the ends of a sine wave, known from the cycle limits of the interferometer mirrors. In wavenumber space A is a sinc function, i.e. A=sin(2πσL.sub.m)/(2πσL.sub.m) where L.sub.m is the maximum difference in optical path lengths of the split interfering beams in the interferometer. Correction for the aperture distortion in FTIR is made conventionally by a J-stop function J which is applied together with a conventional apodizing factor A to relate measured spectral data S.sub.M to fundamental or true data S.sub.0 by a formula S.sub.M =S.sub.0 *J*A. The factor A should be the same for all instruments for which spectral information is being compared. If this is not the case, the factors should be related by a conversion factor φ such that A.sub.a =φ*A.sub.b where subscripts a and b denote different instruments, or the same instrument operating under different conditions, with different factors A. It further should be appreciated that selection of A is not critical to the present invention, as long as it remains the same or is converted. As utilized with respect to the present invention, the factor A is applied to the initial spectral data within the same computational steps as the Fourier transform (FT) to effect the spectral data S.sub.M that is treated according to the invention. As used herein and in the claims, further reference to measured spectral data S.sub.M means such data after application of FT and the conventional (or other desired) apodizing factor A. Although preferably included with the FT, it is not critical where in the computational sequence the factor A (transformed if appropriate) is applied, and its inclusion in steps outlined below is to be considered equivalent to inclusion with the FT for the present purpose. The aperture stop 22 may be an actual physical plate with an orifice therein, or a virtual aperture stop with a size determined by other elements in the optical train such as a lens (or mirror) diameter that establishes the diameter of the collimated section of the beam. Thus, as used herein and in the claims, the term "aperture" as depicted by the element 22 means an effective aperture that is either virtual or actual (physical). If two sizes of aperture are used as described below, the smaller may be a physical aperture, and the larger also may be physical or may be virtual with the plate removed. The size of the aperture stop 22 in FTIR, known as a Jacquinot stop or "J-stop", is selected for normal operation to provide sufficient light for desired resolution in the spectral information, while being as small as otherwise practical to minimize distortion of the spectral beam due to finite size of the aperture and size and configuration of the light source as well as the sample. Hypothetical zero aperture stop would provide true or fundamental spectral data. The highest practical resolution, which is limited by the nature of the Fourier transform of the interference pattern, varies with instrument and, for example, may be 1 cm.sup.-1 at 6530 cm.sup.-1 for FTIR. (Units herein are wavenumber, i.e. reciprocal of wavelength; use of units of frequency would be equivalent, as would wavelength with appropriate conversion.) Such resolution typically is associated also with an aperture diameter of about 4.2 mm for an instrument with a focal length of 120 mm associated with the lens 23. Operation usually is carried out with a larger aperture size, for example 8.4 mm which can provide a resolution of 4 cm.sup.-1 at a spectral wavenumber of 6530 cm.sup.-1. The degree of distortion associated with the J-stop is proportional to the resolution. The different aperture stops may be fixed sizes and substituted, or a variable iris. For the larger, operational aperture, a theoretical estimate for the function J often is not sufficiently accurate or comparable for different instruments. Therefore, the present invention is directed to applying another modification to the computations to improve accuracy and sensitivity. The modification recognizes that the spectrum of an infinitesimally narrow spectral line in FTIR actually has a line shape that is narrow with a finite width representing the resolution. To effect the modification, an ideal J-stop function J.sub.I is selected which specifies an idealized spectral line shape for a hypothetically sharp spectral line. The shape should approximate an intrinsic profile for the instrument. For FTIR the idealized J-function preferably has a nominally rectangular profile representing the resolution width. Idealized spectral data S.sub.I is defined by a formula S.sub.I =S.sub.0 *J.sub.I. An objective of the present invention, for better reproducibity and sensitivity, is to determine the idealized data S.sub.I for a test sample, not the true data S.sub.0. The ideal J-stop function J.sub.I may be estimated according to optical theory, for example as a matrix representing a rectangle having a width proportional to wavenumber and related to aperture size. A rectangular ("box car") function for J is used ordinarily in FTIR instruments, for the reason that FTIR line shape (in wavenumber space) at the resolution limit is rectangular. A standard sample is selected that preferably has at least one known, well defined spectral feature over the desired spectral range. The feature should be such that a change in shape is observable between the standardizing and operational apertures. The water vapor in air is suitable, so that uncontained air in the instrument simply may be used for the standard sample. Another type of standard sample is a gas such as methane and/or carbon monoxide contained in a cell. Yet another suitable sample is fine powder of the mineral talc, for example mounted in a 0.3 mm thick clear polyethylene sheet in a concentration suitable to produce a transmission of 25% to 35% for the 3676.9 cm.sup.-1 line of the talc. Such mounting is achieved by melting the polyethylene containing the talc. The talc or other such standard sample should have adequate purity and morphology for spectral suitability. The fine talc powder is conventionally sized. The standard sample is preliminarily measured with the instrument using the standardizing aperture, for example at the factory for a commercial instrument. The measured, standard spectral data S.sub.1.sup.C for the standard is related to its true spectrum S.sub.0.sup.C by S.sub.1.sup.C =S.sub.0.sup.C *J.sub.1. The J-stop characterizing function J.sub.1 for the standardizing aperture is determined in one of several ways: as a theoretical J-stop function or, preferably, with a standard sample with known basic (true) data S.sub.0.sup.C or, alternatively, by way of a further, basic sample with known fundamental (true) data S.sub.0.sup.B. (The terms "true" and "fundamental" are used herein to distinguish between a standard sample and basic sample, and otherwise are equivalent.) The standard sample also has an idealized spectrum S.sub.I1.sup.C given by S.sub.I1.sup. C=S.sub.0.sup.C *J.sub.I1 which is determinable from S.sub.I1.sup.C =S.sub.1.sup.C *(J.sub.I1 /J.sub.1). The function J.sub.I1 is a preselected, idealized J-function conversion factor associated with the smaller aperture. This function preferably is utilized for reasons of numerical stability in the computations as explained below with respect to Eq. 1a. Also, as this function cancels out in the computations, its exact form is not critical. The two J functions and the standard spectral data are stored in a selected format whereby either S.sub.I1.sup.C is computed and stored or (preferably) the components in its equation are stored permanently for the computer (e.g. on disk), for future application to test sample data during normal operation. Further standardizing is done in association with operational use of the instrument, for example on a daily basis or more frequently so as to account not only for the specific instrument but also for instrumental drift such as may be due to temperature changes. This utilizes the same type of standard sample as for the preliminary steps described above, e.g. air (water vapor) or talc. It is preferable, but not necessary, to use the same actual sample, although any other sample should be consistent with the original in having the same spectrum. Spectral data S.sub.2.sup.C is taken for such a standard sample with the operational aperture that is used for the ordinary operation with test samples. This generally is the conventional, larger aperture, e.g. 8.4 mm for resolution of 4 cm.sup.-1 at 6530 cm.sup.-1. With the larger aperture, there are relationships similar to those set forth above for the smaller aperture. Thus, measured operational spectral data S.sub.2.sup.C for the standard sample under the operational condition ("operational spectral data") are related to its true spectrum S.sub.0.sup.C by S.sub.2.sup.C =S.sub.0.sup.C *J.sub.2. This sample also has its idealized spectrum S.sub.I2.sup.C given by S.sub.I2.sup.C =S.sub.0.sup.C *J.sub.I2 so that its idealized spectrum is determinable from S.sub.I2.sup.C=S.sub.2.sup.C *(J.sub.I /J.sub.2). The function J.sub.2 is the characterizing J-stop function for the larger aperture. The idealized function J.sub.I is a second idealized function associated with the larger aperture (the subscript "2" being omitted), and is preselected. Another J-function relationship is J.sub.2 =J.sub.I1 *δJ, wherein the mathematical function δJ is another conversion factor, so that δJ=S.sub.2.sup.C /S.sub.I1.sup.C =(S.sub.2.sup.C /S.sub.1.sup.C)*(J.sub.1 /J.sub.I1) which is determined by the measured S.sub.2.sup.C and the previously determined S.sub.I1.sup.C. A conversion factor F is defined as a ratio F=J.sub.I /J.sub.2, so that F=(J.sub.I /J.sub.1)*(S.sub.1.sup.C /S.sub.2.sup.C) Eq. 1 which can be computed from a preselected ideal J.sub.I, the theoretical or otherwise determined J.sub.1, and the measured S.sub.1.sup.C and S.sub.2.sup.C. In a further selected format for storing, either this factor F is saved, or its components of the equation are saved, for application to data for test samples. (As used herein and in the claims, unless otherwise indicated, the terms "store" and "save" refer to either separate storing of such components as such or in one or more pre-computed relationships of the components to be used in computing the factor F.) This equation is set forth in a simple form to show the basic relationship. However, for reasons of numerical stability in the computations, a function or spectral data having a larger width should be divided by one having a smaller width. Therefore, in this and the other equations herein, the computations should be carried out in a sequence that achieves this. For example, it is preferable to put the equation in the form: F'=J.sub.I /[J.sub.1 *(S.sub.2.sup.C /S.sub.1.sup.C)]) Eq. 1a A preferred sequence is to first calculate the mathematical function (ratio) S.sub.2.sup.C /S.sub.1.sup.C, multiply this by J.sub.1, and then divide the result into J.sub.I. For this reason, it is advantageous to save the instrument components J.sub.I and J.sub.1 along with the standard data S.sub.1.sup.C separately to go with the instrument. As S.sub.2.sup.C is later obtained periodically with the operational condition, this also is saved or is immediately incorporated into a computation of F' which then is saved. In ordinary operations of the instrument, spectral data S.sub.2.sup.T are then obtained for one or more test samples using the operational aperture. For each test sample, idealized spectral data S.sub.I.sup.T are computed from a further relationship S.sub.I.sup.T =S.sub.2.sup.T *F. This is the desired spectral information that is displayed, and is substantially independent of instrument (within a family of instruments) and of ordinary variations in an instrument. For the theoretical approach, it is recognized that J.sub.1 is based essentially on wavenumber resolution δσ which in turn is dependent on aperture size and also is proportional to wavenumber σ. An aperture small enough for the resolution to approach interferometer resolution is selected as the smaller aperture for standardizing. The standard function J.sub.I for the standardizing aperture may be approximated theoretically (in a conventional manner) by a rectangle having a width W=β.sup.2 σ/8 where β=sin.sup.-1 (d/f), d is the standardizing aperture diameter for that resolution, and f is the focal length of the lens 23 that focuses the interference beam onto the sample (e.g. d=4.2 mm and f=120 mm). In this case, a separate wavelength calibration may be performed and incorporated into the correction. Although a theoretical value for J.sub.1 may be used in Eq. 1, this J-factor generally will vary from instrument to instrument. A potentially more precise approach is to utilize a standard sample that has a known, predetermined "true" spectrum S.sub.0.sup.B. Such information is available for certain materials, particularly gases, from libraries of standards, for example in the database HITRAN™ of the United States Air Force Geophysics Laboratory as "USF HITRAN-PC" provided by University of South Florida, Tampa Fla. 33620, Version 2.0 (1992) supplimented by versions 2.2 (Aug. 30, 1993) and 2.41 (Aug. 18, 1995). A commercial database is available from Ontar Corporation, North Andover, Mass., which includes basic spectral data as well as program software for searching and plotting, and for correcting for pressure, temperature and path length using conventional theory (discussed in section 6.1 of the Version 2.0 text). Both of these databases are incorporated herein by An advantage is that this information is in computer format, eliminating the need to enter hard data. HITRAN information is available for methane and carbon monoxide gases which have spectral features (peaks across the selected spectral range) suitable for the present purpose. A standard sample is formed of a cell containing such a selected one or more such gases. The cell is formed, for example, of a quartz tube (e.g. 1 cm long, 22 mm diameter) with planar end windows that are non-parallel to prevent auxiliary fringes. A measurement with such a gas cell may be used to omit the need for the J.sub.1 function and any initial (factory) determinations with a smaller aperture, by using an equation S.sub.2.sup.C = S.sub.0.sup.C *J.sub.2 where S.sub.0.sup.C is known from a HITRAN database or the like. In this case the conversion factor becomes: F"=J.sub.I /(S.sub.2.sup.C /S.sub.0.sup.C) Eq. 2 For reasons given previously, a preferred sequence is to effect the parenthetic ratio first. The idealized data S.sub.I.sup.T =S.sub.2.sup.T *F" is then computed as indicated above. This sequence is particularly suitable with the fundamental spectral data S.sub.0.sup.C having a known profile. Also, especially for FTIR, logarithmic steps are advisable where the width of the idealized function J.sub.I for spectral line shape is proportional to wavenumber as set forth elsewhere herein as explained below. There is no standardizing condition with corresponding standardizing data in the aspect of Eq. 2, this being replaced by fundamental data. For the purpose of a generic term, the word "calibribration" herein encompasses standardizing and fundamental with respect to spectral data. Standardizing refers to a condition such as a finite aperture that allows practical measurement, and fundamental refers to a hypothetical condition such a zero aperture. It may not be desirable or practical to utilize such a gas cell for the standard sample measurement under operational conditions. In this case a more usable standard such as talc may be used, and a further, basic sample such as a cell of methane and carbon dioxide having predetermined fundamental spectral data S.sub.0.sup.B is used for an initial calibration (e.g. at the factory). The precise data for S.sub.0.sup.B is obtained, and corrected for pressure and temperature, from information included in the HITRAN database. This data is saved in a selected format, either as a ratio or preferably separately for future use. The instrument is initially operated with this basic sample with the smaller aperture to obtain basic spectral data S.sub.1.sup.B for the basic sample. This data is saved along with the other information for F in a further selected format, i.e. either separately or combined with one or both of the data sets for the standard sample. For F, the calibration filter J.sub.1 is established by the relationship J.sub.1 =S.sub.1.sup.B /S.sub.0.sup.B. In this case the transformation function becomes: F"'=J.sub.I /(S.sub.1.sup.B /S.sub.0.sup.B *S.sub.2.sup.C /S.sub.1.sup.C)Eq. 3 For reasons given previously, the computations may not actually be effected in the order shown. A preferred version, which also preferably utilizes the additional components J.sub.1, J.sub.I1, and S.sub.I1.sup.C which are explained above, is: F"'=J.sub.I /[J.sub.I1 *(S.sub.2.sup.C /S.sub.1.sup.C)] Eq. 3a where S.sub.I1.sup.C =S.sub.1.sup.C *J.sub.I1 /J.sub.1). Computational sequence is determination first of J.sub.1 =S.sub.1.sup.B /S.sub.0.sup.B, then ratio J.sub.I1 /J.sub.1, S.sub.I1.sup.C, the ratio S.sub.2.sup.C /S.sub.I1.sup.C, the multiplication with J.sub.I1, and the final division into J.sub.I. For this, J.sub.I1, must also be preselected. The idealized data S.sub.I.sup.T = S.sub.2.sup.T *F" is then computed as indicated above. It is convenient to predetermine and store J.sub.1 and J.sub.I1, or J.sub.1 and J.sub.I1 /J.sub.1, as well as S.sub.I1, and provide these components with the instrument. It may be noted that mathematically J.sub.I1 cancels out in Eq. 3. However, this component is useful for maintaining numerical stability in computing the matrix ratios (deconvolutions) in the sequence. As previously indicated, a function or spectral data having a larger width should be divided by one having a smaller width. Thus J.sub.I1 should have a FWHM significantly greater than that of J.sub.1, e.g. 1.5 cm.sup.-1 for a J.sub.1 of 1.0 cm.sup.-1 at 6530 cm.sup.-1. Similarly, J.sub.I should have a FWHM greater than that of J.sub.2, e.g. 4 cm.sup.-1 at this wavenumber. As indicated above, it is not necessarily desirable to actually save or compute the conversion factor F (or F' or F" or F"'), as for computational sequencing it may be advantageous to save its component vector and matrix data separately, or in computed sub-units, and apply several matrix operations at the time of test sample computations. Therefore, reference to this conversion factor in the claims is to be interpreted as equivalent to its components with respect to storage and computations. (The factors F', F" and F"' collectively may be termed F hereafter and in the claims, as they all are specified by or derived from Eq. 1 depending on which components are known.) In the case of FTIR, the resolution δσ attributable to the effects of the aperture is proportional to the wavenumber σ, such that δσ/σ is a constant c. Therefore, the J functions are also proportional to the wavenumber and would require a series of such functions and computation across the spectral range, thus complicating the computations and selection of a function. To account for this according to a preferred embodiment of the invention, the wavenumber axis of the measured spectral data is transformed into logarithmic space where the resolution is independent of wavenumber. This advantageously is achieved by first defining a scale in linear space (FIG. 3), wherein the unit spacing 60 is proportional to wavenumber for the original data. The original data is interpolated into the scaled points. Two conventional ways are Lagrange interpolation and interpolation with truncated sinc functions (approximating massive zero padding FT interpolation). Corresponding points 62 in logarithm space will result in equal J width (independent of σ). The mathematics of the scaling involves the concept that the width in log space is W.sub.L =log[(σ-δσ)/σ]=log(1-c) which is a constant. The conversion puts each data point at an edge of an increment, so in a further step the axis is shifted so as to center the data properly to represent absolute wavenumber position. The preferred sequence of steps is scaling (interpolation), logarithm and center-shift, but these may be combined into one matrix. The linear data are then converted into logarithm data by conventional computer matrix procedures into the logarithm space by taking a logarithm ("log", e.g. base 10 or base e) of the original data. In the log space the points are uniformly spaced with the equal resolutions. Preferably more points are used in log space than linear space, e.g. 4 or 5 times more. The number of points in log space is conveniently rounded to a power of 2; e.g. 256 points from 40 in linear. The above spectral data S are converted into the log space after A is applied in linear space. The filter is calculated in log space and is applied to the data via the factor F. A problem is that a rectangular function with sharp corners, being the basic form of J in ordinary space, cannot be converted sufficiently by logarithm. Therefore, this function is modified to a form 66 (FIG. 4) having rounded upper corners and tails at the lower corners formed of a rectangle multiplied with a smooth, symmetrical function such as a small Gaussian. The exact form is not critical as the choice of the ideal function J.sub.I is arbitrary provided it is reasonably close to actual line shape. For this function, the starting rectangle width is δσ=1 cm.sup.-1, and for its Gaussian δσ=1.5 cm.sup.-1 (both at σ=6530 cm.sup.-1 before conversion to log space), to effect a profile with full width at half maximum (FWHM) slightly greater than 1 (at σ=6530 cm.sup.-1). For the apodization A, a suitable FWHM for the sinc function is 0.6 cm.sup.-1 at 1 cm.sup.-1 J-stop. The functions for J.sub.I and A are normalized to unit area. When the computations for idealized data S.sub.I.sup.T are carried out in log space, these data are then converted and re-interpolated by conventional antilogarithm and Lagrange (or other) procedures back to linear space to determine the final spectral information that is displayed. In matrix form the computation for the idealized spectral information S.sub.I.sup.T to be presented is S.sub.I.sup.T =D.sub.2 *L*F*L' Eq. 4 where D.sub.2 represents preliminary, pre-logarithm data for a sample (after Fourier transform), L includes a conventional logarithm filter, F is determined according to Eq. 1, and L' includes the reverse logarithm matrix. The logarithmic filter also includes interpolation, axis scaling and centering, and the reverse matrix also includes reverse interpolation and axis scaling. (A further centering shift is not necessary in reverse.) Each of the data sets in F is already in logarithm form, as a result of logarithm conversions before being applied in the computations of the Eqs. 1 through 3a that are relevant. A flow chart (FIG. 5) with reference to Eq. 3a illustrates the procedures for the case of using a standard sample together with a basic sample having predetermined fundamental spectral data. Items in the chart represent computational steps or computer means for effecting computations and saving. Although each computational element (data vector or matrix function) is shown to be separately stored or saved, as pointed out above some of the components may be combined by multiplication (or convolution) and stored as a single matrix or vector. Thus, herein and in the claims, successive convolutions and deconvolutions by such components are considered equivalent to direct multiplication by a combined factor F"'. Initially the selected idealized functions J.sub.I1 and J.sub.I, and the apodization factor A (if utilized), are determined and stored 68, 70, 71. The basic data S.sub.0.sup.B for the basic sample also are stored 72. This stored basic data has been derived as necessary from the published data S.sub.I1 (HITRAN or the like) by apodizing 76 with the apodization factor via S.sub.I1 *A. The stored basic data also has been interpolated 78, and logarithm applied and axis shifted 80. Also initially (generally at the factory), the instrument is operated 82 to obtain signal data for the basic sample 84 using the small, standardizing aperture. The Fourier transform (FT, which preferably includes initial conventional wavenumber calibration and application of apodization A) is applied 86 to the signal data to effect initial basic spectral data D.sub.1.sup.B. The transformed data are interpolated 78 to a scale proportional to wavenumber. The logarithm matrix is then applied 80 with axis-shifting to center the points. The resulting standard spectral data S.sub.1.sup.B are stored 88. Similarly (e.g. initially at the factory) the instrument is again operated 82 to obtain spectral data for the standard sample 90 using the same standardizing aperture. The FT is applied 86 to the signal data to effect initial standard spectral data D.sub.1.sup.C for this sample. This transformed data are interpolated 78, and the logarithm matrix is applied 80 with axis-shifting. The resulting standard spectral data Slc are stored 92. In the ordinary location and situation of utilizing the instrument, the larger operational aperture is used, although the initial standardizing steps (above) also can be performed in this location. At least initially, and preferably periodically, the instrument is operated 94 again to obtain spectral data for the same standard sample 90 (or another of the same material) using the operational aperture. The FT is applied 86 to the corresponding signal data to effect initial spectral data D.sub.2.sup.C for this sample with the larger aperture. The transformed data are interpolated 78, the logarithm matrix and shifting are applied 80. An instruction is provided 96 by an operator (or automatically with selection of the standard sample) that this is the standard sample, and the resulting operational spectral data S.sub.2.sup.C is saved 98. At this point all components for the factor F (actually F"' of Eq. 3) are available. As pointed out above with respect to Eq. 3a, there is a preferred sequence in the computations for relating components into the factor F"'. The function J.sub.1 is computed 100 first from data of the basic sample, and then the ratio J.sub.I1 /J.sub.1 is computed 102 from the preselected J.sub.I1. Next the intermediate relationship S.sub.I1.sup.C =S.sub.1.sup.C* (J.sub.I1 /J.sub.1) is computed 104 from this ratio and the standard data S.sub.1.sup.C. The foregoing constitutes data that is stored permanently for the instrument, e.g. in a disk. The components that advantageously are stored are S.sub.I1.sup.C and J.sub.I1 or S.sub.1.sup.C and J.sub.1. When the operational data S.sub.2.sup.C are obtained 98, the ratio S.sub.2.sup.C /S.sub.I1.sup.C or alternatively S.sub.2.sup.C /S.sub.1.sup.C is computed 108. Then the multiplication with J.sub.I1 or alternatively J.sub.1 is computed 106 and, finally, the factor F"' is computed 110 and saved 112 for routine use. The instrument is operated 94 on one or more test samples 114 to obtain normal signal data for each test sample using the operational aperture. The FT is applied 86 to the signal data to effect initial spectral data D.sub.2.sup.T. The transformed data are interpolated 78, and the logarithm matrix with shifting is applied 80. Lacking specific designation 96 as standardizing, the resulting spectral data S.sub.2.sup.T are saved 116 as test data. The previously stored transformation function F"' is applied 118 to this test data, to yield a logarithmic form of the idealized spectral data S.sub.I.sup.T ' for the test sample. The antilogarithm matrix is applied 120, then reverse interpolation is computed 122, to provide the final idealized spectral information S.sub.I.sup.T which is displayed 124 for the test sample. Similarly (FIG. 6), if a theoretical value for the filter J.sub.1 is used, this is saved 126 and utilized in the function F' (Eq. 1a) in place of data for a basic sample. For sequential computation, the ratio S.sub.2.sup.C /S.sub.1.sup.C first should be computed 128, and then its multiplication with J.sub.1 is computed 130. From this and J.sub.I, F' is computed 132. As all other relevant steps are the same or substantially the same as for FIG. 5, the flow sheet incorporates rest of the numeral designations the same as described above for FIG. 5. A simpler procedure (FIG. 7) is used for the case in which the true spectral data S.sub.0.sup.C for the standard sample is predetermined, for example for a methane and carbon dioxide cell with HITRAN data. In this case the series of steps with a basic sample is omitted, and the true spectral data is stored 72 (after having been interpolated, logarithm applied and shifted). The ratio with S.sub.2.sup.C is computed 134 and utilized with J.sub.I for computation 136 of the factor F" of Eq. 2. As other relevant steps are the same or substantially the same, the flow sheet incorporates the same remaining numeral designations as described above for FIGS. 5 and 6. The three major filters L, F and L' are respectively in the form of bands 138, 138', 138" as illustrated in FIGS. 8a, 8b and 8c. The matrix elements 140 outside the bands are zero. (The designation "nz" is number of zeros in the matrix.) For the filters L and L', the respective logarithm and antilogarithm matrices are conventional, being implemented in computational programs such as the aforementioned MATLAB. This matrix includes Lagrange (or other) interpolation which may be obtained from the MATLAB program, among others. The axis scaling and shifting are also included and readily implemented by selecting points with the spacing and shifting. The matrices for logarithm, interpolation, scaling and shifting may be combined into a single filter L or applied individually. Similarly, the matrices for antilogarithm, interpolation and scaling may be combined into a single filter L' or applied individually. The filter F is a matrix determined from Eq. 1. The numbers in each band of the three matrices are generally close to one except rounding at the corners and tailing to zero near the edges. FIG. 9 shows an upper portion of the combined matrix L*F*L', showing spreading that results from the convolution of the matrices of FIGS. 8a-8c. The spreading at the top is due to the opposite "curvatures" of the matrices of FIGS. 8a and 8c. Care should be exercised in deconvolution with small numbers approaching zero, as enormous numerical noise can be introduced into the results from division by very small numbers. This is achieved by eliminating very small numbers in FT space (after Fourier transform and before logarithm), e.g. those smaller than 1% of the largest number. The published data used for the true (i.e. fundamental or basic) spectral data, although corrected for pressure and temperature, has no corrections for incidental absorptions and reflections that occur in the use of a cell with the corresponding sample gas. Such incidental contributions effectively result in a shift of the baseline, i.e. the vertical level of the horizontal wavenumber axis relative to the transmission data. It is advantageous to compensate for this shift. Using the case of FIG. 7 and Eq. 2 as an example, a way of compensating is to obtain spectral data S.sub.2i.sup.C (FIG. 10) for the standard sample gas 90 with the operating condition 94, where the subscript "i" designates that this is generally in an initial situation such as at the factory. A horizontal baseline BL=[(S.sub.2i.sup.C /S.sub.0.sup.C)/(1/S.sub.0.sup.C -1)].sub.av is computed 138 where the subscript av designates averaging over the the selected wavenumber range for the spectral data. The computed baseline is stored 140, and then it is applied back to S.sub.2i.sup.C to compute 142 corrected standard spectral data S.sub.2.sup.C with a formula S.sub.2.sup.C = (BL-S.sub.2i.sup.C)/BL. This computation also incorporates a conversion related to the fact that the present instrument provides transmission data if the published (e.g. HITRAN) data is absorption-type data. This corrected data is the S.sub.2.sup.C that is stored and utilized in the computation of the transformation function F. Although a horizontal, linear baseline detemined as above should be sufficient, more generally the baseline is a function computed from the initial spectral data and the true spectral data with a conventional or other desired procedure. The same type of correction is made for the case of FIG. 5 where a basic sample is used. The data S.sub.0.sup.B, S.sub.2i.sup.B and S.sub.2.sup.B are substituted respectively for S.sub.0.sup.C, S.sub.2i.sup.C and S.sub.2.sup.C. More broadly, the data associated with the sample with known fundamental data are used to make the baseline correction. The accuracy of a computational structure of the present invention for standardizing spectral information may be checked by a comparison of data for a basic sample. Measurement is made with the operational condition to obtain data which is baseline corrected (as above) to effect measured spectral data S.sub.2.sup.C for the standard sample having known true data (e.g. FIG. 7). Standardized spectral information S.sub.I.sup.C for this sample is determined from the relationship S.sub.I.sup.C =S.sub.2.sup.C *F, with (if otherwise used) apodization, intervening interpolation, log and shift, and then antilog and re-interpolation. Corresponding hypothetical spectral information S.sub.H.sup.C are computed from the fundamental data by S.sub.H.sup.C =S.sub.0.sup.C *J.sub.I, the subscript "H" referring to hypothetical. Apodization (if used) also is applied to the latter computation, so that the actual latter formula is S.sub.H.sup.C =S.sub.0.sup.C *A*J.sub.I. The spectral information S.sub.I.sup.C and S.sub.H.sup.C are compared, either by presentation of the two sets of data for observation, or preferably by calculation of the differences across the wavenumber range. In an example comparison, using the specific instrument and conditions referenced herein, it was found that the differences were less than 3%. A computer readable storage medium 35 (FIG. 1) such as a hard disk of the computer, or a portable medium such as a floppy disk, CD-ROM or tape is advantageous for use with the instruments described herein. The disk (or other storage medium) has data code and program code embedded therein so as to be readable by the computing means. With reference to FIG. 5, the data code includes at least the idealized function J.sub.I for spectral line shapes, and standard spectral data S.sub.1.sup.C obtained for a standard sample with the standardizing condition. If used, the further idealized function J.sub.I1 also is included. The program code includes means for establishing the standard function J.sub.1 that relates the standard spectral data to the true spectral data, and means for relating the idealized function, the standard function, the standard spectral data and the operational spectral data S.sub.1.sup.C (for the standard sample with the operational condition) with the transformation function F (F"' in Eq. 3 or 3a and FIG. 5). The program code further includes means for computing standardized spectral information S.sub.I.sup.T for the test sample corrected for the intrinsic distortion by application of the transformation function F to test spectral data obtained for a test sample with the operational condition. The stored means for establishing the standard function J.sub.1 may be theoretical, as with FIG. 6 and Eq. 1a, in which case the stored means comprises the theoretical formula; or, intended as an equivalent in the claims, a precomputed theoretical J.sub.1 is included in the data code. Alternatively the stored means may utilize basic sample data S.sub.0.sup.B and S.sub.1.sup.B as set forth in Eq. 3a with respect to FIG. 5. In the latter case, S.sub.0.sup.B is also stored on the floppy disk after having been apodized, interpolated, logarithm applied and shifted as explained above. In another embodiment, with functional means represented in FIG. 7, the disk is set up for the case in which the standard sample has known fundamental data. In such case, the data code includes an idealized function J.sub.I for spectral line shape, and fundamental spectral data S.sub.0.sup.C for the standard sample. The program code includes means for relating the idealized function, the standard spectral data and the operational spectral data S.sub.2.sup.C with a transformation function F (F" in FIG. 7). The program code further includes means for computing standardized spectral information S.sub.I.sup.T for the test sample corrected for the intrinsic distortion by application of the transformation function F to test spectral data obtained for a test sample with the operational condition. A floppy disk (or other portable storage medium) may be provided with certain minimal data for use by an instrument already having the programming means and certain data functions incorporated into the instrument computer (e.g. hard disk). In such a case, the data code includes an idealized function J.sub.I1 for spectral line shape associated with the standardizing condition, and standard spectral data S.sub.I1.sup.C, obtained for a standard sample with the standardizing condition. Alternatively, the data code include J.sub.1 and S.sub.1.sup.C. The idealized function and the standard spectral data have the aforedescribed cooperative relationship for application to test spectral data obtained for a test sample with the operational condition. A direct relationship is S.sub.1.sup.C /J.sub.1 ; however, a preferred relationship is S.sub.I1.sup.C =S.sub.1.sup.C *(J.sub.I1 /J.sub.1), where J.sub.I1 is a second idealized function as set forth above, which is a component in the sequence of computations for F. Either of these relationships may be precomputed and stored on the disk but, as indicated previously, the components are preferably kept separate on the disk to allow the preferred computational sequence. Such data on a separate disk is useful for updating or changing the standard sample for an instrument that already incorporates the invention. In a further embodiment, a floppy disk or other storage medium is provided for use in standardizing spectral information in a spectrometric instrument that includes logarithmic transformation in the standardization as described above. The disk has data code readable by the computing means of the instrument, wherein the data code comprises fundamental spectral data for the basic sample. The fundamental spectral data is in a form 72 (FIG. 5) that is apodized, interpolated, logarithm applied and axis shifted. The fundamental data may also be in such a form for a standard sample having such data (FIG. 7, Eq. 2). This disk is useful for providing updated or replacement fundamental spectral data to an instrument already incorporating a standardization, and is particularly useful with the data being in a directly usable form. A further embodiment (FIG. 11) omits the need for any standard or basic sample (except for conventional calibration purposes outside the purview of the invention). This utilizes the J-stop function J.sub.I =S.sub.2.sup.C /S.sub.0.sup.C. From Eq. 2 the transformation function is restated as: F.sup.v =J.sub.I /J.sub.2 Eq. 5 The idealized function J.sub.I is specified as explained above. A technique for determining J.sub.2 in log space is to operate 144 the instrument with or without a sample to yield a series of measurements 146 of energy data E collected by the detector for a set of J-stop aperatures having different aperture sizes of radius r. The instrument should be operated under conditions intended for its use. The aperture should be varied in a series ranging from the smallest to the operational size, preferably spaced at equal intervals in terms of r.sup.2. Two such aperture sizes 22, 22' are shown in FIG. 2. Wavenumber and other parameters of the energy band are not important except for maintaining these constant in the measurement series. A sample is not necessary but may be in place. The energy E is an energy total, for example being either the entire total or a centerburst of the interferogram which may be detected with step changes in aperature size. Alternatively, with a fixed beam the aperture size may be scanned rapidly as with an iris, with a continuous measurement of energy E without recourse to an actual interferometric scan. From the data 146 a derivative of the energy with respect to aperture size, preferably dE/dr.sup.2, is computed 148 conventionally. A derivative of energy with respect to wavenumber σ is desired, preferably in log space for reasons set forth above. Thus converting to a derivative dE/d(lnσ) is advantageous. Light of true optical wavenumber σ.sub.0 passing through a J-stop aperature having radius r is incident in the interferometer at an angel θ relative to the central ray such that tan θ=r/f and a wavenumber spread is given by σ=σ.sub.0 cos θ where f is the focal distance defined above. From these relationships and interpolation for the logarithm, a relationship for the J-stop function J.sub.2 is computed 150. J.sub.2 =dE/d(lnσ)=-2f.sup.2 (dE/dr.sup.2) Eq. 6 where σ=σ.sub.0 (1-r.sup.2 /2f.sup.2), and the negative sign indicates that lineshape broadens to lower frequency as the J-stop is opened. It may be noted that σ.sub.0 is not specified and can be arbitrary as there is no significant dispersion of optical frequency across the J-stop in a properly designed instrument. The function J.sub.2 is stored 152. As in previous embodiments, the instrument is operated 94 on one or more test samples 114 to obtain normal signal data for each test sample using the operational aperture. The Fourier transform (FT) is applied 86 to the signal data to effect initial spectral data D.sub.2.sup.T. The transformed data are interpolated 78, and the logarithm matrix with shifting is applied 80. The resulting spectral data S.sub.2.sup.T are saved 116 as test data. The previously stored transformation function F.sup.v of J.sub.1 and J.sub.2 (Eq. 5) is applied 118 to this test data, to yield a logarithmic form of the idealized spectral data S.sub.I.sup.T for the test sample. The antilogarithm matrix is applied 120, then reverse interpolation is computed 122, to provide the final idealized spectral information S.sub.I.sup.T which is displayed 124 for the test sample. While the invention has been described above in detail with reference to specific embodiments, various changes and modifications which fall within the spirit of the invention and scope of the appended claims will become apparent to those skilled in this art. Therefore, the invention is intended only to be limited by the appended claims or their equivalents.
{"url":"http://www.google.fr/patents/US6049762","timestamp":"2014-04-21T04:48:32Z","content_type":null,"content_length":"192780","record_id":"<urn:uuid:84101ebd-ee71-44e9-838e-d1225a59ad0a>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Harrisburg, TX Calculus Tutor Find a Harrisburg, TX Calculus Tutor ...I am a retired state certified teacher in Texas both in composite high school science and mathematics. I offer a no-fail guarantee (contact me via WyzAnt for details). I am available at any time of the day; I try to be as flexible as possible. I try as much as possible to work in the comfort of your own home at a schedule convenient to you. 35 Subjects: including calculus, chemistry, physics, statistics ...I was a counselor in charge of a 6-15 person group of elementary school kids. We participated in activities including learning Math and Science, solving mysteries, and playing sports. I worked on a daily basis with elementary school kids and grew very fond of them. 22 Subjects: including calculus, chemistry, physics, geometry ...I have taught statistics, experimental design, technical writing and various subjects and levels in college psychology, including introduction, personality theory, research methods and psychological measurement (i.e. testing). My specialty is statistics, and in addition to teaching I have served ... 20 Subjects: including calculus, writing, algebra 1, algebra 2 ...By expanding one's daily vocabulary and improving reading speed, reading comprehension soon follows. I have more than 20 years of experience in working with students to improve their vocabulary, reading rate and reading comprehension. Writing is the last skill mastered by learners in any language, following listening, speaking and reading. 22 Subjects: including calculus, reading, English, grammar ...Regardless of what subject I am working with someone on, I will strive to make sure the student understands. Here is a list of the subjects I've have taught or am capable of teaching: Math- Pre-Algebra High school, Linear and College Algebra, Geometry, Pre-Calculus, Trigono... 38 Subjects: including calculus, chemistry, reading, physics Related Harrisburg, TX Tutors Harrisburg, TX Accounting Tutors Harrisburg, TX ACT Tutors Harrisburg, TX Algebra Tutors Harrisburg, TX Algebra 2 Tutors Harrisburg, TX Calculus Tutors Harrisburg, TX Geometry Tutors Harrisburg, TX Math Tutors Harrisburg, TX Prealgebra Tutors Harrisburg, TX Precalculus Tutors Harrisburg, TX SAT Tutors Harrisburg, TX SAT Math Tutors Harrisburg, TX Science Tutors Harrisburg, TX Statistics Tutors Harrisburg, TX Trigonometry Tutors Nearby Cities With calculus Tutor Alta Loma, TX calculus Tutors Bordersville, TX calculus Tutors El Jardin, TX calculus Tutors Galena Park calculus Tutors Greenway Plaza, TX calculus Tutors Howellville, TX calculus Tutors Inks Lake Village, TX calculus Tutors Lomax, TX calculus Tutors Oak Forest, TX calculus Tutors Pine Valley, TX calculus Tutors Satsuma, TX calculus Tutors Sunny Side, TX calculus Tutors Sylvan Beach, TX calculus Tutors Timber Cove, TX calculus Tutors Tod, TX calculus Tutors
{"url":"http://www.purplemath.com/Harrisburg_TX_Calculus_tutors.php","timestamp":"2014-04-21T02:31:18Z","content_type":null,"content_length":"24305","record_id":"<urn:uuid:f122cf8a-c319-4e31-be68-05bd8a21c620>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
Levenshtein distance This article needs additional citations for verification. (February 2010) In information theory and computer science, the Levenshtein distance is a string metric for measuring the difference between two sequences. Informally, the Levenshtein distance between two words is the minimum number of single-character edits (insertion, deletion, substitution) required to change one word into the other. The phrase edit distance is often used to refer specifically to Levenshtein distance. It is named after Vladimir Levenshtein, who considered this distance in 1965.^[1] It is closely related to pairwise string alignments. Mathematically, the Levenshtein distance between two strings $a, b$ is given by $\operatorname{lev}_{a,b}(|a|,|b|)$ where $\qquad\operatorname{lev}_{a,b}(i,j) = \begin{cases} \max(i,j) & \text{ if} \min(i,j)=0, \\ \min \begin{cases} \operatorname{lev}_{a,b}(i-1,j) + 1 \\ \operatorname{lev}_{a,b}(i,j-1) + 1 \\ \ operatorname{lev}_{a,b}(i-1,j-1) + 1_{(a_i eq b_j)} \end{cases} & \text{ otherwise.} \end{cases}$ where $1_{(a_i eq b_j)}$ is the indicator function equal to 0 when $a_i = b_j$ and 1 otherwise. Note that the first element in the minimum corresponds to deletion (from $a$ to $b$), the second to insertion and the third to match or mismatch, depending on whether the respective symbols are the For example, the Levenshtein distance between "kitten" and "sitting" is 3, since the following three edits change one into the other, and there is no way to do it with fewer than three edits: 1. kitten → sitten (substitution of "s" for "k") 2. sitten → sittin (substitution of "i" for "e") 3. sittin → sitting (insertion of "g" at the end). Upper and lower bounds[edit] The Levenshtein distance has several simple upper and lower bounds. These include: • It is always at least the difference of the sizes of the two strings. • It is at most the length of the longer string. • It is zero if and only if the strings are equal. • If the strings are the same size, the Hamming distance is an upper bound on the Levenshtein distance. • The Levenshtein distance between two strings is no greater than the sum of their Levenshtein distances from a third string (triangle inequality). In approximate string matching, the objective is to find matches for short strings in many longer texts, in situations where a small number of differences is to be expected. The short strings could come from a dictionary, for instance. Here, one of the strings is typically short, while the other is arbitrarily long. This has a wide range of applications, for instance, spell checkers, correction systems for optical character recognition, and software to assist natural language translation based on translation memory. The Levenshtein distance can also be computed between two longer strings, but the cost to compute it, which is roughly proportional to the product of the two string lengths, makes this impractical. Thus, when used to aid in fuzzy string searching in applications such as record linkage, the compared strings are usually short to help improve speed of comparisons. Relationship with other edit distance metrics[edit] There are other popular measures of edit distance, which are calculated using a different set of allowable edit operations. For instance, • the Damerau–Levenshtein distance allows insertion, deletion, substitution, and the transposition of two adjacent characters; • the longest common subsequence metric allows only insertion and deletion, not substitution; • the Hamming distance allows only substitution, hence, it only applies to strings of the same length. Edit distance is usually defined as a parametrizable metric calculated with a specific set of allowed edit operations, and each operation is assigned a cost (possibly infinite). This is further generalized by DNA sequence alignment algorithms such as the Smith–Waterman algorithm, which make an operation's cost depend on where it is applied. Computing Levenshtein distance[edit] A straightforward recursive implementation LevenshteinDistance function takes two strings, s and t, and returns the Levenshtein distance between them: // len_s and len_t are the number of characters in string s and t respectively int LevenshteinDistance(string s, int len_s, string t, int len_t) { /* test for degenerate cases of empty strings */ if (len_s == 0) return len_t; if (len_t == 0) return len_s; /* test if last characters of the strings match */ if (s[len_s-1] == t[len_t-1]) cost = 0; else cost = 1; /* return minimum of delete char from s, delete char from t, and delete char from both */ return minimum(LevenshteinDistance(s, len_s - 1, t, len_t ) + 1, LevenshteinDistance(s, len_s , t, len_t - 1) + 1, LevenshteinDistance(s, len_s - 1, t, len_t - 1) + cost); } Unfortunately, the straightforward recursive implementation is very inefficient because it recomputes the Levenshtein distance of the same substrings many times. A better method would never repeat the same distance calculation. For example, the Levenshtein distance of all possible prefixes might be stored in an array d[][] where d[i][j] is the distance between the first i characters of string s and the first j characters of string t. The table is easy to construct one row at a time starting with row 0. When the entire table has been built, the desired distance is d[len_s][len_t]. While this technique is significantly faster, it will consume len_s * len_t more memory than the straightforward recursive implementation. Iterative with full matrix[edit] Note: This section uses 1-based strings instead of 0-based strings Computing the Levenshtein distance is based on the observation that if we reserve a matrix to hold the Levenshtein distances between all prefixes of the first string and all prefixes of the second, then we can compute the values in the matrix in a dynamic programming fashion, and thus find the distance between the two full strings as the last value computed. This algorithm, an example of bottom-up dynamic programming, is discussed, with variants, in the 1974 article The String-to-string correction problem by Robert A. Wagner and Michael J. Fischer.^[2] A straightforward implementation, as pseudocode for a function LevenshteinDistance that takes two strings, s of length m, and t of length n, and returns the Levenshtein distance between them: int LevenshteinDistance(char s[1..m], char t[1..n]) { // for all i and j, d[i,j] will hold the Levenshtein distance between // the first i characters of s and the first j characters of t; // note that d has (m+1)*(n+1) values declare int d[0..m, 0..n] clear all elements in d // set each element to zero // source prefixes can be transformed into empty string by // dropping all characters for i from 1 to m { d[i, 0] := i } // target prefixes can be reached from empty source prefix // by inserting every characters for j from 1 to n { d[0, j] := j } for j from 1 to n { for i from 1 to m { if s[i] = t[j] then d[i, j] := d[i-1, j-1] // no operation required else d[i, j] := minimum ( d[i-1, j] + 1, // a deletion d[i, j-1] + 1, // an insertion d[i-1, j-1] + 1 // a substitution ) } } return d[m, n] } Note that this implementation does not fit the definition precisely: it always prefers matches, even if insertions or deletions provided a better score. This is equivalent; it can be shown that for every optimal alignment (which induces the Levenshtein distance) there is another optimal alignment that prefers matches in the sense of this implementation.^[3] Two examples of the resulting matrix (hovering over a number reveals the operation performed to get that number): k i t t e n S a t u r d a y s 1 1 2 3 4 5 6 S 1 0 1 2 3 4 5 6 7 i 2 2 1 2 3 4 5 u 2 1 1 2 2 3 4 5 6 t 3 3 2 1 2 3 4 n 3 2 2 2 3 3 4 5 6 t 4 4 3 2 1 2 3 d 4 3 3 3 3 4 3 4 5 i 5 5 4 3 2 2 3 a 5 4 3 4 4 4 4 3 4 n 6 6 5 4 3 3 2 y 6 5 4 4 5 5 5 4 3 g 7 7 6 5 4 4 3 The invariant maintained throughout the algorithm is that we can transform the initial segment s[1..i] into t[1..j] using a minimum of d[i,j] operations. At the end, the bottom-right element of the array contains the answer. Proof of correctness[edit] As mentioned earlier, the invariant is that we can transform the initial segment s[1..i] into t[1..j] using a minimum of d[i,j] operations. This invariant holds since: • It is initially true on row and column 0 because s[1..i] can be transformed into the empty string t[1..0] by simply dropping all i characters. Similarly, we can transform s[1..0] to t[1..j] by simply adding all j characters. • If s[i] = t[j], and we can transform s[1..i-1] to t[1..j-1] in k operations, then we can do the same to s[1..i] and just leave the last character alone, giving k operations. • Otherwise, the distance is the minimum of the three possible ways to do the transformation: □ If we can transform s[1..i] to t[1..j-1] in k operations, then we can simply add t[j] afterwards to get t[1..j] in k+1 operations (insertion). □ If we can transform s[1..i-1] to t[1..j] in k operations, then we can remove s[i] and then do the same transformation, for a total of k+1 operations (deletion). □ If we can transform s[1..i-1] to t[1..j-1] in k operations, then we can do the same to s[1..i], and exchange the original s[i] for t[j] afterwards, for a total of k+1 operations • The operations required to transform s[1..n] into t[1..m] is of course the number required to transform all of s into all of t, and so d[n,m] holds our result. This proof fails to validate that the number placed in d[i,j] is in fact minimal; this is more difficult to show, and involves an argument by contradiction in which we assume d[i,j] is smaller than the minimum of the three, and use this to show one of the three is not minimal. Possible modifications[edit] Possible modifications to this algorithm include: • We can adapt the algorithm to use less space, O(min(n,m)) instead of O(mn), since it only requires that the previous row and current row be stored at any one time. • We can store the number of insertions, deletions, and substitutions separately, or even the positions at which they occur, which is always j. • We can normalize the distance to the interval [0,1]. • If we are only interested in the distance if it is smaller than a threshold k, then it suffices to compute a diagonal stripe of width 2k+1 in the matrix. In this way, the algorithm can be run in O(kl) time, where l is the length of the shortest string.^[4] • We can give different penalty costs to insertion, deletion and substitution. We can also give penalty costs that depend on which characters are inserted, deleted or substituted. • By initializing the first row of the matrix with 0, the algorithm can be used for fuzzy string search of a string in a text.^[5] This modification gives the end-position of matching substrings of the text. To determine the start-position of the matching substrings, the number of insertions and deletions can be stored separately and used to compute the start-position from the end-position. • This algorithm parallelizes poorly, due to a large number of data dependencies. However, all the cost values can be computed in parallel, and the algorithm can be adapted to perform the minimum function in phases to eliminate dependencies. • By examining diagonals instead of rows, and by using lazy evaluation, we can find the Levenshtein distance in O(m (1 + d)) time (where d is the Levenshtein distance), which is much faster than the regular dynamic programming algorithm if the distance is small.^[7] Iterative with two matrix rows[edit] It turns out that only two rows of the table are needed for the construction: the previous row and the current row (the one being calculated). The Levenshtein distance may be calculated iteratively using the following algorithm:^[8] int LevenshteinDistance(string s, string t) { // degenerate cases if (s == t) return 0; if (s.Length == 0) return t.Length; if (t.Length == 0) return s.Length; // create two work vectors of integer distances int[] v0 = new int[t.Length + 1]; int[] v1 = new int[t.Length + 1]; // initialize v0 (the previous row of distances) // this row is A[0][i]: edit distance for an empty s // the distance is just the number of characters to delete from t for (int i = 0; i <= v0.Length; i++) v0[i] = i; for (int i = 0; i < s.Length; i++) { // calculate v1 (current row distances) from the previous row v0 // first element of v1 is A[i+1][0] // edit distance is delete (i+1) chars from s to match empty t v1[0] = i + 1; // use formula to fill in the rest of the row for (int j = 0; j < t.Length; j++) { var cost = (s[i] == t[j]) ? 0 : 1; v1[j + 1] = Minimum(v1[j] + 1, v0[j + 1] + 1, v0[j] + cost); } // copy v1 (current row) to v0 (previous row) for next iteration for (int j = 0; j < v0.Length; j++) v0[j] = v1[j]; } return v1[t.Length]; } See also[edit] External links[edit] • Black, Paul E., ed. (14 August 2008), "Levenshtein distance", Dictionary of Algorithms and Data Structures [online], U.S. National Institute of Standards and Technology, retrieved 3 April 2013
{"url":"http://blekko.com/wiki/Levenshtein_distance?source=672620ff","timestamp":"2014-04-21T00:26:36Z","content_type":null,"content_length":"59355","record_id":"<urn:uuid:ab3582c6-6535-450b-a8e2-7921e0f3e167>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
Learning mixtures of Bayesian networks Results 1 - 10 of 14 - Journal of Computational Biology , 2000 "... DNA hybridization arrays simultaneously measure the expression level for thousands of genes. These measurements provide a “snapshot ” of transcription levels within the cell. A major challenge in computational biology is to uncover, from such measurements, gene/protein interactions and key biologica ..." Cited by 731 (16 self) Add to MetaCart DNA hybridization arrays simultaneously measure the expression level for thousands of genes. These measurements provide a “snapshot ” of transcription levels within the cell. A major challenge in computational biology is to uncover, from such measurements, gene/protein interactions and key biological features of cellular systems. In this paper, we propose a new framework for discovering interactions between genes based on multiple expression measurements. This framework builds on the use of Bayesian networks for representing statistical dependencies. A Bayesian network is a graph-based model of joint multivariate probability distributions that captures properties of conditional independence between variables. Such models are attractive for their ability to describe complex stochastic processes and because they provide a clear methodology for learning from (noisy) observations. We start by showing how Bayesian networks can describe interactions between genes. We then describe a method for recovering gene interactions from microarray data using tools for learning Bayesian networks. Finally, we demonstrate this method on the S. cerevisiae cell-cycle measurements of Spellman et al. (1998). Key words: gene expression, microarrays, Bayesian methods. 1. , 1998 "... In recent years there has been a flurry of works on learning Bayesian networks from data. One of the hard problems in this area is how to effectively learn the structure of a belief network from incomplete data---that is, in the presence of missing values or hidden variables. In a recent paper, I in ..." Cited by 220 (12 self) Add to MetaCart In recent years there has been a flurry of works on learning Bayesian networks from data. One of the hard problems in this area is how to effectively learn the structure of a belief network from incomplete data---that is, in the presence of missing values or hidden variables. In a recent paper, I introduced an algorithm called Structural EM that combines the standard Expectation Maximization (EM) algorithm, which optimizes parameters, with structure search for model selection. That algorithm learns networks based on penalized likelihood scores, which include the BIC/MDL score and various approximations to the Bayesian score. In this paper, I extend Structural EM to deal directly with Bayesian model selection. I prove the convergence of the resulting algorithm and show how to apply it for learning a large class of probabilistic models, including Bayesian networks and some variants thereof. - Statistics and Computing , 1998 "... Cross-validated likelihood is investigated as a tool for automatically determining the appropriate number of components (given the data) in finite mixture modelling, particularly in the context of model-based probabilistic clustering. The conceptual framework for the cross-validation approach to mod ..." Cited by 65 (4 self) Add to MetaCart Cross-validated likelihood is investigated as a tool for automatically determining the appropriate number of components (given the data) in finite mixture modelling, particularly in the context of model-based probabilistic clustering. The conceptual framework for the cross-validation approach to model selection is direct in the sense that models are judged directly on their out-of-sample predictive performance. The method is applied to a well-known clustering problem in the atmospheric science literature using historical records of upper atmosphere geopotential height in the Northern hemisphere. Cross-validated likelihood provides strong evidence for three clusters in the data set, providing an objective confirmation of earlier results derived using non-probabilistic clustering techniques. 1 Introduction Cross-validation is a well-known technique in supervised learning to select a model from a family of candidate models. Examples include selecting the best classification tree using cr... , 1998 "... We show how many different variants of Switching Kalman Filter models can be represented in a unified way, leading to a single, general-purpose inference algorithm. We then show how to find approximate Maximum Likelihood Estimates of the parameters using the EM algorithm, extending previous results ..." Cited by 58 (3 self) Add to MetaCart We show how many different variants of Switching Kalman Filter models can be represented in a unified way, leading to a single, general-purpose inference algorithm. We then show how to find approximate Maximum Likelihood Estimates of the parameters using the EM algorithm, extending previous results on learning using EM in the non-switching case [DRO93, GH96a] and in the switching, but fully observed, case [Ham90]. 1 Introduction Dynamical systems are often assumed to be linear and subject to Gaussian noise. This model, called the Linear Dynamical System (LDS) model, can be defined as x t = A t x t\Gamma1 + v t y t = C t x t +w t where x t is the hidden state variable at time t, y t is the observation at time t, and v t ¸ N(0; Q t ) and w t ¸ N(0; R t ) are independent Gaussian noise sources. Typically the parameters of the model \Theta = f(A t ; C t ; Q t ; R t )g are assumed to be time-invariant, so that they can be estimated from data using e.g., EM [GH96a]. One of the main adva... - In Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing , 2000 "... Most HMM-based speech recognition systems use Gaussian mixtures as observation probability density functions. An important goal in all such systems is to improve parsimony. One method is to adjust the type of covariance matrices used. In this work, factored sparse inverse covariance matrices are int ..." Cited by 38 (10 self) Add to MetaCart Most HMM-based speech recognition systems use Gaussian mixtures as observation probability density functions. An important goal in all such systems is to improve parsimony. One method is to adjust the type of covariance matrices used. In this work, factored sparse inverse covariance matrices are introduced. Based on Í �Í factorization, the inverse covariance matrix can be represented using linear regressive coefficients which 1) correspond to sparse patterns in the inverse covariance matrix (and therefore represent conditional independence properties of the Gaussian), and 2), result in a method of partial tying of the covariance matrices without requiring non-linear EM update equations. Results show that the performance of full-covariance Gaussians can be matched by factored sparse inverse covariance Gaussians having significantly fewer parameters. 1. - In Proceedings of Artificial Intelligence and Statistics , 1999 "... Probabilistic model-based clustering, based on finite mixtures of multivariate models, is a useful framework for clustering data in a statistical context. This general framework can be directly extended to clustering of sequential data, based on finite mixtures of sequential models. In this paper we ..." Cited by 28 (1 self) Add to MetaCart Probabilistic model-based clustering, based on finite mixtures of multivariate models, is a useful framework for clustering data in a statistical context. This general framework can be directly extended to clustering of sequential data, based on finite mixtures of sequential models. In this paper we consider the problem of fitting mixture models where both multivariate and sequential observations are present. A general EM algorithm is discussed and experimental results demonstrated on simulated data. The problem is motivated by the practical problem of clustering individuals into groups based on both their static characteristics and their dynamic behavior. 1 Introduction and Motivation Consider the following problem. We have a set of individuals (a random sample from a larger population) whomwe would like to cluster into groups based on observational data. For each individual we can measure characteristics which are relatively static (e.g., their height, weight, income, age, sex, etc)... - 2004 Black, Paul E. “Markov Chain.” National Institute of Standards and Technology , 2002 "... Abstract. We define the problem of inferring a “mixture of Markov chains ” based on observing a stream of interleaved outputs from these chains. We show a sharp characterization of the inference process. The problems we consider also has applications such as gene finding, intrusion detection, etc., ..." Cited by 12 (0 self) Add to MetaCart Abstract. We define the problem of inferring a “mixture of Markov chains ” based on observing a stream of interleaved outputs from these chains. We show a sharp characterization of the inference process. The problems we consider also has applications such as gene finding, intrusion detection, etc., and more generally in analyzing interleaved sequences. 1 - Proc. 16th European Conf. Machine Learning, Lecture Notes in Computer Science , 2005 "... Abstract. Ensemble classifiers combine the classification results of several classifiers. Simple ensemble methods such as uniform averaging over a set of models usually provide an improvement over selecting the single best model. Usually probabilistic classifiers restrict the set of possible models ..." Cited by 10 (0 self) Add to MetaCart Abstract. Ensemble classifiers combine the classification results of several classifiers. Simple ensemble methods such as uniform averaging over a set of models usually provide an improvement over selecting the single best model. Usually probabilistic classifiers restrict the set of possible models that can be learnt in order to lower computational complexity costs. In these restricted spaces, where incorrect modelling assumptions are possibly made, uniform averaging sometimes performs even better than bayesian model averaging. Linear mixtures over sets of models provide an space that includes uniform averaging as a particular case. We develop two algorithms for learning maximum a posteriori weights for linear mixtures, based on expectation maximization and on constrained optimizition. We provide a nontrivial example of the utility of these two algorithms by applying them for one dependence estimators. We develop the conjugate distribution for one dependence estimators and empirically show that uniform averaging is clearly superior to BMA for this family of models. After that we empirically show that the maximum a posteriori linear mixture weights improve accuracy significantly over uniform aggregation. - In Joint 13th International Conference on Artificial Neural Network (ICANN-2003) and 10th International Conference on Neural Information Processing (ICONIP-2003), Long paper, Lecture Notes in Computer Science , 2003 "... The Naive Bayesian (NB) network classifier, a probabilistic model with a strong assumption of conditional independence among features, shows a surprisingly competitive prediction performance even when compared with some state-of-the-art classifiers. With a looser assumption of conditional independ ..." Cited by 3 (1 self) Add to MetaCart The Naive Bayesian (NB) network classifier, a probabilistic model with a strong assumption of conditional independence among features, shows a surprisingly competitive prediction performance even when compared with some state-of-the-art classifiers. With a looser assumption of conditional independence, the Semi-Naive Beyesian (SNB) network classifier is superior to NB classifiers when features are combined. However, the problem for SNB is that its structure is still strongly constrained which may generate inaccurate distributions for some datasets. A natural progression to improve SNB is to extend it using the mixture approach. However, in obtaining the final structure, traditional SNBs use the heuristic approaches to learn the structure from data locally. On the other hand, ExpectationMaximization (EM) method is used in the mixture approach to obtain the structure iteratively. The extension is difficult to integrate the local heuristic into the maximization step since it may not convergence. In this paper we firstly develop a Bounded Semi-Naive Bayesian network (B-SNB) model, which contains the restriction on the number of variables that can be joined in a combined feature. As opposed to local property of the traditional SNB models, our model enjoys a global nature and maintains a polynomial time cost. Overcoming the difficulty of integrating SNBs into the mixture model, we then propose an algorithm to extend it into a finite mixture structure, named Mixture of Bounded Semi-Naive Bayesian network (MBSNB). We give theoretical derivations, outline of the algorithm, analysis of algo- rithm and a set of experiments to demonstrate the usefulness of MBSNB in some classification tasks. The novel finite MBSNB network shows good speed up, ability to converge and ... - in Proceedings of International Joint Conference on Neural Networks (IJCNN , 2004 "... Various iterative refinement clustering methods are dependent on the initial state of the model and are capable of obtaining one of their local optima only. Since the task of identifying the global optimization is NP-hard, the study of the initialization method towards a sub-optimization is of great ..." Cited by 3 (0 self) Add to MetaCart Various iterative refinement clustering methods are dependent on the initial state of the model and are capable of obtaining one of their local optima only. Since the task of identifying the global optimization is NP-hard, the study of the initialization method towards a sub-optimization is of great value. This paper reviews the various cluster initialization methods in the literature by categorizing them into three major families, namely random sampling methods, distance optimization methods, and density estimation methods. In addition, using a set of quantitative measures, we assess their performance on a number of synthetic and real-life data sets. Our controlled benchmark identifies two distance optimization methods, namely SCS and KKZ, as complements of the K-Means learning characteristics towards a better cluster separation in the output solution.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=187671","timestamp":"2014-04-16T09:04:14Z","content_type":null,"content_length":"41750","record_id":"<urn:uuid:051bfb55-db1a-4fb6-a4f2-4d9c592635f2>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00383-ip-10-147-4-33.ec2.internal.warc.gz"}
SSC IBPS Competitive Exams Updates on MeriView.in Dear readers, this time we will be discussing about shortcut formulas for profit and loss questions. Profit and loss questions are asked in mostly all types of examinations including competitive exams and entrance exams. So for qualifying these examinations, we need to solve these question as quickly as possible. Here are a few shortcut formulas for profit and loss. Remember them on your fingertips to make you score more in exams. Here is a Full set of shortcut formulas for profit and loss questions. Profit = SP-CP where SP= selling price CP= cost price Profit %=(( Profit*100)/CP)% SP = [(100+Profit%)/100]*CP Loss = CP-SP Loss %=((Loss*100)/CP)% SP = [(100-Loss%)/100]*CP all the shortcut tips and tricks are discussed here · A man losses x% when he sells an item for z. if he wants to make a profit of y%, then SP= ((100+y)z)/(100-x) SSC CGL 2014 BOOKS and STUDY MATERIAL learn HOW TO SOLVE 2,3,4 STATEMENT SYLLOGISM QUICKLY AND EASILY USING VENN DIAGRAMS · A man sells two items at the same price P and he make a profit of x on first item and a loss of y on second item, then Profit/loss %= (100(x-y)-2xy)/(200+x-y) If the result is +ve, then there is profit % but if the result is negative, then there is loss %. · If the selling price of two items is same but one item is sold at x% profit and other is sold at x% loss, then In that case, there is always loss and Loss %= (x^2)/100 If on selling an item for x, a person gets profit equal to the loss if he sells it for y, then cost price of that item will be CP=(x+y)/2 · If on selling an item for x, a person gets profit equal to the loss if he sells it for y. then if that person wants to make z% profit on that item, then he should sell the item at SP= ((x+y)(100+z))/200 · If A sells an item on x% profit to B, B sells the item on y% loss to C, C sells the item on z% profit to D for S money. Then selling price of item by A will be If the result comes +, then there is profit. If the result comes -, then there is loss. · If a person buys y goods for x and sells y goods for x, then the profit/loss % of that person will be Profit/loss %= 100(Y^2-x^2)/(x^2) ……………..(x^2 mean x squared) · If a person buys A goods for B and sells C goods for D, then the profit/loss % will be Profit/loss %= (100(AD-BC))/BC · If a person sells an item on cost price but in place of giving 1 kg of item, he gives only 960 grams by using wrong techniques, then the profit % of that person will be Profit%= (100(actual weight- wrong weight))/wrong weight · If an item is sold at 10% profit rather that 16% profit and the person who sells it gets 200 (money) less on doing this. Then the cost price of that item will be · A person sells an item for x% profit. If he sells it for z money more, he would make a profit of y%, then · If a corrupted shopkeeper makes x% while purchasing an item and y% while selling an item, then his profit %age will be Profit %age= x+y+(xy/100) If x=y then Profit %age= 2x+(x^2/100) · A dishonest shopkeeper sells an item for 10% profit but he makes a foul by giving that item 20% less by weight. So the profit % he makes will be Profit %={100(profit % + % less by weight)}/100-% less by weight) So there is the set of shortcut formulas for profit and loss questions to help you easily solve them. Stay updated
{"url":"http://www.meriview.in/2013/09/shortcut-formulas-for-profit-and-loss.html","timestamp":"2014-04-18T16:41:35Z","content_type":null,"content_length":"98166","record_id":"<urn:uuid:dbba976c-bd2b-4849-8772-c1e93579f508>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
Lukas Theussl Triumf, Vancouver, Canada Generalized parton distributions of the pion in a Bethe-Salpeter approach Generalized parton distribution functions (GPDs) are calculated in a field theoretic formalism using a covariant Bethe-Salpeter approach for the determination of the bound-state wave function. The procedure is first described in an exact calculation in scalar Electrodynamics, and then extended to the Nambu-Jona-Lasinio model, a realistic theory of the pion. In both cases we demonstrate that all important features required by general physical considerations, like symmetry properties, sum rules and the polynomiality condition, are explicitly verified. Back to the theory seminar page.
{"url":"http://www.phy.anl.gov/theory/semabstracts04/theussl.html","timestamp":"2014-04-20T18:59:54Z","content_type":null,"content_length":"1064","record_id":"<urn:uuid:164f86e3-a45b-414b-9128-9becc096b644>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
Control operators tryAwait :: Monad m => Pipe a b m (Maybe a)Source Like await, but returns Just x when the upstream pipe yields some value x, and Nothing when it terminates. Further calls to tryAwait after upstream termination will keep returning Nothing, whereas calling await will terminate the current pipe immediately. forP :: Monad m => (a -> Pipe a b m r) -> Pipe a b m ()Source Execute the specified pipe for each value in the input stream. Any action after a call to forP will be executed when upstream terminates. ($$) :: Monad m => Pipe x a m r' -> Pipe a y m r -> Pipe x y m (Maybe r)Source Connect producer to consumer, ignoring producer return value. Folds are pipes that consume all their input and return a value. Some of them, like fold1, do not return anything when they don't receive any input at all. That means that the upstream return value will be returned instead. Folds are normally used as Consumers, but they are actually polymorphic in the output type, to encourage their use in the implementation of higher-level combinators. fold :: Monad m => (b -> a -> b) -> b -> Pipe a x m bSource A fold pipe. Apply a binary function to successive input values and an accumulator, and return the final result. fold1 :: Monad m => (a -> a -> a) -> Pipe a x m aSource A variation of fold without an initial value for the accumulator. This pipe doesn't return any value if no input values are received. List-like pipe combinators drop :: Monad m => Int -> Pipe a a m rSource Remove the first n values from the stream, then act as an identity. takeWhile :: Monad m => (a -> Bool) -> Pipe a a m aSource Act as an identity until as long as inputs satisfy the given predicate. Return the first element that doesn't satisfy the predicate. dropWhile :: Monad m => (a -> Bool) -> Pipe a a m rSource Remove inputs as long as they satisfy the given predicate, then act as an identity. Other combinators
{"url":"http://hackage.haskell.org/package/pipes-core-0.1.0/docs/Control-Pipe-Combinators.html","timestamp":"2014-04-16T08:38:37Z","content_type":null,"content_length":"19345","record_id":"<urn:uuid:68c93c1b-87fe-45bd-a911-b368033f7b86>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
Implicit Differentiation March 24th 2008, 09:18 AM #1 Implicit Differentiation If you could show me the steps as well, that would be great. I'm totally lost: find dy/dx a) x^2 y^2 = x^3-5y b) x^4 + 3 x^2 y^3 - y^2 = 5 c) x^3 - 6y^2 = 10 thanks so much for your help part c I'll work out the last one yeah here we go! $x^3 - 6y^2 = 10<br /> <br />$ The Key point here is that $y=f(x)$ and that we need to use the chain rule for the derivative. $\frac{d}{dx}x^3 - \frac{d}{dx}6y^2 = \frac{d}{dx}10$ so taking the derivative we get... $3x^2-12y \cdot \frac{dy}{dx}=0$ using the chain rule becuase y is a functin of x. now we solve for the derivative $12y \cdot\frac{dy}{dx}=3x^2 \iff \frac{dy}{dx}=\frac{3x^2}{12}y=\frac{x^2}{4y}$ March 24th 2008, 09:39 AM #2
{"url":"http://mathhelpforum.com/calculus/31898-implicit-differentiation.html","timestamp":"2014-04-20T12:46:43Z","content_type":null,"content_length":"33553","record_id":"<urn:uuid:1125ac57-df18-417d-9465-6ef25338b3a3>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
{- | Sampling example with continuous distributions Continuous networks can't be handled by any of the functions defined for the discrete networks. So, instead of using exact inference algorithms like the junction trees, sampling method have to be used. In this example, we want to estimate a parameter which is measured by noisy sensors. There are 'nbSensors' available. They are described with a 'normal' distribution centered on the value of the unknown parameters and with a standard deviation of 0.1. The unknown parameter is described with a 'uniform' distribution bounded by 1.0 and 2.0. First, we describe the sensor: sensor :: 'DN' -> 'CNMonad' 'DN' sensor p = do 'normal' () p 0.1 It is just a 'normal' distribution. The mean of this distribution is the parameters p. This parameter has special type 'DN'. All expressions used to build the continuous bayesian network are using values of type 'DN'. A value of type 'DN' can either represent a constant, a variable or an expression. If the sensor was biased, we could write: 'normal' () (p + 0.2) 0.1 The Bayesian network describing the measurement process is given by: test = 'runCN' $ do a <- 'uniform' \"a\" 1.0 2.0 -- Unknown parameter sensors <- sequence (replicate 'nbSensors' (sensor a)) return (a:sensors) We are connecting 'nbSensors' nodes corresponding to the 'nbSensors' measurements. In real life it can either be different sensors or the same one used several times (assuming the value of the parameter is not dependent on time). Now, as usual in all the examples of this package, we get the bayesian graph and a list of variables used to compute some posterior or define some evidences debugcn = do let ((a:sensors), testG) = test Then, we generate some random measurements and create the evidences g <- create measurements <- sequence . replicate nbSensors $ (MWC.normal 1.5 0.1 g) let evidence = zipWith (=:) sensors measurements Evidence has type 'CVI' and is created with the assigment operator '=:' . Now, we generate some samples to estimate the posterior distributions. n <- 'runSampling' 10000 200 ('continuousMCMCSampler' testG evidence) This function is generating a sequence of graphs ! We are not interested in the sensor values. They are known and fixed since they have been measured. So, we extract the value of the parameter. let samples = map (\g -> 'instantiationValue' . fromJust . 'vertexValue' g $ ('vertex' a)) n And with the samples for the parameters we can compute an histogram and get an approximation of the posterior. let samples = map (\g -> 'instantiationValue' . fromJust . 'vertexValue' g $ ('vertex' a)) n h = 'histogram' 6 samples print h We see in the histogram that the estimated value is around 1.5. module Bayes.Examples.ContinuousSampling( , sensor , test , debugcn ) where import Bayes import Bayes.Continuous import qualified System.Random.MWC.Distributions as MWC(normal) import System.Random.MWC(GenIO,create) import Data.Maybe(fromJust) nbSensors = 10 sensor :: DN -> CNMonad DN sensor p = do normal () p 0.1 test = runCN $ do a <- uniform "a" 1.0 2.0 -- Unknown parameter sensors <- sequence (replicate nbSensors (sensor a)) return (a:sensors) debugcn = do let ((a:sensors), testG) = test g <- create measurements <- sequence . replicate nbSensors $ (MWC.normal 1.5 0.1 g) let evidence = zipWith (=:) sensors measurements n <- runSampling 10000 200 (continuousMCMCSampler testG evidence) let samples = map (\g -> instantiationValue . fromJust . vertexValue g $ (vertex a)) n h = histogram 6 samples print h
{"url":"http://hackage.haskell.org/package/hbayes-0.5/docs/src/Bayes-Examples-ContinuousSampling.html","timestamp":"2014-04-23T07:59:49Z","content_type":null,"content_length":"12623","record_id":"<urn:uuid:962179e2-232a-4c88-bcb4-2e8e873075c3>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: ALGEBRA in Calculus proof... Can you help me understand the basis for bringing delta x upstairs into only the numerator in this step of a problem? Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4eee750fe4b0367162f5e433","timestamp":"2014-04-16T08:12:06Z","content_type":null,"content_length":"55697","record_id":"<urn:uuid:2c83c43c-7fed-4388-89b3-007b632d1e06>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
chris dixon's blog The P vs NP problem One of the great unsolved questions in computer science is the P vs NP problem. It is one of the seven Millennium Prize Problems - if you solve one of them, you get $1 million and become really famous among mathematicians and computer scientists. Here’s my non-technical interpretation of the essence of the P vs NP problem: Can every answer that can be feasibly verified also be feasibly calculated? What I am calling “feasible” is what computer scientists call algorithms that can run “polynomial” as opposed to “exponential” time. There are at least four possible outcomes to the attempts to solve this problem: 1) the current situation continues – no proof of anything is found, 2) P=NP is proved true, 3) P=NP is proved false, 4) it is proved that it’s impossible to prove P=NP to be true or false. If P=NP were proved true, there would be many serious real-world consequences. All known encryption schemes rely on the fact that prime factors of large numbers are something that can be feasibly verified but not calculated. If P=NP, that means there would also be feasible ways to calculate prime factors, and hence decrypt codes without their private keys. So if someone does prove P=NP, he or she should probably inform authorities before publishing the proof and all hell breaks loose (thanks Matt for this observation – you could also imagine a lot of conspiracy theories about what happens to scientists who try to prove P=NP..!) Most computer scientists seem to suspect P does not equal NP. MIT computer scientist Scott Aaronson gives informal arguments against P=NP in this entertaining blog post, including this philosophical If P=NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in “creative leaps,” no fundamental gap between solving a problem and recognizing the solution once it’s found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss; everyone who could recognize a good investment strategy would be Warren Buffett. It’s possible to put the point in Darwinian terms: if this is the sort of universe we inhabited, why wouldn’t we already have evolved to take advantage of it? He follows up with a much longer essay (which I found really interesting but ultimately unconvincing) on the philosophical implications of computational complexity (the field of computer science that studies questions like P vs NP). 18 thoughts on “The P vs NP problem” 1. I liked Notch’s quantum-suicide-computer solution: http://notch.tumblr.com/post/10526684412/o-1-np-solving-using-the-mqsc EDIT: In addition, to give people some context, NP problems can be seemingly commonplace. The “travelling salesman” problem of calculating the shortest route between many points is one, and has many practical applications in logistics. It turns out humans may even be better at solving NP than computers. When presented with “travelling salesman” problems, most human test subjects can draw a better solution, faster, than most computer algorithms. 2. @nateberkopec:disqus be careful not to confuse solving NP-complete problems with executing a “good enough” heuristic solution. 3. I’m curious about your thoughts on the philosophical implications? What did you find unconvincing for instance? 4. 1. I think it’s a leap to assume Creating : verifying factoring primes :: creating : verifying beautifulness of Mozart’s music. 2. In the longer essay, I found many of the arguments unconvincing. E.g He tried to reconcile Searle’s “Chinese Room” experiment by saying it is infeasible. Even if infeasible, the very idea that a lookup process could outwardly perfectly simulate a true thinking process should make us question whether simply appearing to think == actually thinking. 5. Yeah that bugs me too. Sure it’s much sexier to talk about appreciating a symphony versus composing one, but this leads to lots of badly written popular science articles. There’s a whole world of detail wrapped up in the words “feasible,” “verified” and “computed.” 6. I wish I could understand what you are talking about. It sounds fascinating. 7. hm. I was trying to take technical concepts (which I only partially understand) and make them understandable to non-technical people. If I didn’t do that, then this post failed. 8. Fascinating, but I didn’t get the relationship with startups. 9. ha, no relationship with startups. this is just my personal blog. i thought about writing CS stufff elsewhere but then thought it was a better to just write what i’m interested in here. 10. Um. P=NP. I proved it exactly 3 years ago. I just lost the napkin. 11. Note that proving it true or false has nothing to do with which world we inhabit in. The continuum hypothesis, for example, is completely independent from the rest of Set Theory. One can do mathematics with our without it. Which one corresponds to our “universe”? Establishing (4) in your post could come in the form of proving that P=NP is independent of ZFC (i.e. Set Theory + Continuum Hypothesis). Then we’d still have to figure which universe we live Perhaps the most accessible example of this sort of thing is Euclid’s 5th Postulate, also known as the parallel postulate. People tried for two thousand years to prove this from the other postulates, but failed to do so. Turns out it is “independent” from the other axioms — it is impossible to prove or disprove it… but we still had to figure out what the geometry of our universe looks like! Although the 5th postulate is mathematically independent from the rest, for some given geometries it is either true or false. It turns out that the most natural formulation of the geometry of space time, according to Einstein’s special relativity theory is not in Euclidean space, but Minkowski Space. 12. Thanks! 13. I wish more startup bloggers would go “off topic” like this! Startups are about solving real world problems. Doing that well means thinking about interesting things outside of startup-land. 14. Yeah, and just because an algorithm exists doesn’t mean evolution has programmed it into human brains. It’s pretty obvious in fact that human brains can’t solve NP problems feasibly. 15. In particular, from an AI perspective, it seems fairly clear that the difficulty of doing things like producing symphonies has relatively little to do with NP-hard problems. SAT-solving is NP-hard and yet computers are very good at doing it, much better than humans are. Why are computers not good at composing symphonies, not even really beginner-level good? NP-completeness can’t explain why symphonies are harder than SAT; difficulty of formalization I think is closer to the mark. Aaronson’s post spurred me to write some more in that vein a few months ago. 16. I think I read somewhere that public key encryption is not really NP-hard. So the advent of quantum computing could possibly render public keys useless for encrypting communications without necessarily proving P=NP. 17. Search and data crunching problems can easily be in NP in the general case, but with narrower special cases that fall into P. Either the problem is constrained to avoid NP, or simply that algorithms may be possible that produce pretty good solutions for real-world data, even if they are not optimal for the general case. There are plenty of opportunities for startups finding better ways to extract value from ever larger data sets. On the other hand, if your startup falls on the wrong side of the P/NP boundary, you might craft a beautiful prototype that cannot be scaled. 18. Pingback: Trackback
{"url":"http://cdixon.org/2012/02/21/the-p-vs-np-problem/","timestamp":"2014-04-20T21:02:25Z","content_type":null,"content_length":"64844","record_id":"<urn:uuid:01fb2d58-6915-48f3-b289-0ec67c562784>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
2004 Week 9 Strategy Review A model-based approach to football strategy. 2004 Week 9 Strategy Review In this article we discuss some notable coaching decisions from selected games. Many of the analyses use the footballcommentary.com Dynamic Programming Model . Minnesota at Indianapolis (11/8/2004) [Recap] With 10:49 remaining in the 3rd quarter, Minnesota's Nate Burlson returned a punt for a touchdown. The Vikings still trailed 14-12 before the try. Minnesota coach Mike Tice elected to attempt a two-point conversion. According to the Model , if Minnesota chooses to kick the extra point their probability of winning the game is 0.421. If they go for two, their probability of winning is 0.473 if they make it and 0.394 if they fail. If we assume that the probability of success is 0.4, then Minnesota's probability of winning if they go for two is 0.4 × 0.473 + (1 − 0.4) × 0.394 = 0.426. In this case, then, the choice makes little difference. Nevertheless, it is a bit better to go for two, and that's true as long as the probability of success exceeds 0.35. (The same result can be found by consulting the Chart .) Minnesota's squandered timeouts and poorly executed two-minute drill at the end of the first half, which forced them to settle for a field goal, were justifiably criticized by the commentators. However, where the Vikings really wasted time was before the two-minute warning. They took possession at their own 8 yard line with 5:16 left in the half. From that field position, nearly half of drives that culminate in touchdowns use more than 5:16, so if the Vikings were interested in scoring, they should have been trying to conserve time from the outset. During that possession, and prior to the two-minute warning, the Vikings ran six plays for which the game clock was running at the time of the snap. For those six plays, the average real time that elapsed from the end of the previous play was 34 seconds. If the Vikings had been paying attention, they could easily have run two more plays before the two-minute warning. With 1:53 left in the game, and the score 28-28, Peyton Manning's left-handed shovel pass to Edgerrin James resulted in an Indianapolis first down at the Minnesota 15 yard line. Because the Vikings had only one timeout, the commentators suggested that Minnesota should let Indianapolis score. Even if Minnesota could be sure that the Indianapolis ball-carrier would take the bait, it's not obvious that it's in Minnesota's interest. It's true that Mike Vanderjagt isn't likely to miss from 36 yards (the average NFL place-kicker makes about 83% from that distance), but the Vikings aren't likely to score a touchdown in the last 1:40 either. However, the real risk to letting Indianapolis score is that the ball-carrier might instead down himself at the one yard line, turning the game-winning field goal into a chip shot. So, if both teams are rational, the correct strategy for the Vikings is to try to prevent the Colts from getting better field-goal position. That's exactly what they did. Finally, the Colts irrationally stopped the clock at 0:06 in preparation for the game-winning field goal. Since it was 4th down, there was no possible advantage to leaving so much time. Predictably , the field goal left 0:02 on the clock, and forced the Colts to kick off to the Vikings before the game ended. Jets at Buffalo (11/7/2004) [Recap] The Jets won the coin toss prior to the game. Herman Edwards then made the puzzling decision to defend the west goal rather than receive the kickoff, and Buffalo elected to receive. The wind was from the southwest at about 25 miles per hour. If the coin toss were for the start of overtime, then as we discussed in a previous article , there would be a threshold for the wind above which it would be correct to take the wind rather than receive the kickoff. It's not clear if the wind exceeded that threshold in this game; the ease with which Doug Brien made a 41-yard field goal into the wind in the 2nd quarter gives us doubts. But the point is, the start of the game is not overtime. Because the teams change ends after the 1st and 3rd quarters, each team gets the wind for half the game regardless of who kicks off for either half. Buffalo, naturally, elected to receive to start the second half. Therefore, the effect of the Jets' decision not to receive the opening kickoff was simply to reduce their expected number of possessions in the game. According to the Model , coach Herman Edwards' decision lowered his team's probability of winning the game from 0.5 to 0.464. With 12:40 left in the 1st quarter, and no score, Buffalo faced 4th and six inches at their own 41 yard line. The Bills elected to punt. Normally we assume that a punt (other than a pooch kick) is expected to net 40 yards. But because of the stiff headwind, we will assume that the punt is expected to net 30 yards, in which case Buffalo's probability of winning if they punt is 0.499. If instead they go for the first down, their probability of winning is 0.552 if they make it and 0.465 if they fail. With six inches to go, the probability of making it should be about 0.75. Therefore, Buffalo's probability of winning if they go for the first down is 0.75 × 0.552 + (1 − 0.75) × 0.465 = 0.53. So Buffalo should have gone for it, and in fact going for it is preferred as long as the probability of picking up the first down exceeds 0.39. Cleveland at Baltimore (11/7/2004) [Recap] With 11:33 left in the 3rd quarter, Baltimore led 12-10, and faced 4th down and 1 yard to go at the Cleveland 45 yard line. The Ravens chose to punt. According to the Model , this choice give the Ravens a 0.585 probability of winning the game, assuming that Cleveland's expected starting point is their own 10 yard line. If instead the Ravens go for it, their probability of winning is 0.648 if they pick up the first down but 0.523 if they are stopped. Assuming that they have a 0.7 probability of making the first down, their probability of winning if they go for it is 0.7 × 0.648 + (1 − 0.71) × 0.523 = 0.611. It follows that Baltimore would have been better off going for the first down. One can check that going for it is better than punting as long as the probability of making the first down exceeds 0.5. With 10:19 left in the 4th quarter, the Ravens punted, and the officials ruled that Baltimore downed the ball inside the Cleveland 1 yard line. The Browns challenged at 10:05, claiming that it was a touchback. Cleveland led 13-12 at the time. According to the Model , Cleveland's probability of winning is 0.6 if the ruling on the field stands, but 0.66 if it is reversed and the Browns start at their 20 yard line. From the view of the play that was available when Cleveland had to make the decision to challenge, it seemed to us that there was about a 0.5 probability that the ruling would be reversed. So the expected gain from challenging was about a 0.03 increase in the probability of winning, before taking into account the costs, which are the loss of a challenge and the possible loss of a timeout. With only 8:05 before the two-minute warning and Cleveland still leading, if feels unlikely that these costs could be worth 0.03. We think Cleveland was right to challenge. When Baltimore scored a touchdown with 7:03 left in the game to go ahead 18-13 before the try, they elected to go for two. Assuming a success probability of 0.4, Baltimore's probability of winning if they go for two is 0.818, versus 0.79 if they kick, so Baltimore did the right thing. In fact, it turns out that it's right to go for two in this situation as long as the probability of success exceeds 0.1. (The same result can be found by consulting the Chart .) The latest in a series of counterproductive individual achievements came with 0:45 left and the Ravens leading 20-13, when Baltimore's Ed Reed intercepted Jeff Garcia's pass 6 yards deep in his own end zone and ran 106 yards for a touchdown and the longest interception return in NFL history. Cleveland had no timeouts, so if Reed had taken a touchback the game would have been over. At least he slowed down as he approached the goal line. ESPN showed a replay of Baltimore coach Brian Billick's reaction during Reed's runback. We were hoping to hear Billick shouting "Get down! Get down!" but he was actually shouting "Run it back! Run it New Orleans at San Diego (11/7/2004) [Recap] With 0:13 left in the first half, San Diego had 1st and 10 at the New Orleans 23 yard line. The Chargers led 20-7, and had no timeouts. They had to decide whether to try another play before halftime, or kick an immediate field goal. The Chargers chose to run another play, but Drew Brees was sacked, and the half ended. We will base our analysis on the Model , which says that when the Chargers kick off to start the second half, their probability of winning the game will be either 0.877, 0.928, or 0.966, according to whether their lead is 13, 16, or 20 points. NFL place-kickers make about 78% from 41 yards. Therefore, if San Diego chooses to kick immediately, their probability of winning the game is 0.78 × 0.928 + (1 − 0.78) × 0.877 = 0.917. If the Chargers attempt another play, it really has to be a pass into the end zone, because time will expire if the ball carrier is tackled inbounds short of the goal line. Let p denote the probability of a touchdown, and let q denote the probability of a sack or interception. The probability of an incomplete pass is then 1 − p − q. If the pass is incomplete, San Diego will attempt a field goal on the next play. It follows that in order for trying another play to be better than kicking immediately, we must have 0.966p + 0.877q + 0.917(1 − p − q) > 0.917, which reduces to p > 0.82 q. In short, the probability of a touchdown has to be almost as high as the combined probability of a sack or an interception. From the 23 yard line, when the opponents know you're passing, that feels like a close call. On this decision, we wouldn't criticize the coach either way. Oakland at Carolina (11/7/2004) [Recap] With 14:11 left in the 2nd quarter, and Oakland leading 3-0, the Raiders had the ball, 4th and goal at the Carolina 1 yard line. Oakland elected to go for the touchdown. For simplicity, assume that a chip-shot field goal would be a sure thing. Then according to the Model , if Oakland kicks their probability of winning is 0.641. If they go for it, their probability of winning is 0.754 if they score and 0.59 if they are stopped. Assuming that the probability of scoring is 0.57, it follows that Oakland's probability of winning if they go for it is 0.57 × 0.754 + (1 − 0.57) × 0.59 = 0.683. So Oakland's decision to go for it was correct by a significant margin. One can check that going for the TD is preferred as long as the probability of scoring exceeds 0.31 The endgame was almost surreal. With the score 24-24, and 1:18 left, Oakland had 1st and goal at the Carolina 3 yard line. Oakland had three timeouts, Carolina none. It's an intellectually interesting exercise to determine whether it's best for Oakland to take the clock down to 0:02 before calling timeout and kicking the field goal, to ensure that time expires on the kick, or to leave 0:04 on the clock, to allow for a re-kick in case of a bad snap. But let's not split hairs. If Oakland simply takes a knee twice, centering the ball between the hashmarks, and calls timeout at 0:02, Carolina can win only if the field goal misses (probability 0.02) and then they win in overtime (probability 0.5). Oakland therefore reduces Carolina's probability of winning to 0.02 × 0.5 = 0.01. This plan seems straightforward enough, but instead of following it, Oakland coach Norv Turner had his offense try to score a touchdown on both 1st and 2nd down. Carolina coach John Fox, evidently convinced that it's better to give than to receive, had his defense try (successfully) to stop both attempts when he should have told them to step aside. (Stepping aside can't hurt in this case, in contrast to the situation in the Minnesota-Indianapolis game .) Fox reportedly considered letting Oakland score, but decided — we're not making this up — that it was a low-percentage play. Oakland finally saw the light and set up for a field goal. Even then, Oakland erred by stopping the clock with 0:09 left, guaranteeing that they would have to kick off to Carolina following the field goal, and virtually doubling Carolina's probability of winning the game. Houston at Denver (11/7/2004) [Recap] With 13:15 remaining in the 2nd quarter, Houston trailed 7-0, and faced 4th and inches at the 50 yard line. Houston coach Dom Capers called for a quarterback sneak, but David Carr was stopped short, and Denver took over on downs. According to the Model , Houston's probability of winning if they punt is 0.27. If the Texans go for it, their probability of winning is 0.31 if they make it but 0.228 if they are stopped. On 4th and inches at midfield, the probability of picking up the first down is about 0.75. Therefore, Houston's probability of winning if they go for it is 0.75 × 0.31 + (1 − 0.75) × 0.228 = 0.29. Notwithstanding the bad result, then, Capers was right to go for it. One can check that going for it is preferable to punting as long as the probability of success exceeds 0.52. Copyright © 2004 by William S. Krasker
{"url":"http://www.footballcommentary.com/analysis2004week9.htm","timestamp":"2014-04-18T05:42:26Z","content_type":null,"content_length":"17950","record_id":"<urn:uuid:608210a4-7fdd-404c-910e-a317cd95864a>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
Pacific Institute for the Mathematical Sciences - PIMS Happy New Year from PIMS! Here is what's happening around the sites... January 2013 January 9 - 2013 Mathematical Institutes Open House - MPE 2013 Launch in San Diego, California January 10 - Computer Science Distinguished Lecture Series: Martin Rinard at the University of British Columbia January 14 - CMS/PIMS/IAM Public Lecture: Margot Gerritsen at the University of British Columbia January 17 - how Does Google Google: Margot Gerritsen at the University of Calgary January 17 - Computer Science Distinguished Lecture Series: Elizabeth Mynatt at the University of British Columbia January 17-19 - Disease Dynamics 2013: Immunization, a true multi-scale problem at the University of British Columbia Former Chair of the PIMS Board passes on: Hugh Morris (1932-2012) CAIMS/PIMS Early Career Award - deadline for nominations is January 31. Please see for details. The Mathematics Community Unites to Focus on Global Issues Cultural Barriers Crunched by Numbers Bruce Reed announced as 2013 CRM/Fields/PIMS Prize recipient
{"url":"http://www.pims.math.ca/node/6902","timestamp":"2014-04-16T19:03:14Z","content_type":null,"content_length":"15877","record_id":"<urn:uuid:54f397d8-ef19-4788-b660-b6495c21b143>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
Round down a int number in c++ How can I round down an int number in c++? What I want to do is basically round down a number, for instance I know that the my numbers are always going to be 5 digit long, but I want them to be rounded down as follow. If----------Round down to 25335 = 25000 26458 = 26000 50465 = 50000 I tried floor(value) but I think this is just for floats. How do I do this in c++? Thanks a lot add 500 divide by 1000 multiply by 1000 kbw --> This is for rounding, not rounding down, just divide by 1000 and multiply by 1000 You guys are awesome! Thanks a lot. I'm always amazed about how quick you guys find the solution. Anomen - That's exactly what I found out after trying what kbw suggested, thank you for clarifying this. Thanks a lot Topic archived. No new replies allowed.
{"url":"http://www.cplusplus.com/forum/general/76439/","timestamp":"2014-04-18T13:27:50Z","content_type":null,"content_length":"8354","record_id":"<urn:uuid:9738320e-e6df-4fe7-866f-de21de8fa9a2>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
Silver Lake, WI Math Tutor Find a Silver Lake, WI Math Tutor ...Topics include: functions and graphing (linear, quadratic, logarithmic, exponential), complex numbers, systems of equations and inequalities, and relations. This can also include beginning trigonometry and probability and statistics. Geometry is unlike many other Math courses in that it is a spatial/visual class and deals minimally with variables and equations. 11 Subjects: including algebra 1, algebra 2, calculus, geometry ...In addition to MBA, I have a PhD in Engineering so I believe I am well qualified to teach mathematics courses as well as any business courses(undergraduate and MBA level) . I have several years of professional experience through my work history with several companies which adds to my business exp... 22 Subjects: including algebra 1, algebra 2, calculus, geometry I graduated magna cum laude with honors from Illinois State University with a bachelor's degree in secondary mathematics education in 2011. I taught trigonometry and algebra 2 to high school juniors in the far north suburbs of Chicago for the past two years. I am currently attending DePaul University to pursue my master's degree in applied statistics. 19 Subjects: including calculus, statistics, discrete math, GRE ...I am happy to work with anyone who is willing to work and am very patient with students as they try to understand new concepts. I have been in the Glenview area the past four years and have tutored high schoolers from Notre Dame, New Trier, GBS, GBN, Deerfield High, Loyola Academy and Woodlands ... 20 Subjects: including logic, geometry, trigonometry, precalculus I have a Masters degree in Chemistry. I can help students with Chemistry and Math (pre-calculus, differential equations). I have worked as a volunteer tutor and have helped people working towards their GEDs. I have worked as a graduate assistant when I was working towards my Masters degree in Chem... 13 Subjects: including trigonometry, prealgebra, precalculus, algebra 2 Related Silver Lake, WI Tutors Silver Lake, WI Accounting Tutors Silver Lake, WI ACT Tutors Silver Lake, WI Algebra Tutors Silver Lake, WI Algebra 2 Tutors Silver Lake, WI Calculus Tutors Silver Lake, WI Geometry Tutors Silver Lake, WI Math Tutors Silver Lake, WI Prealgebra Tutors Silver Lake, WI Precalculus Tutors Silver Lake, WI SAT Tutors Silver Lake, WI SAT Math Tutors Silver Lake, WI Science Tutors Silver Lake, WI Statistics Tutors Silver Lake, WI Trigonometry Tutors Nearby Cities With Math Tutor Bassett, WI Math Tutors Benet Lake Math Tutors Burlington, WI Math Tutors Camp Lake Math Tutors Kansasville Math Tutors New Munster Math Tutors Powers Lake, WI Math Tutors Richmond, IL Math Tutors Round Lake Heights, IL Math Tutors Salem, WI Math Tutors Trevor Math Tutors Twin Lakes, WI Math Tutors Union Grove, WI Math Tutors Wilmot, WI Math Tutors Woodworth, WI Math Tutors
{"url":"http://www.purplemath.com/silver_lake_wi_math_tutors.php","timestamp":"2014-04-18T11:10:29Z","content_type":null,"content_length":"24150","record_id":"<urn:uuid:d8069a40-c019-4c58-a1a4-f0e2973daf90>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
From Wikipedia, the free encyclopedia Aryabhata (IAST: Āryabhaṭa; Sanskrit: आर्यभटः) (476–550 CE) was the first in the line of great mathematician-astronomers from the classical age of Indian mathematics and Indian astronomy. His most famous works are the Aryabhatiya (499 CE, when he was 23 years old) and the Arya-siddhanta. [edit] Biography While there is a tendency to misspell his name as "Aryabhatta" by analogy with other names having the "bhatta" suffix, his name is properly spelled Aryabhata: every astronomical text spells his name thus,^[1] including Brahmagupta's references to him "in more than a hundred places by name".^[2] Furthermore, in most instances "Aryabhatta" does not fit the metre either.^[1] Aryabhata mentions in the Aryabhatiya that it was composed 3,600 years into the Kali Yuga, when he was 23 years old. This corresponds to 499 CE, and implies that he was born in 476 CE.^[1] Aryabhata provides no information about his place of birth. The only information comes from Bhāskara I, who describes Aryabhata as āśmakīya, "one belonging to the aśmaka country." While aśmaka was originally situated in the northwest of India, it is widely attested that, during the Buddha's time, a branch of the Aśmaka people settled in the region between the Narmada and Godavari rivers, in the South Gujarat–North Maharashtra region of central India. Aryabhata is believed to have been born there.^[1]^[3] However, early Buddhist texts describe Ashmaka as being further south, in dakshinapath or the Deccan, while other texts describe the Ashmakas as having fought Alexander, which would put them further north.^[3] It is fairly certain that, at some point, he went to Kusumapura for advanced studies and that he lived there for some time.^[4] Both Hindu and Buddhist tradition, as well as Bhāskara I (CE 629), identify Kusumapura as Pāṭaliputra, modern Patna.^[1] A verse mentions that Aryabhata was the head of an institution (kulapa) at Kusumapura, and, because the university of Nalanda was in Pataliputra at the time and had an astronomical observatory, it is speculated that Aryabhata might have been the head of the Nalanda university as well.^[1] Aryabhata is also reputed to have set up an observatory at the Sun temple in Taregana, Bihar.^[5] [edit] Other hypotheses It was suggested that Aryabhata may have been from Kerala, but K. V. Sarma, an authority on Kerala's astronomical tradition, disagreed^[1] and pointed out several errors in this hypothesis.^[6] Aryabhata mentions "Lanka" on several occasions in the Aryabhatiya, but his "Lanka" is an abstraction, standing for a point on the equator at the same longitude as his Ujjayini.^[7] Aryabhata is the author of several treatises on mathematics and astronomy, some of which are lost. His major work, Aryabhatiya, a compendium of mathematics and astronomy, was extensively referred to in the Indian mathematical literature and has survived to modern times. The mathematical part of the Aryabhatiya covers arithmetic, algebra, plane trigonometry, and spherical trigonometry. It also contains continued fractions, quadratic equations, sums-of-power series, and a table of sines. The Arya-siddhanta, a lost work on astronomical computations, is known through the writings of Aryabhata's contemporary, Varahamihira, and later mathematicians and commentators, including Brahmagupta and Bhaskara I. This work appears to be based on the older Surya Siddhanta and uses the midnight-day reckoning, as opposed to sunrise in Aryabhatiya. It also contained a description of several astronomical instruments: the gnomon (shanku-yantra), a shadow instrument (chhAyA-yantra), possibly angle-measuring devices, semicircular and circular (dhanur-yantra / chakra-yantra), a cylindrical stick yasti-yantra, an umbrella-shaped device called the chhatra-yantra, and water clocks of at least two types, bow-shaped and cylindrical.^[3] A third text, which may have survived in the Arabic translation, is Al ntf or Al-nanf. It claims that it is a translation by Aryabhata, but the Sanskrit name of this work is not known. Probably dating from the 9th century, it is mentioned by the Persian scholar and chronicler of India, Abū Rayhān al-Bīrūnī.^[3] [edit] Aryabhatiya Direct details of Aryabhata's work are known only from the Aryabhatiya. The name "Aryabhatiya" is due to later commentators. Aryabhata himself may not have given it a name. His disciple Bhaskara I calls it Ashmakatantra (or the treatise from the Ashmaka). It is also occasionally referred to as Arya-shatas-aShTa (literally, Aryabhata's 108), because there are 108 verses in the text. It is written in the very terse style typical of sutra literature, in which each line is an aid to memory for a complex system. Thus, the explication of meaning is due to commentators. The text consists of the 108 verses and 13 introductory verses, and is divided into four pādas or chapters: 1. Gitikapada: (13 verses): large units of time—kalpa, manvantra, and yuga—which present a cosmology different from earlier texts such as Lagadha's Vedanga Jyotisha(ca. 1st century BCE). There is also a table of sines (jya), given in a single verse. The duration of the planetary revolutions during a mahayuga is given as 4.32 million years. 2. Ganitapada (33 verses): covering mensuration (kṣetra vyāvahāra), arithmetic and geometric progressions, gnomon / shadows (shanku-chhAyA), simple, quadratic, simultaneous, and indeterminate equations (kuTTaka) 3. Kalakriyapada (25 verses): different units of time and a method for determining the positions of planets for a given day, calculations concerning the intercalary month (adhikamAsa), kShaya-tithi s, and a seven-day week with names for the days of week. 4. Golapada (50 verses): Geometric/trigonometric aspects of the celestial sphere, features of the ecliptic, celestial equator, node, shape of the earth, cause of day and night, rising of zodiacal signs on horizon, etc. In addition, some versions cite a few colophons added at the end, extolling the virtues of the work, etc. The Aryabhatiya presented a number of innovations in mathematics and astronomy in verse form, which were influential for many centuries. The extreme brevity of the text was elaborated in commentaries by his disciple Bhaskara I (Bhashya, ca. 600 CE) and by Nilakantha Somayaji in his Aryabhatiya Bhasya, (1465 CE). [edit] Mathematics [edit] Place value system and zero The place-value system, first seen in the 3rd century Bakhshali Manuscript, was clearly in place in his work. While he did not use a symbol for zero, the French mathematician Georges Ifrah argues that knowledge of zero was implicit in Aryabhata's place-value system as a place holder for the powers of ten with null coefficients^[8] However, Aryabhata did not use the brahmi numerals. Continuing the Sanskritic tradition from Vedic times, he used letters of the alphabet to denote numbers, expressing quantities, such as the table of sines in a mnemonic form.^[9] [edit] Approximation of pi Aryabhata worked on the approximation for pi (π), and may have come to the conclusion that π is irrational. In the second part of the Aryabhatiyam (gaṇitapāda 10), he writes: caturadhikam śatamaṣṭaguṇam dvāṣaṣṭistathā sahasrāṇām ayutadvayaviṣkambhasyāsanno vṛttapariṇāhaḥ. "Add four to 100, multiply by eight, and then add 62,000. By this rule the circumference of a circle with a diameter of 20,000 can be approached."^[10] This implies that the ratio of the circumference to the diameter is ((4+100)×8+62000)/20000 = 62832/20000 = 3.1416, which is accurate to five significant figures. It is speculated that Aryabhata used the word āsanna (approaching), to mean that not only is this an approximation but that the value is incommensurable (or irrational). If this is correct, it is quite a sophisticated insight, because the irrationality of pi was proved in Europe only in 1761 by Lambert.^[11] After Aryabhatiya was translated into Arabic (ca. 820 CE) this approximation was mentioned in Al-Khwarizmi's book on algebra.^[3] [edit] Mensuration and trigonometry In Ganitapada 6, Aryabhata gives the area of a triangle as tribhujasya phalashariram samadalakoti bhujardhasamvargah that translates to: "for a triangle, the result of a perpendicular with the half-side is the area."^[12] Aryabhata discussed the concept of sine in his work by the name of ardha-jya. Literally, it means "half-chord". For simplicity, people started calling it jya. When Arabic writers translated his works from Sanskrit into Arabic, they referred it as jiba. However, in Arabic writings, vowels are omitted, and it was abbreviated as jb. Later writers substituted it with jiab, meaning "cove" or "bay." (In Arabic, jiba is a meaningless word.) Later in the 12th century, when Gherardo of Cremona translated these writings from Arabic into Latin, he replaced the Arabic jiab with its Latin counterpart, sinus, which means "cove" or "bay". And after that, the sinus became sine in English.^[13] [edit] Indeterminate equations A problem of great interest to Indian mathematicians since ancient times has been to find integer solutions to equations that have the form ax + b = cy, a topic that has come to be known as diophantine equations. This is an example from Bhāskara's commentary on Aryabhatiya: Find the number which gives 5 as the remainder when divided by 8, 4 as the remainder when divided by 9, and 1 as the remainder when divided by 7 That is, find N = 8x+5 = 9y+4 = 7z+1. It turns out that the smallest value for N is 85. In general, diophantine equations, such as this, can be notoriously difficult. They were discussed extensively in ancient Vedic text Sulba Sutras, whose more ancient parts might date to 800 BCE. Aryabhata's method of solving such problems is called the kuṭṭaka (कुट्टक) method. Kuttaka means "pulverizing" or "breaking into small pieces", and the method involves a recursive algorithm for writing the original factors in smaller numbers. Today this algorithm, elaborated by Bhaskara in 621 CE, is the standard method for solving first-order diophantine equations and is often referred to as the Aryabhata algorithm.^[14] The diophantine equations are of interest in cryptology, and the RSA Conference , 2006, focused on the kuttaka method and earlier work in the Sulvasutras. [edit] Algebra In Aryabhatiya Aryabhata provided elegant results for the summation of series of squares and cubes:^[15] $1^2 + 2^2 + \cdots + n^2 = {n(n + 1)(2n + 1) \over 6}$ $1^3 + 2^3 + \cdots + n^3 = (1 + 2 + \cdots + n)^2$ [edit] Astronomy Aryabhata's system of astronomy was called the audAyaka system, in which days are reckoned from uday, dawn at lanka or "equator". Some of his later writings on astronomy, which apparently proposed a second model (or ardha-rAtrikA, midnight) are lost but can be partly reconstructed from the discussion in Brahmagupta's khanDakhAdyaka. In some texts, he seems to ascribe the apparent motions of the heavens to the Earth's rotation. He also treated the planet's orbits as elliptical rather than circular.^[16]^[17] [edit] Motions of the solar system Aryabhata appears to have believed that the earth rotates about its axis. This is indicated in the statement, referring to Lanka , which describes the movement of the stars as a relative motion caused by the rotation of the earth: "Like a man in a boat moving forward sees the stationary objects as moving backward, just so are the stationary stars seen by the people in Lanka (or on the equator) as moving exactly towards the west." [achalAni bhAni samapashchimagAni – golapAda.9] But the next verse describes the motion of the stars and planets as real movements: "The cause of their rising and setting is due to the fact that the circle of the asterisms, together with the planets driven by the provector wind, constantly moves westwards at Lanka." As mentioned above, Lanka (lit. Sri Lanka) is here a reference point on the equator, which was the equivalent of the reference meridian for astronomical calculations. Aryabhata described a geocentric model of the solar system, in which the Sun and Moon are each carried by epicycles. They in turn revolve around the Earth. In this model, which is also found in the Paitāmahasiddhānta (ca. CE 425), the motions of the planets are each governed by two epicycles, a smaller manda (slow) and a larger śīghra (fast). ^[18] The order of the planets in terms of distance from earth is taken as: the Moon, Mercury, Venus, the Sun, Mars, Jupiter, Saturn, and the asterisms."^[3] The positions and periods of the planets was calculated relative to uniformly moving points. In the case of Mercury and Venus, they move around the Earth at the same speed as the mean Sun. In the case of Mars, Jupiter, and Saturn, they move around the Earth at specific speeds, representing each planet's motion through the zodiac. Most historians of astronomy consider that this two-epicycle model reflects elements of pre-Ptolemaic Greek astronomy.^[19] Another element in Aryabhata's model, the śīghrocca, the basic planetary period in relation to the Sun, is seen by some historians as a sign of an underlying heliocentric model.^[20] [edit] Eclipses Aryabhata states that the Moon and planets shine by reflected sunlight. Instead of the prevailing cosmogony in which eclipses were caused by pseudo-planetary nodes Rahu and Ketu, he explains eclipses in terms of shadows cast by and falling on Earth. Thus, the lunar eclipse occurs when the moon enters into the Earth's shadow (verse gola.37). He discusses at length the size and extent of the Earth's shadow (verses gola.38–48) and then provides the computation and the size of the eclipsed part during an eclipse. Later Indian astronomers improved on the calculations, but Aryabhata's methods provided the core. His computational paradigm was so accurate that 18th century scientist Guillaume Le Gentil, during a visit to Pondicherry, India, found the Indian computations of the duration of the lunar eclipse of 30 August 1765 to be short by 41 seconds, whereas his charts (by Tobias Mayer, 1752) were long by 68 seconds.^[3] Aryabhata's computation of the Earth's circumference as 39,968.0582 kilometres was only 0.2% smaller than the actual value of 40,075.0167 kilometres. This approximation was a significant improvement over the computation by Greek mathematician Eratosthenes (c. 200 BCE), whose exact computation is not known in modern units but his estimate had an error of around 5–10%.^[21]^[22] [edit] Sidereal periods Considered in modern English units of time, Aryabhata calculated the sidereal rotation (the rotation of the earth referencing the fixed stars) as 23 hours, 56 minutes, and 4.1 seconds; the modern value is 23:56:4.091. Similarly, his value for the length of the sidereal year at 365 days, 6 hours, 12 minutes, and 30 seconds is an error of 3 minutes and 20 seconds over the length of a year. The notion of sidereal time was known in most other astronomical systems of the time, but this computation was likely the most accurate of the period.^[citation needed] [edit] Heliocentrism As mentioned, Aryabhata claimed that the Earth turns on its own axis, and some elements of his planetary epicyclic models rotate at the same speed as the motion of the Earth around the Sun. The planetary orbits were also given with respect to the Sun and he also states: "Whoever knows this Dasagitika Sutra which describes the movements of the Earth and the planets in the sphere of the asterisms passes through the paths of the planets and asterisms and goes to the higher Brahman." Thus, it has been suggested that Aryabhata's calculations were based on an underlying heliocentric model, in which the planets orbit the Sun.^[23]^[24]^[25] A detailed rebuttal to this heliocentric interpretation is in a review that describes B. L. van der Waerden's book as "show[ing] a complete misunderstanding of Indian planetary theory [that] is flatly contradicted by every word of Aryabhata's description."^[26] However, some concede that Aryabhata's system stems from an earlier heliocentric model, of which he was unaware.^[27] Though Aristarchus of Samos (3rd century BCE) is credited with holding an heliocentric theory, the version of Greek astronomy known in ancient India as the Paulisa Siddhanta makes no reference to such a theory. [edit] Legacy Aryabhata's work was of great influence in the Indian astronomical tradition and influenced several neighbouring cultures through translations. The Arabic translation during the Islamic Golden Age (ca. 820 CE), was particularly influential. Some of his results are cited by Al-Khwarizmi and in the 10th century Al-Biruni stated that Aryabhata's followers believed that the Earth rotated on its His definitions of sine (jya), cosine (kojya), versine (utkrama-jya), and inverse sine (otkram jya) influenced the birth of trigonometry. He was also the first to specify sine and versine (1 − cos x) tables, in 3.75° intervals from 0° to 90°, to an accuracy of 4 decimal places. In fact, modern names "sine" and "cosine" are mistranscriptions of the words jya and kojya as introduced by Aryabhata. As mentioned, they were translated as jiba and kojiba in Arabic and then misunderstood by Gerard of Cremona while translating an Arabic geometry text to Latin. He assumed that jiba was the Arabic word jaib, which means "fold in a garment", L. sinus (c.1150).^[28] Aryabhata's astronomical calculation methods were also very influential. Along with the trigonometric tables, they came to be widely used in the Islamic world and used to compute many Arabic astronomical tables (zijes). In particular, the astronomical tables in the work of the Arabic Spain scientist Al-Zarqali (11th century) were translated into Latin as the Tables of Toledo (12th c.) and remained the most accurate ephemeris used in Europe for centuries. Calendric calculations devised by Aryabhata and his followers have been in continuous use in India for the practical purposes of fixing the Panchangam (the Hindu calendar). In the Islamic world, they formed the basis of the Jalali calendar introduced in 1073 CE by a group of astronomers including Omar Khayyam,^[29] versions of which (modified in 1925) are the national calendars in use in Iran and Afghanistan today. The dates of the Jalali calendar are based on actual solar transit, as in Aryabhata and earlier Siddhanta calendars. This type of calendar requires an ephemeris for calculating dates. Although dates were difficult to compute, seasonal errors were less in the Jalali calendar than in the Gregorian calendar. India's first satellite Aryabhata and the lunar crater Aryabhata are named in his honour. An Institute for conducting research in astronomy, astrophysics and atmospheric sciences is the Aryabhatta Research Institute of Observational Sciences (ARIES) near Nainital, India. The inter-school Aryabhata Maths Competition is also named after him,^[30] as is Bacillus aryabhata, a species of bacteria discovered by ISRO scientists in 2009.^[31] [edit] See also [edit] References 1. ^ ^a ^b ^c ^d ^e ^f ^g K. V. Sarma (2001). "Āryabhaṭa: His name, time and provenance". Indian Journal of History of Science 36 (4): 105–115. http://www.new.dli.ernet.in/rawdataupload/upload/insa/ 2. ^ Bhau Daji (1865). "Brief Notes on the Age and Authenticity of the Works of Aryabhata, Varahamihira, Brahmagupta, Bhattotpala, and Bhaskaracharya". Journal of the Royal Asiatic Society of Greatb Britain and Ireland. p. 392. http://books.google.com/?id=fAsFAAAAMAAJ&pg=PA392&dq=aryabhata. 3. ^ ^a ^b ^c ^d ^e ^f ^g Ansari, S.M.R. (March 1977). "Aryabhata I, His Life and His Contributions". Bulletin of the Astronomical Society of India 5 (1): 10–18. http://hdl.handle.net/2248/502. Retrieved 2007-07-21. 4. ^ Cooke (1997). "The Mathematics of the Hindus". pp. 204. "Aryabhata himself (one of at least two mathematicians bearing that name) lived in the late fifth and the early sixth centuries at Kusumapura (Pataliutra, a village near the city of Patna) and wrote a book called Aryabhatiya." 5. ^ "Get ready for solar eclipe". National Council of Science Museums, Ministry of Culture, Government of India. http://ncsm.gov.in/docs/Get%20ready%20for%20Solar%20eclipse.pdf. Retrieved 9 December 2009. 6. ^ For instance, one hypothesis was that aśmaka (Sanskrit for "stone") may be the region in Kerala that is now known as Koṭuṅṅallūr, based on the belief that it was earlier known as Koṭum-Kal-l-ūr ("city of hard stones"); however, old records show that the city was actually Koṭum-kol-ūr ("city of strict governance"). Similarly, the fact that several commentaries on the Aryabhatiya have come from Kerala were used to suggest that it was Aryabhata's main place of life and activity; however, many commentaries have come from outside Kerala, and the Aryasiddhanta was completely unknown in Kerala. See Sarma for details. 7. ^ See: *Clark 1930 *S. Balachandra Rao (2000). Indian Astronomy: An Introduction. Orient Blackswan. p. 82. ISBN 9788173712050. http://books.google.com/?id=N3DE3GAyqcEC&pg=PA82&dq=lanka. : "In Indian astronomy, the prime meridian is the great circle of the Earth passing through the north and south poles, Ujjayinī and Laṅkā, where Laṅkā was assumed to be on the Earth's equator." *L. Satpathy (2003). Ancient Indian Astronomy. Alpha Science Int'l Ltd.. p. 200. ISBN 9788173194320. http://books.google.com/?id=nh6jgEEqqkkC&pg=PA200&dq=lanka. : "Seven cardinal points are then defined on the equator, one of them called Laṅkā, at the intersection of the equator with the meridional line through Ujjaini. This Laṅkā is, of course, a fanciful name and has nothing to do with the island of Sri Laṅkā." *Ernst Wilhelm. Classical Muhurta. Kala Occult Publishers. p. 44. ISBN 9780970963628. http://books.google.com/?id=3zMPFJy6YygC&pg=PA44&dq=lanka. : "The point on the equator that is below the city of Ujjain is known, according to the Siddhantas, as Lanka. (This is not the Lanka that is now known as Sri Lanka; Aryabhata is very clear in stating that Lanka is 23 degrees south of Ujjain.)" *R.M. Pujari; Pradeep Kolhe; N. R. Kumar (2006). Pride of India: A Glimpse into India's Scientific Heritage. SAMSKRITA BHARATI. p. 63. ISBN 9788187276272. http://books.google.com/?id=sEX11ZyjLpYC *Ebenezer Burgess; Phanindralal Gangooly (1989). The Surya Siddhanta: A Textbook of Hindu Astronomy. Motilal Banarsidass Publ.. p. 46. ISBN 9788120806122. http://books.google.com/?id=W0Uo_-_iizwC 8. ^ George. Ifrah (1998). A Universal History of Numbers: From Prehistory to the Invention of the Computer. John Wiley & Sons. 9. ^ Dutta, Bibhutibhushan; Singh, Avadhesh Narayan (1962). History of Hindu Mathematics. Asia Publishing House, Bombay. ISBN 81-86050-86-8 (reprint) 10. ^ Jacobs, Harold R. (2003). Geometry: Seeing, Doing, Understanding (Third Edition). New York: W.H. Freeman and Company. p. 70. 11. ^ S. Balachandra Rao (1994/1998). Indian Mathematics and Astronomy: Some Landmarks. Jnana Deep Publications. ISBN 81-7371-205-0. 12. ^ Roger Cooke (1997.). "The Mathematics of the Hindus". History of Mathematics: A Brief Course. Wiley-Interscience. ISBN 0471180823. "Aryabhata gave the correct rule for the area of a triangle and an incorrect rule for the volume of a pyramid. (He claimed that the volume was half the height times the area of the base.)" 13. ^ Howard Eves (1990). An Introduction to the History of Mathematics (6 ed.). Saunders College Publishing House, New York. p. 237. 14. ^ Amartya K Dutta, "Diophantine equations: The Kuttaka", Resonance, October 2002. Also see earlier overview: Mathematics in Ancient India. 15. ^ Boyer, Carl B. (1991). "The Mathematics of the Hindus". A History of Mathematics (Second ed.). John Wiley & Sons, Inc.. p. 207. ISBN 0471543977. ""He gave more elegant rules for the sum of the squares and cubes of an initial segment of the positive integers. The sixth part of the product of three quantities consisting of the number of terms, the number of terms plus one, and twice the number of terms plus one is the sum of the squares. The square of the sum of the series is the sum of the cubes."" 16. ^ J. J. O'Connor and E. F. Robertson, Aryabhata the Elder, MacTutor History of Mathematics archive: "He believes that the Moon and planets shine by reflected sunlight, incredibly he believes that the orbits of the planets are ellipses." 17. ^ Hayashi (2008), Aryabhata I 18. ^ Pingree, David (1996). "Astronomy in India". in Walker, Christopher. Astronomy before the Telescope. London: British Museum Press. pp. 123–142. ISBN 0-7141-1746-3 pp. 127–9. 19. ^ Otto Neugebauer, "The Transmission of Planetary Theories in Ancient and Medieval Astronomy," Scripta Mathematica, 22 (1956), pp. 165–192; reprinted in Otto Neugebauer, Astronomy and History: Selected Essays, New York: Springer-Verlag, 1983, pp. 129–156. ISBN 0-387-90844-7 20. ^ Hugh Thurston, Early Astronomy, New York: Springer-Verlag, 1996, pp. 178–189. ISBN 0-387-94822-8 21. ^ "JSC NES School Measures Up", NASA, 11th April, 2006, retrieved 24th November, 2009. 22. ^ "The Round Earth", NASA, 12th December, 2004, retrieved 24th January, 2008. 23. ^ The concept of Indian heliocentrism has been advocated by B. L. van der Waerden, Das heliozentrische System in der griechischen, persischen und indischen Astronomie. Naturforschenden Gesellschaft in Zürich. Zürich:Kommissionsverlag Leeman AG, 1970. 24. ^ B.L. van der Waerden, "The Heliocentric System in Greek, Persian and Hindu Astronomy", in David A. King and George Saliba, ed., From Deferent to Equant: A Volume of Studies in the History of Science in the Ancient and Medieval Near East in Honor of E. S. Kennedy, Annals of the New York Academy of Science, 500 (1987), pp. 529–534. 25. ^ Hugh Thurston (1996). Early Astronomy. Springer. p. 188. ISBN 0387948228 26. ^ Noel Swerdlow, "Review: A Lost Monument of Indian Astronomy," Isis, 64 (1973): 239–243. 27. ^ Dennis Duke, "The Equant in India: The Mathematical Basis of Ancient Indian Planetary Models." Archive for History of Exact Sciences 59 (2005): 563–576, n. 4 [1]. 28. ^ Douglas Harper (2001). "Online Etymology Dictionary". http://www.etymonline.com/. Retrieved 2007-07-14. 29. ^ "Omar Khayyam". The Columbia Encyclopedia (6 ed.). 2001-05. http://www.bartleby.com/65/om/OmarKhay.html. Retrieved 2007-06-10. 30. ^ "Maths can be fun". The Hindu. 3 February 2006. http://www.hindu.com/yw/2006/02/03/stories/2006020304520600.htm. Retrieved 2007-07-06. 31. ^ Discovery of New Microorganisms in the Stratosphere. Mar. 16, 2009. ISRO. [edit] Other references [edit] External links Indian mathematics Mathematicians Classical
{"url":"http://file1.hpage.com/003195/31/html/aryabhata_.htm","timestamp":"2014-04-18T18:12:17Z","content_type":null,"content_length":"135472","record_id":"<urn:uuid:1cc406c4-0172-4e73-a175-9b5db22ec311>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
JREF Forum - View Single Post - Common Sense Originally Posted by 28th Kingdom Okay...let's break down what you said... firstly, near freefall isn't a scientific term... agreed. And, then you claim that 15 seconds compared to 9.2 seconds is NOT near freefall speeds. So, who endowed you with this judgment? So, scientifically speaking... what time would be near 9.2 seconds. 10 seconds? 11 seconds? 12 seconds? What definition of "near" are you working with? You understand if the towers had fallen in 9.2 seconds... it means a synthetic catalyst had to of been used, right? So you add 4-6 seconds to this... factor in probability and common logic... and what do you get? Since you still refuse to address my post, like you promised, I surmise I've made your vaunted "Ignore" list. But in case not, let me explain your fallacy here. 9.2 seconds is totally different from what you get if you "add 4-6 seconds." A simple calculation will confirm this. Suppose we compute the time it takes the roof to hit the ground. Let the acceleration be , in which case the time it takes to hit can be found using [2] a t^2 , where is the distance the roof has to fall, equal to about 417 meters, and is the time of the fall. We know , measure , and work backwards to get the effective acceleration If the structure absorbs no energy at all, i.e. we get freefall, we should measure . In this case, we should see = 9.2 seconds. If the structure absorbs some energy, then will be less than , or better expressed as a fraction of Once we have this fraction, we can then estimate how much was needed to destroy the structure as it fell. Recall that gravitational energy GPE = m g h , where is the mass of the structure. But this is the same . Since the building doesn't collapse with acceleration , the percentage of the GPE that at any time is seen as kinetic energy is used to destroy the structure, and the remainder must have been needed to destroy the structure. In other words, the fraction 1 - ( ) is equal to the fraction of energy that went into destroying the structure. From calculations elsewhere, the total GPE of the structure was equal to roughly 160 tons of TNT equivalent. I've made a table for you that describes, for certain values of collapse time, how much energy went into destroying the structure as it fell Collapse time ............... Structural fraction ............. Structural energy 9.2 seconds ................. 0 ................................... 0 10 seconds .................. 0.15 ............................... 24 tons TNT 11 seconds .................. 0.30 ............................... 48 tons TNT 12 seconds .................. 0.41 ............................... 65.6 tons TNT 13 seconds .................. 0.50 ............................... 80 tons TNT 14 seconds .................. 0.57 ............................... 91.2 tons TNT 15 seconds .................. 0.62 ............................... 99.1 tons TNT What does this mean? It means, that "adding 4-6 seconds," far from being materially identical to free-fall, means that as much as 62% of the energy was dedicated to breaking the building. Even a single second of resistance by the structure means it absorbed more energy than an entire truckload of pure high explosive -- far, far more than could possibly have been planted, under any scenario. Once again, your "common sense" as you call it, is wrong. And feel free to show where I'm "throwing my hands in the air" and disobeying the laws of physics.
{"url":"http://forums.randi.org/showpost.php?p=2236462&postcount=631","timestamp":"2014-04-17T06:53:41Z","content_type":null,"content_length":"14706","record_id":"<urn:uuid:54c6ca3e-403d-4050-ae3f-0edba5d09a27>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
pdf search for "who number. this belong to" (Page 1 of about 1,480,000 results) CLASSIFICATION OF NUMBERS - Calculus Support Page.pdf In general an irrationl number, is a number that can not be expressed as a ratio of any other two numbers. For example: ... point that belong to all of them ... Read Down Square Roots; Number and Number Sense; 7 - VDOE.pdf Reporting Category Number and Number Sense Topic Determining square roots ... o Which number does not belong: 81, 99, 100, and 121? Why? Journal/Writing Prompts ...Read Down Which One Doesn’t Belong?.pdf Which One Doesn’t Belong? Unit 3: Fun With Shapes Grade Level Grade 1 Overview In this task, students will become familiar with the different ways to classify shapes by ... Read Down
{"url":"http://www.freedocumentsearch.com/pdf/who-number.-this-belong-to.html","timestamp":"2014-04-17T19:00:03Z","content_type":null,"content_length":"14679","record_id":"<urn:uuid:797b7ce4-23f4-473f-831d-fd73fb87b799>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
Adding Equations to System F Nick and I have a new draft out, on adding types for term-level equations to System F. Contrary to the experience of dependent types, this is not a very hairy extension -- in fact, I would not even hesitate to call it simple. However, it does open the door to all sorts of exciting things, such as many peoples' long-standing goal of putting semantic properties of modules into the module interfaces. This is good for documentation, and also (I would hope) good for compilers --- imagine Haskell, if the Monad typeclass definition also told you (and ghc!) all the equational rewriting that it was supposed to do. 7 comments: 1. This comment has been removed by the author. 2. This comment has been removed by the author. 3. Will this hypothetical version of Haskell allow us to write machine-checkable proofs of e.g. monad laws for custom Monad instances in the language itself? 4. In fact, quite the opposite -- the whole point of our design is to *not* compel the use of the language itself for proofs. If you try to combine the two, you get systems of dependent types. Unfortunately, getting dependent types to play nicely with data abstraction is kind of an open reseach problem. So our idea is to give a language that doesn't support proving internal to the language, but does (a) support data abstraction well, and (b) gives you a clean way of shipping proof obligations to proof assistants like Coq. (After all, Coq already exists and there's no reason not to use it!) 5. There's a typo in the first sentence of the first paragraph of your Introduction. 6. Thanks for clarification. 7. Karl: that's a hilariously awful typo. :(
{"url":"http://semantic-domain.blogspot.com/2011/10/adding-equations-to-system-f.html","timestamp":"2014-04-17T06:40:44Z","content_type":null,"content_length":"102774","record_id":"<urn:uuid:e0ab805d-e428-4cb5-9bb8-169905d09957>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
An Introduction to LEGO Next: LEGO projects Up: Contents Previous: Contents An Introduction to LEGO LEGO is an interactive proof development system (proof assistant) designed and implemented by Randy Pollack in Edinburgh using New Jersey ML. It implements various related type systems - the Edinburgh Logical Framework (LF), the Calculus of Constructions (CC), the Generalized Calculus of Constructions (GCC) and the Unified Theory of Dependent Types (UTT). LEGO is a powerful tool for interactive proof development in the natural deduction style. It supports refinement proof as a basic operation. The system design emphasizes removing the more tedious aspects of interactive proofs. For example, features of the system like argument synthesis and universe polymorphism make proof checking more practical by bringing the level of formalization closer to that of informal mathematics. The higher-order power of its underlying type theories, and the support of specifying new inductive types, provide an expressive language for formalization of mathematical problems and program specification and development. Particularly, Zhaohui Luo's type theory UTT includes: • type universes, which make it possible to formalize abstract mathematics; • strong sum types, which can be used to naturally express abstract structures, mathematical theories and program specifications, and • a schema for inductive data types, which captures the common inductive structures in programming languages and mathematics. LEGO may also be used to formalize different logical systems and prove theorems based on the defined logics, following the philosophy of the Edinburgh Logical Framework. For further information, go to the Contents page.
{"url":"http://www.dcs.ed.ac.uk/home/lego/html/intro.html","timestamp":"2014-04-17T01:48:52Z","content_type":null,"content_length":"3282","record_id":"<urn:uuid:1cdb58ee-2962-49ec-a40f-68b28474461b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
"The Apple iPod iTunes Anti-Trust Litigation" "The Apple iPod iTunes Anti-Trust Litigation" Filing 665 Declaration in Support of 663 Response ( Non Motion ) to Professor Noll's July 18 Declaration filed byApple Inc.. (Related document(s) 663 ) (Kiernan, David) (Filed on 7/22/2011) 1 2 3 4 5 6 7 8 Robert A. Mittelstaedt #60359 ramittelstaedt@jonesday.com Craig E. Stewart #129530 cestewart@jonesday.com David C. Kiernan #215335 dkiernan@jonesday.com 555 California Street, 26th Floor San Francisco, CA 94104 Telephone: (415) 626-3939 Facsimile: (415) 875-5700 Attorneys for Defendant APPLE INC. 9 UNITED STATES DISTRICT COURT 10 NORTHERN DISTRICT OF CALIFORNIA 11 SAN JOSE DIVISION 12 13 14 THE APPLE iPOD iTUNES ANTITRUST LITIGATION Lead Case No. C 05-00037 JW (HRL) [Class Action] 15 __________________________________ SUPPLEMENTAL REPORT OF DR. MICHELLE M. BURTIS 16 This Document Relates To: 17 ALL ACTIONS 18 19 20 21 22 23 24 25 26 27 28 Supp. Burtis Expert C 05-00037 JW (HRL) 1 1. In his most recent report filed July 18, 2011 (“Supplemental Noll Declaration”), 2 Professor Noll updated his “preliminary regression” analysis set forth in his March 28, 2011 3 declaration (“Noll Reply Declaration”) in an attempt to account for iTunes 7.0. Counsel for 4 Apple has asked me to review the Supplemental Noll Declaration and address whether this 5 preliminary analysis demonstrates that impact and damages can be shown on a class-wide basis. 6 2. Professor Noll’s new declaration does not show that his proposed methods will 7 work. Rather than demonstrating that iTunes 7.0 resulted in any class-wide damage, his latest 8 analysis actually shows that iTunes 7.0 reduced iPod prices—the opposite of plaintiffs’ theory of 9 class-wide harm. Moreover, although Professor Noll has admitted that his previous regression 10 analysis was not reliable and could not be used to show any causal effect on iPod prices, he has 11 done nothing in his current model to correct the deficiencies he previously identified. 12 Professor Noll’s Model, if Anything, Shows that iTunes 7.0 Had No Impact 13 3. Professor Noll asserts that his new model shows that iTunes “caused the wholesale 14 price of iPods to be elevated by $4.85.” 1 He relies for this assertion on the coefficient his model 15 estimates for his new iTunes 7.0 variable. But Professor Noll misinterprets this coefficient, which 16 he obtains in a manner that is inconsistent with his treatment of other similar variables. In 17 particular, Professor Noll has specified the iTunes 4.7 variable to be “on” over the period 18 beginning with the iTunes 4.7 update in October 2004 and ending when the 7.0 update occurred 19 on September 12, 2006. 2 The specification of this variable is different from the way in which 20 Professor Noll previously specified variables associated with the iTunes 4.7 update and it is 21 different from the way that Professor Noll has specified all of the other “dummy,” or indicator, 22 variables in his regressions. All of the other “dummy” variables are specified to be “on” 23 24 25 26 27 28 1 Supplemental Noll Declaration at p. 4. 2 Professor Noll claims that he is “separating” the period affected by update 4.7 from the period affected by update 7.0.” Supplemental Noll Declaration at p. 2 (“Hence, the econometric model in my period report would need to be amended to separate the period affected by update 4.7 from the period affected by update 7.0.”) While the model may separate the periods, Professor Noll does not correctly describe the “separate” effect from 7.0. And if he had modeled a “separate” effect like he modeled the effects of other variables in his model, he would have found the effect to be negative, not positive. -1- Supp. Burtis Expert C 05-00037 JW (HRL) 1 beginning with the particular event and staying “on” throughout the remainder of the estimation 2 period. This is true of the “Post-iTMS” variable, the “Harmony launched” variable, the “iTunes 3 7.0” variable, the “iTMS competitiors fully DRM-free” variable, and the “iTMS fully DRM free” 4 variable. 3 Had Professor Noll specified the “iTunes 4.7” variable as he had in his previous 5 regression and consistent with the other dummy variables in his previous and current regressions, 6 the coefficient on the 7.0 update variable would have been negative, and equal to -1.69. If 7 Professor Noll’s model were otherwise valid, this negative coefficient means that iTunes 7.0 8 caused iPod prices to decrease rather than increase as plaintiffs claim. 9 4. That Professor Noll’s model actually shows a price decrease from iTunes 7.0 can 10 be seen by comparing the relevant coefficients. According to Professor Noll, the iTunes 4.7 11 update “caused” the price of iPods to be $6.54 higher over the period beginning with the iTunes 12 4.7 update and ending with the iTunes 7.0 update. The results then show that the impact of the 13 iTunes 4.7 update fell from $6.54 to $4.85 over the period beginning with the iTunes 7.0 update 14 throughout the rest of the class period. The difference between 4.85 and 6.54 is -1.69. Again, his 15 regression shows that plaintiffs’ theory is wrong—iTunes 7.0 caused a decline in price, not an 16 increase. This is what his regression would show if he had specified his iTunes 4.7 consistently 17 with his other dummy variables, and consistently with his treatment of iTunes 4.7 in his prior 18 report. 19 5. Thus, when Professor Noll asserts that iTunes 7.0 “caused the wholesale price of 20 iPods to be elevated by $4.85,” what his analysis actually shows is only that iTunes 7.0 “caused” 21 the amount by which the price was allegedly elevated from iTunes 4.7 to decrease from $6.54 to 22 23 24 25 26 27 28 3 Professor Noll’s specification of the iTunes 4.7 variable is clearly a departure from his treatment of other variables. For example, consider the variables included in his model that attempt to capture the impact of DRM-free music on iPod prices—i.e., “iTMS competitors fully DRM-free” and “iTMS fully DRM-free.” Similar to his interpretation of the 4.7 and 7.0 variables, these variables could be interpreted to measure the impact of some change in the marketplace that changed over time. The first variable would measure the impact of certain suppliers offering DRM-free music and the second would measure the impact from Apple offering DRM-free music. Professor Noll models each of these variables to be equal to one (or “on”) at the beginning dates and then throughout the sample period. -2- Supp. Burtis Expert C 05-00037 JW (HRL) 1 $4.85. Similarly, when Professor Noll claims that his specification allows him to analyze whether 2 the iTunes 7.0 update “perpetuated” the elevation in prices caused by the iTunes 4.7 update, 4 all 3 he is actually measuring is whether some of that increase in price (from the no longer implicated 4 4.7 update) continued to exist after the 7.0 update. In other words, Professor Noll is not testing, 5 or claiming to test, whether 7.0 independently “caused” iPod prices to be higher. He is simply 6 measuring (if his methods were otherwise valid) how much of the alleged impact of 4.7 remained 7 after 7.0 was released. That the alleged impact from iTunes 4.7 supposedly continued after 8 iTunes 7.0, however, does not establish that iTunes 7.0 had any anticompetitive impact. 9 Professor Noll’s Statistical Measures Do Not Show That His Model is Valid 10 6. Professor Noll’s claim that his proposed regression results are “highly significant 11 and very precisely estimated” 5 is mistaken. The claim is based on Professor Noll’s calculation of 12 the “standard errors” of his estimated coefficients. Standard errors reflect that, because each 13 coefficient is estimated, there is an “error” around the estimation. Thus, Professor Noll’s finding 14 that the coefficient on his iTunes 7.0 variable is 4.85 is, in actuality, a finding not of the single 15 number 4.85, but of a range with 4.85 in the middle. The size of the range is determined by the 16 standard error. The smaller the standard error, the tighter the range, and in Professor Noll’s 17 words, the more “precise” the estimate. 18 7. The standard errors that Professor Noll reports are artificially small because he is 19 using observations in his data set that are not independent from one another. Standard errors are 20 determined, in part, by the number of observations in a sample (more data, in general, means 21 more precise estimates). But the observations must be independent. If they are not, the 22 calculated standard errors will be artificially small unless the model is appropriately corrected. 23 Consider, for example, a model that uses a given set of data, resulting in a set of coefficients with 24 25 26 27 28 4 Supplemental Noll Declaration at p. 2. (“Regardless of whether the 4.7 update was anticompetitive, my prior analysis found that the 4.7 update elevated iPod prices. If the 7.0 update caused increased lock-in to iPods, the effect would have been to perpetuate at least some of the elevation in prices arising from update 4.7.”) 5 Noll Supplemental Declaration at p. 4. -3- Supp. Burtis Expert C 05-00037 JW (HRL) 1 a calculated standard error. If the size of that data set is then doubled by simply duplicating the 2 original data, the standard error of the estimated coefficients using the doubled data will fall. But 3 the explanatory value of the model will not have increased because the observations in this 4 duplicated data are not independent of the previous data. 5 8. Professor Noll’s results suffer from this problem. As I described in my earlier 6 report, Professor Noll’s data has very little variation. 6 For example, on a given day, Professor 7 Noll may have many price observations for a given product, but the prices are all the same and 8 they are not independent from one another. Put differently, the price one reseller pays on a given 9 day (or in a given quarter) is not independent of the price another reseller pays. Without 10 correction, this has the effect of reducing the standard errors and generating coefficient estimates 11 that Professor Noll claims are precise. But the precision has little to do with the underlying 12 variation and information in the sample, but instead is due to the number of repetitive data 13 observations. This problem, well known in empirical economics, is called clustering. 7 When 14 observations are clustered, it means that they are not independent from one another, but rather are 15 correlated with each other within groups (or clusters). In his deposition, Professor Noll 16 recognized this problem but claimed that he did not investigate it because he “ran out of time.” 8 17 Exhibit B shows the standard errors for Professor Noll’s coefficients when the data is corrected to 18 account for clustering. As that exhibit shows, once corrected, the standard errors associated with 19 the iTunes 4.7 and iTunes 7.0 coefficients are not only not “highly significant,” they are not 20 statistically significant at either the 1% or the 10% level. In fact, the coefficient on the iTunes 7.0 21 22 6 23 7 24 25 26 27 28 2011 Burtis Reply Report at ¶ 15. See for example, Larry B. Hedges and Christopher H. Rhoads, “Correcting an analysis of variance for clustering,” British Journal of Mathematical and Statistical Psychology, (2011), 64, pp. 20-37 at p. 20. (“A great deal of educational and social data arises from cluster sampling designs where clusters involve schools, classrooms, or communities. A mistake that is sometimes encountered in the analysis of such data is to ignore the effect of clustering and analyze the data as if it were based on a simple random sample. This typically leads to an overstatement of the precision of results and too liberal conclusions about precision and statistical significance of mean differences.”) 8 Noll Deposition at pp. 112-113. -4- Supp. Burtis Expert C 05-00037 JW (HRL) 1 variable would be significant at the 70% level. This means the coefficient has very little, if any, 2 significance and for all practical purposes can be considered to be zero. 3 9. Professor Noll also claims that the “fit” of the regression is “very high, with an 4 adjusted R2 of 0.98.” 9 Professor Noll asserts that this statistic shows that “all but two percent of 5 the variation in prices of iPod models is explained by the regression.” It is well known, however, 6 that a high R2 may or may not be associated with capturing a true underlying relationship. 7 Dummy variables in a model (such as used by Professor Noll) may generate a high R2 even 8 though they have little actual explanatory power. See A Guide to Econometrics, Peter Kennedy, 9 Second Edition at p. 185. 10 10. That Professor Noll’s purported statistical measures do not show that his model is 11 valid is confirmed by the fact that Professor Noll made the same assertions about his previous 12 model—i.e., that the previous model had an adjusted R2 of 0.98 and very low standard errors. 10 13 Despite that assertion, however, Professor Noll admitted at his deposition (as discussed in the 14 following section) that his previous model was not reliable and could not support any conclusion 15 that iTunes 4.7 caused any effect on prices. The same is true of his current model. 16 Professor Noll’s Current Model Does Not Address the Deficiencies He Admitted Existed in 17 His Previous Model 18 11. At his deposition, Professor Noll testified that his earlier preliminary regression 19 analysis with respect to iTunes 4.7 was unreliable, incomplete, had omitted variables, may be 20 biased, may be affected by spurious correlation, did not take Apple’s pricing strategy into 21 account, and should not be used to draw any inferences about issues fundamental to the case, such 22 as the price effect of the launch of iTS, the entry of Harmony, or the disabling of Harmony. 11 23 Professor Noll thus acknowledged that he could not make any “causal inferences” from the 24 25 26 9 Supplemental Noll Declaration at p. 4. 27 10 Noll Reply Declaration at pp. 38-39 11 2011 Burtis Reply Report at ¶ 7. 28 -5- Supp. Burtis Expert C 05-00037 JW (HRL) 1 regression. 12 In other words, the regression could not, and thus did not, show that iTunes 4.7 had 2 any impact on iPod prices. 3 12. Professor Noll’s new model does not address these issues. He has simply taken 4 the same model he used previously and made only a few adjustments to purportedly measure the 5 effect of the iTunes 7.0 update. He does not remedy the problems he previously identified. With 6 the exception of adding a new variable for the U2 Special Edition and a variable for iTunes 7.0, 7 he has not attempted to identify the omitted variables and try to include them in his model. He 8 has not corrected for bias or spurious correlation. He has not taken Apple’s pricing strategy into 9 account. He states that he has corrected some (but not all) data problems. But the omitted 10 variable, bias, spurious correlation and other problems he identified with his model are 11 independent of the data issues to which he refers. They are problems with the model’s 12 specification. And this has not meaningfully changed. 13 Professor Noll’s Model Remains Flawed for Additional Reasons 14 13. Professor Noll’s regression returns a single estimate that is an average across 15 proposed class members that buy different iPod models and who purchase iPods at different 16 times. 13 The measure of impact obtained from this model is an average amount across those 17 products that are sold in the periods both before and after the 7.0 update. He finds a single 18 estimate of price elevation (which he incorrectly interprets as $4.85), which would be applied, 19 apparently, to iPod shuffles that are priced at retail as low as $49 as well as to iPod touch models 20 that are priced at retail as high as $499. Similarly, the average will apparently be applied to all 21 iPods purchased after September 2006, whether they were purchased one week or two years after 22 the update. Impact on different products purchased by different proposed class members at 23 different times cannot be inferred based on an average. 24 14. Further, Professor Noll’s model does not, and indeed cannot, be used to measure 25 impact for iPod models that were introduced for the first time after the introduction of 26 12 27 13 28 Noll Deposition at 90, (“I drew no causal inferences from that regression.”) 2011 Burtis Reply Report at ¶ 11. Professor Noll admitted the result was an average. See Noll Deposition at p. 142. -6- Supp. Burtis Expert C 05-00037 JW (HRL) Exhibit A Professor Noll's Regression with Clustered Standard Errors Dependent variable Adjusted R2 Number of Observations iPod transaction price 0.9762 2,098,663 Variable Coefficient Estimate Standard Error 301.65 -47.41 -62.81 -35.58 -49.23 -206.39 -139.68 -119.62 -94.52 -194.62 -131.18 -94.41 -123.57 -39.10 -0.14 -48.13 -5.88 201.09 -19.16 32.99 63.49 -183.99 -0.26 4.16 0.34 -0.07 -0.17 12.09 4.00 0.36 3.52 -5.17 -1.81 -0.06 -2.03 -7.10 2.69 -1.64 -3.99 5.70 -0.50 70.58 9.19 14.44 10.25 14.22 39.01 36.81 37.10 34.91 40.59 119.69 31.77 43.53 26.90 36.27 38.45 25.74 69.06 37.50 98.69 39.96 52.09 1.04 1.77 0.92 0.95 1.02 5.62 8.49 0.89 3.42 1.89 1.13 2.09 0.98 2.10 2.74 5.74 1.72 1.77 0.45 Intercept Classic Mini Nano Shuffle Capacity 512MB Capacity 1024MB Capacity 2048MB Capacity 4096MB Capacity 5120MB Capacity 6144MB Capacity 8192MB Capacity 10240MB Capacity 15360MB Capacity 16384MB Capacity 20480MB Capacity 30720MB Capacity 32768MB Capacity 40960MB Capacity 61440MB Capacity 81920MB Capacity 122880MB Time trend Time Trend * Capacity 512MB Time Trend * Capacity 1024MB Time Trend * Capacity 2048MB Time Trend * Capacity 4096MB Time Trend * Capacity 5120MB Time Trend * Capacity 6144MB Time Trend * Capacity 8192MB Time Trend * Capacity 10240MB Time Trend * Capacity 15360MB Time Trend * Capacity 16384MB Time Trend * Capacity 20480MB Time Trend * Capacity 30720MB Time Trend * Capacity 32768MB Time Trend * Capacity 40960MB Time Trend * Capacity 61440MB Time Trend * Capacity 81920MB Time Trend * Capacity 122880MB Medium volume purchaser *** *** *** *** *** *** *** *** *** *** *** *** *** *** ** ** *** ** *** ** *** Page 1 Exhibit A High volume purchaser 1 to 5 units purchased 1st quarter 2nd quarter 3rd quarter Photo capability Video and photo capability Post-repricing transaction Post-end of life transaction Post-iTMS Harmony launched iTunes 4.7 iTunes 7.0 iTMS competitors fully DRM-free iTMS fully DRM-free Size (in3) Standard cost per unit -0.75 2.94 5.01 13.73 5.10 9.73 -6.34 -11.35 -19.95 -50.49 -31.07 5.50 3.34 -9.78 -13.79 -3.16 0.80 *** ** *** ** *** ** *** *** ** *** *** 0.62 0.61 1.97 2.41 2.32 7.02 5.17 3.77 8.32 14.73 9.96 7.31 8.71 4.33 4.31 4.60 0.08 Source: Noll Reply Declaration Backup Note: Standard errors are clustered around iPod model and the quarter during which the model was sold. *** Denotes statistical significance at the 1% level. ** Denotes statistical significance at the 5% level. * Denotes statistical significance at the 10% level. Page 2 Exhibit B Professor Noll's Regression on iPod Touch Transactions Only Dependent variable Adjusted R2 Number of Observations iPod transaction price 0.9668 353,669 Variable Coefficient Estimate Intercept Classic Mini Nano Shuffle Capacity 512MB Capacity 1024MB Capacity 2048MB Capacity 4096MB Capacity 5120MB Capacity 6144MB Capacity 8192MB Capacity 10240MB Capacity 15360MB Capacity 16384MB Capacity 20480MB Capacity 30720MB Capacity 32768MB Capacity 40960MB Capacity 61440MB Capacity 81920MB Capacity 122880MB Time trend Time Trend * Capacity 512MB Time Trend * Capacity 1024MB Time Trend * Capacity 2048MB Time Trend * Capacity 4096MB Time Trend * Capacity 5120MB Time Trend * Capacity 6144MB Time Trend * Capacity 8192MB Time Trend * Capacity 10240MB Time Trend * Capacity 15360MB Time Trend * Capacity 16384MB Time Trend * Capacity 20480MB Time Trend * Capacity 30720MB Time Trend * Capacity 32768MB Time Trend * Capacity 40960MB Time Trend * Capacity 61440MB Time Trend * Capacity 81920MB Time Trend * Capacity 122880MB Medium volume purchaser 807.62 -245.89 -120.35 146.58 -7.59 3.39 1.02 -6.28 -0.32 Standard Error 0.52 0.24 0.23 0.22 0.01 0.01 0.01 0.01 0.18 Page 1 Exhibit B High volume purchaser 1 to 5 units purchased 1st quarter 2nd quarter 3rd quarter Photo capability Video and photo capability Post-repricing transaction Post-end of life transaction Post-iTMS Harmony launched iTunes 4.7 iTunes 7.0 iTMS competitors fully DRM-free iTMS fully DRM-free Size (in3) Standard cost per unit -0.29 2.97 9.04 20.97 10.67 -60.39 -5.86 -7.56 -81.88 0.32 0.17 0.52 0.00 0.01 0.01 0.09 0.02 0.01 0.05 0.00 Source: Noll Reply Declaration Backup Note: Coefficients that are not able to be estimated are denoted by a "-". Standard errors have been calculated using Professor Noll's methodology. Page 2
{"url":"http://docs.justia.com/cases/federal/district-courts/california/candce/5:2005cv00037/26768/665/","timestamp":"2014-04-20T03:25:17Z","content_type":null,"content_length":"41866","record_id":"<urn:uuid:de0888a8-8a80-4cf6-b882-e5c80b3c19fe>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
Complex Networks Mon 5-6Session W833 Credits Lecture:2 Practice:0 Experiment:0 / code:76053 Update : 2011/12/23 Access Index : Fall Semester Outline of lecture Basic knowledge for analyzing network data is introduced. Topics include metrics of networks, common properties of real networks, algorithms for processing networks, models of networks, visualization of networks, and tools for analyzing Purpose of lecture The purpose of this lecture is to learn basic knowledge for analyzing and modeling networks, such as 1) fundamentals of networks, 2) network algorithms, 3) network models, and 4) processes on networks. Plan of lecture 1. introduction 2. tools for analyzing networks 3. fundamentals (1) mathematics of networks 4. fundamentals (2) measures and metrics 5. fundamentals (3) the large-scale structure of networks 6. network algorithms (1) representation 7. network algorithms (2) matrix algorithms 8. network algorithms (3) graph partitioning 9. network models (1) random graphs 10. network models (2) network formation 11. network models (3) small-world model 12. processes on networks (1) percolation 13. processes on networks (2) epidemics 14. summary Textbook and reference Networks, An Introduction Networks, Crowds, and Markets Related and/or prerequisite courses Discrete Structures and Algorithms Based on 2 or 3 times of assignments Comments from lecturer Please visit the following site for more information about this lecture.
{"url":"http://www.ocw.titech.ac.jp/index.php?module=General&Nendo=2011&action=T0300&GakubuCD=226&GakkaCD=226716&KougiCD=76053&Gakki=2&lang=EN","timestamp":"2014-04-19T09:24:03Z","content_type":null,"content_length":"14176","record_id":"<urn:uuid:3f410c43-e53c-4191-831b-9d1f512ef147>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
Idempotent elements in matrix ring up vote 3 down vote favorite Is there a relation between the idempotent elements of a ring $R$ and those of $M_{n}(R)$ - the ring of $n \times n$ matrices over $R$? add comment 2 Answers active oldest votes Any idempotent $e$ of $R$ induces an idempotent $E=\mathop{diag}(e,\ldots,e)$ of $M_n(R)$. In fact, if $e_i$ are idempotents in $R$, then $E=\mathop{diag}(e_1,\ldots,e_n)$ is an idempotent of $M_n(R)$. Conversely, if $R$ is nice enough, an idempotent $E$ in $M_n(R)$ can be diagonalized to $E=U^{-1} \cdot \mathop{diag}(e_1,\ldots,e_n) \cdot U$ for some $U \in M_n(R)$ and up vote 5 down idempotents $e_i$ in $R$. Of course, this relies crucially on $R$ being nice enough. One sufficient "nicety" condition is that $R$ is an AW*-algebra; see for example this paper. add comment Yes. You can see the relationship easily in the following way. Suppose the ring $R=R_1\times R_2$ so that there are two primitive idempotents $e_1$ and $e_2$ with $1=e_1 +e_2$. Next note that $M_n(R)\approxeq Hom_R(R^n,R^n)$. Then $M_n(R)\approxeq Hom_{R_1\times R_2}(\(R_1\times R_2\)^n,\(R_1\times R_2\)^n)\approxeq Hom_{R_1}(R_1^n,R_1^n)\times Hom_{R_2}(R_2^n,R_2^n) $ up vote 1 since $e_1 \centerdot Hom_{R_2}(R_2^n,R_2^n)=0$ and similarly for $e_2$. Thus idempotents are calculated relative to each factor in the ring decomposition. down vote 2 I think you answered a question different from the one asked ;-) – Mariano Suárez-Alvarez♦ Nov 25 '12 at 23:00 Dear Ray, Thank you very much of your answer. If we have idempotent of $R$ then it is easy to generated for $M_{n}(R)$, But i want to construct idempotent for $R$, if we have idempotent of $M_{n}(R)$. so the above proof is not useful for me, – Ali Nov 26 '12 at 4:19 1 Hi.Let $F$ be an infinite field, then there exist infinitely many distinct pair $(I,J)$ of minimal left ideals of $M_2(F)$ such that $M_2(F)=I \oplus J$. so this shows that in this case $F$ has only two trivial idempotent, But $M_2(F)$ has infinitely many nontrivial idempotent. you could find this point at exercise [11.b] page 443 of Hubgerford. So I think you should determine the property of your ring$R$ which entries of matrix ring come from it. – Ali Reza Nov 26 '12 at 5:59 Ok. But in local ring we have no problem. See Lemma 7 in the paper Journal of Algebra 301 (2006) 280–293, when is 2x2 matrix ring over a commutative local ring are strongly clean. – Ali Nov 26 '12 at 9:19 Dear friends. Let me ask my quedtion in another case. If $J$ is an ideal of $M_{n}(R)$ then there is an ideal $I$ of $R$ such that $J=M_{n}(I)$. Now, if $J$ is generated by a subset of idempotent of $M_{n}(R)$ say $S$, does $I$ is generated by a subet of idempotent of $R$ relatesd to $S$. – Ali Nov 26 '12 at 10:37 show 1 more comment Not the answer you're looking for? Browse other questions tagged ra.rings-and-algebras or ask your own question.
{"url":"http://mathoverflow.net/questions/114447/idempotent-elements-in-matrix-ring/114488","timestamp":"2014-04-20T13:43:46Z","content_type":null,"content_length":"59763","record_id":"<urn:uuid:5254bfae-99ee-4aa7-b472-9c0553c20738>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
Indefinite integrals January 9th 2010, 05:38 AM #1 Jan 2009 $\int \sqrt[3] {3x+5}.dx$ So the equation will be $\int (3x+5)^\frac{1}{3}.dx$ $u=3x+5 \rightarrow \int u^\frac{1}{3}$ After this step i dont know what to do , do i divide by 3 to produce 1? Please answer in step .Thank you Here's your problem! du is NOT equal to 3, du= 3dx. And from that it should be easy to see that dx= (1/3)du. After this step i dont know what to do , do i divide by 3 to produce 1? Please answer in step .Thank you I have no idea what you could mean by "do i divide by 3 to produce 1?". Divide what by 3? "Produce 1" where and how? The more precise you are in mathematics, the bettter. Practise being precise. $u=3x+5$ then So, we have Here's your problem! du is NOT equal to 3, du= 3dx. And from that it should be easy to see that dx= (1/3)du. I have no idea what you could mean by "do i divide by 3 to produce 1?". Divide what by 3? "Produce 1" where and how? The more precise you are in mathematics, the bettter. Practise being precise. $\int \frac{1}{4} (3x+5)^\frac{4}{3} +C$ Is my answer Correct ? It might help to write it out like this $u = 3x + 5$ so $\frac{du}{dx} = 3$. Now work on the integral $\int{(3x + 5)^{\frac{1}{3}}\,dx} = \frac{1}{3}\int{(3x + 5)^{\frac{1}{3}} \cdot 3\,dx}$ $= \frac{1}{3}\int{u^{\frac{1}{3}}\,\frac{du}{dx}\,dx }$ $= \frac{1}{3}\int{u^{\frac{1}{3}}\,du}$. Might be an extra step but easier to avoid making mistakes... It might help to write it out like this $u = 3x + 5$ so $\frac{du}{dx} = 3$. Now work on the integral $\int{(3x + 5)^{\frac{1}{3}}\,dx} = \frac{1}{3}\int{(3x + 5)^{\frac{1}{3}} \cdot 3\,dx}$ $= \frac{1}{3}\int{u^{\frac{1}{3}}\,\frac{du}{dx}\,dx }$ $= \frac{1}{3}\int{u^{\frac{1}{3}}\,du}$. Might be an extra step but easier to avoid making mistakes... $\frac{1}{2} (3x+5)^\frac{4}{3} +C$ Is my answer Correct ? $\frac{1}{3}\int{u^{\frac{1}{3}}\,du} = \frac{1}{3}\cdot\frac{3}{4}u^{\frac{4}{3}} + C$ $= \frac{1}{4}u^{\frac{4}{3}} + C$ $= \frac{1}{4}(3x + 5)^{\frac{4}{3}} + C$. substitution isn't needed in this question. If you remember this: if you have a derivative that is in the form: (ax+b)^n then the integral will be: [(ax+b)^n+1] / a(n+1) so just substitute your question into that rule. Don't try remembering lots of rules, it gets to be too much of a headache. It's easier to remember a few basic rules and to use substitution for the rest. Besides, the rule you speak of comes from substitution anyway. January 9th 2010, 05:42 AM #2 MHF Contributor Apr 2005 January 9th 2010, 05:43 AM #3 January 9th 2010, 05:46 AM #4 Jan 2009 January 9th 2010, 05:49 AM #5 January 9th 2010, 05:51 AM #6 Jan 2009 January 9th 2010, 05:57 AM #7 January 9th 2010, 07:08 AM #8 MHF Contributor Apr 2005 January 9th 2010, 08:10 PM #9 Jan 2010 January 9th 2010, 08:35 PM #10
{"url":"http://mathhelpforum.com/calculus/123004-indefinite-integrals.html","timestamp":"2014-04-16T15:09:23Z","content_type":null,"content_length":"70553","record_id":"<urn:uuid:246533dd-2e96-452a-a8b5-7f679a34934c>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Basic Arithmetic in Other Bases Learn arithmetic in other bases, from base 2 (binary) through base 16 (hexadecimal). Basic facts are provided in the addition and multiplication tables for single-digit numbers. A simple four-function calculator does computations involving larger numbers. The "CE" button deletes the last digit(s) of the number currently being entered. You can change the operation before clicking "=". Contributed by: Marc Brodie (Wheeling Jesuit University)
{"url":"http://demonstrations.wolfram.com/BasicArithmeticInOtherBases/","timestamp":"2014-04-21T04:36:29Z","content_type":null,"content_length":"41728","record_id":"<urn:uuid:3b5ac8fe-df0e-4741-a8cf-1639462bd7be>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: How to calculate the partial derivative? Replies: 4 Last Post: Nov 23, 2012 3:35 AM Messages: [ Previous | Next ] Re: How to calculate the partial derivative? Posted: Nov 23, 2012 3:30 AM Dear all, I am trying to calculate the partial derivative by mathematica, I have the following commands: I got the following result: DNex=b2 (a1+b1 x+c1 y)+b1 (a2+b2 x+c2 y) How to do to have the following result? DNex=b2*L1 + b1 * L2 Tang Laoya Hi, Tang, Try the following. This is your definition and I denote the derivative result by expr: a2 b1+a1 b2+2 b1 b2 x+b2 c1 y+b1 c2 y What you want is to exclude x and y from the final expression. Let us denote L1 and L2 by l1 and l2, and find x and y from the first and the second equations: sol = Solve[{l2 == a2 + b2*x + c2*y, a1 + b1*x + c1*y == l1}, {x, {x -> -((a2 c1 - a1 c2 + c2 l1 - c1 l2)/(b2 c1 - b1 c2)), y -> -((-a2 b1 + a1 b2 - b2 l1 + b1 l2)/(b2 c1 - b1 c2))} Now substitute it into the result: expr /. sol // Simplify b2 l1 + b1 l2 Have fun, Alexei Alexei BOULBITCH, Dr., habil. IEE S.A. ZAE Weiergewan, 11, rue Edmond Reuter, L-5326 Contern, LUXEMBOURG Office phone : +352-2454-2566 Office fax: +352-2454-3566 mobile phone: +49 151 52 40 66 44 e-mail: alexei.boulbitch@iee.lu Date Subject Author 11/22/12 Re: How to calculate the partial derivative? Bob Hanlon 11/23/12 Re: How to calculate the partial derivative? Alexei Boulbitch 11/23/12 Re: How to calculate the partial derivative? Bob Hanlon 11/23/12 Re: How to calculate the partial derivative? Bob Hanlon
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2416389&messageID=7927080","timestamp":"2014-04-17T18:28:57Z","content_type":null,"content_length":"20922","record_id":"<urn:uuid:b8712343-ee5c-47cd-885e-8d9ffe6a9615>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
East Brunswick Math Tutor Find an East Brunswick Math Tutor ...Through my education and teaching experience, I've gained deep insight into how to best motivate students to create strong habits and schedules for themselves. I'm trained in a character education/study habit/life skills program entitled Quantum Learning. This is a research-based program that f... 22 Subjects: including algebra 1, algebra 2, vocabulary, grammar ...My availability is quite open in Old Bridge, NJ and surrounding towns. I know I can help you. I have have had 13 years in the public school as a Math teacher. 6 Subjects: including algebra 1, algebra 2, geometry, prealgebra I specialize in EMERGENCY high-stakes Test Prep. My areas of expertise include EMERGENCY Test Prep for PSAT, SAT, ACT, GMAT, and MCAT. What is EMERGENCY high-stakes Test Prep (EHSTP)? EHSTP is literally what it sounds like. 23 Subjects: including algebra 1, ACT Math, MCAT, geometry ...I presently tutor two students from my church ages 9 and 16; I tutor anything that they need help in but particularly English, math, Spanish, history. I work with kids of any age from kindergarten through 12th grade as I am a substitute teacher with the Franklin Twp. school district. As you kno... 4 Subjects: including algebra 1, geometry, prealgebra, ESL/ESOL ...Services Available*: SAT, SAT 2, AP Subjects and Tests, College Admissions Counseling, College Application Preparation LSAT, Law School Admissions Counseling, Law School tutoring, Bar Examination Preparation. *Other specific subjects available upon request. About me: Princeton Graduate, Distin... 34 Subjects: including algebra 2, algebra 1, prealgebra, SAT math
{"url":"http://www.purplemath.com/east_brunswick_math_tutors.php","timestamp":"2014-04-16T13:37:30Z","content_type":null,"content_length":"23767","record_id":"<urn:uuid:fe423939-0779-4338-a4bd-47190a7537ae>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
Stochastic Dynamics Stochastic Turbulence Theory Applied to Quasigeostrophic Turbulence The goal of the project on stochastic turbulence theory is to develop understanding of fully developed synoptic scale turbulence in baroclinic jets making use of stochastic dynamics to model the highly non-normal dynamical system underlying the turbulence. Farrell, B.F., and P. J. Ioannou, 1993: Stochastic dynamics of baroclinic waves., J. Atmo. Sci. , 50, 4044-4057. pdf Farrell, B.F., and P. J. Ioannou, 1994: A theory for the statistical equilibrium energy and heat flux produced by transient baroclinic waves. J. Atmos. Sci., 51, 2685-2698. pdf DelSole, T. M., and B. F. Farrell 1995: A stochastically excited linear system as a model for quasigeostrophic turbulence: analytic results for one- and two-layer fluids. J. Atmos. Sci., 52, 2531-2547. pdf Farrell, B.F., and P. J. Ioannou, 1995: Stochastic dynamics of the mid-latitude atmospheric jet. J. Atmos. Sci., 52, 1642-1656. pdf DelSole, T. M., and B. F. Farrell 1996: The quasilinear equilibration of a thermally maintained, stochastically excited jet in a quasigeostrophic model. J. Atmos. Sci. , 53, 1781-1797. pdf Farrell, B. F., and P. J. Ioannou, 2004: Sensitivity of Perturbation Variance and Fluxes in Turbulent Jets to Changes in the Mean Jet. J. Atmos , Sci., 61, 2644–2652. pdf
{"url":"http://www.fas.harvard.edu/~epsas/dynamics/stochastic%20quasigeostrophic/index.html","timestamp":"2014-04-19T17:04:47Z","content_type":null,"content_length":"7238","record_id":"<urn:uuid:a9da1cc0-9f17-4ede-a45b-2cb6eb2396ff>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
Calca — The Math App That Understands Your Writing I’ll never forget the first time I installed Mathematica in college. I was excited by the demos, and wanted to see how much it could help me take my calculus knowledge further — and take the drudgery out of math. Turns out, it was far more complicated to use than I ever anticipated, even more so than my trusty TI-89. Couldn’t CAS — computer algebra systems — be a bit less complex and more accessible to everyone who doesn’t have time to take a whole class on using them? Computers were designed originally to solve complex math, but normal calculators, spreadsheets, and CAS systems have remained too basic on the one end and too complex on the other to change the way most of us feel about math. It’s more than understandable that we’d tend to be skeptical when a new app claims to make math simpler for everything from engineering to basic budgets at the same time — but that’s exactly what Calca claims. It’s a markdown text editor fused with a CAS; can it possibly be the answer to the frustrations of math? Calculated Writing Calca at first glance would seem to be a text editor more than a math tool, but dig deeper and it’s easily more of the latter than the former. But it’s not a half-bad text editor at that, complete with Markdown support that’ll show the formatting as you add it and makes links clickable. Everything — including the calculated numbers — are saved in your Calca document in plain text format with a .txt extension, so you can open your notes and calculations in any app or share the finished document with anyone even if they’re not on a Mac. But that’s not the best part. The best part is Calca’s brilliant math engine that lets you type out equations just as you’d solve them on paper, and then it’ll go ahead and solve them for you when you type the function name followed by =>. You can write everything out in words, as in the examples above, defining variables naturally, and then ask it what the final answer is. 99.9% of the time, it’ll give you back exactly what you’re looking for (and the other 0.1% of the time, you’ll realize that you’ve messed something up, declaring a variable twice or mistyping something). Calca is very easy to use. Essentially, you can declare a variable by using any normal word or phrase, followed by an = and the value or equation it’s equal to. This can be something simple, such as the things that are in your budget, or it can be a standard f(x)= algebra function. Then, you can see the final value of your variable by typing your variable followed by =>. Anything in bold black is a variable, anything blue is a number value, and anything with a grey background is a result that’s been generated by Calca. Then, if you need more info, you can find out numerical facts from Google directly in the app. Just type in “USD to Euro exchange rate” or “distance from earth to sun” or anything else you want to find, then type =? and Calca will find the answer for you from Google. You can then use that in your following equations. It won’t find everything, but I’ve already found it powerful and useful. Calca goes far beyond the basic math you’d think of at first with an app like this, and can do everything from compute logs, solve matrices, compute basic logic statements and for statements, solve functions for a variable, or even just simplify equations as much as it can. Just look through the examples on the Calca site to see what it can do — it’s one powerful app. Haven’t We Seen This Before? It’d be impossible to hear about Calca without thinking of Soulver, the original text-based simple calculator for the Mac. There’s a lot of similarities, but Calca is definitely the more powerful of the two. Soulver is designed to keep things simple, with calculation bar on the right that automatically shows the value of each line, and a sum at the bottom. You can use variables and solve simple functions with it, depending on how you set them up, but its primary purpose is more ordinary calculations such as budgets that end with a tally at the bottom. Soulver is likely simpler to get started with, but it can be confusing in its own right, and I’d tend to think most people who’d like Soulver would equally like Calca. You may miss Soulver’s quick conversions, though, and if you’re looking for the simplest way to do quick text-based math that mainly involves sums and conversions, it still can come out on top. Calca’s Markdown text formatting, built-in Google search function, and far cheaper price tag, though, make it more attractive, even aside from the advanced math features. Calca is an incredibly promising new way to work with math and text together on your Mac, one that’s even more surprising than FoldingText‘s text-based timer and other plain text innovations we’ve seen recently. It’s really, really impressive, and is an app you’ll have to try out if you use math often at all. It’s already got a companion iPad app — one that actually came slightly before the Mac version — and iCloud sync, so it’s one of the best ways to calculate and keep your thoughts straight at the same time, anywhere you are. A Markdown text editor and computer algebra system happily married together, Calca lets you compute naturally. A text editor for engineers, one that's still a math app for the rest of us.
{"url":"http://mac.appstorm.net/reviews/productivity-review/calca-the-math-app-that-understands-your-writing/","timestamp":"2014-04-18T03:15:13Z","content_type":null,"content_length":"43543","record_id":"<urn:uuid:bc21b9d0-7b33-4eba-bb49-33deeb839b46>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
Electricity – Notes Basic Ideas Electric charge is a fundamental quantity like mass, distance, or time. Charge is observable and measurable by the force it exerts on other charges. There are two types of charges: positive and negative. Like charges repel one another: positive repels positive, negative repels negative. Opposite charges attract one another: positive attracts negative, negative attracts positive. The variable q or Q is used to represent an amount of charge (may be positive or negative). The SI unit of charge is the Coulomb, abbreviated C. Since all matter contains protons and electrons, there is charge present (often in great quantities) in every object. Typically, however, the number of protons in an object essentially equals the number of electrons – adding the amounts of charge gives a total of zero – the object is said to have no net charge and to be neutral. An object is charged or has a net charge when there are unequal numbers of protons and electrons. This occurs almost always as a result of electrons being transferred to or from an object. Charge is conserved! The net amount of electric charge produced in any process is zero. Means of becoming charged: Conduction is the transfer of charge from one object to another, usually as a result of contact. Induction involves the rearrangement of charge within an object due to the presence of an external charge (or electric field). There is no contact. Insulators vs. Conductors Both insulators and conductors can possess charge. Key difference: charge can travel freely through conductors. Charge cannot travel freely through insulators, but rather tends to be “locked” in place. Explanation: It is now known that electrons are carriers of charge. In conductors (metals) the electrons are not tightly bound to nuclei and easily “roam” from one atom to another. In insulators (e.g. rubber, plastic, wood, etc.) electrons are held more tightly in orbits around nuclei and do not easily move from one atom to another. Coulomb’s Law The force one charge, q[1], exerts on another, q[2], has a magnitude given by: F = k q[1] q[2 ] where r is the distance between q[1] and q[2] and k is a constant. The direction of this force is either toward or away from the other charge – depending on whether it is attraction or repulsion. k = 9.0 ´ 10^9 N m^2/C^2 Quantization of Charge The smallest possible amount of charge is that on an electron or proton. This amount is called the fundamental or elementary charge, e. e = 1.602 ´ 10^-19 C An electron has charge: q = -e = -1.602 ´ 10^-19 C A proton has charge: q = +e = 1.602 ´ 10^-19 C Furthermore, any amount of charge greater than the elementary charge is an exact integer multiple of the elementary charge! Weird, eh? q = ne, where n is an integer For this reason, charge is said to be “quantized”. It comes in quantities of 1.602 ´ 10^-19 C. Electric Potential V = ^W/[q] or V = ^E/[q] V = Electric potential (A.K.A. Voltage, Potential Difference, Electromotive Force or EMF) W = Work done to move charge between two points E = Potential energy due to position (separation) of charge q = Amount of charge SI unit for electric potential: the Volt 1 Volt = ^1 Joule/[1 Coulomb] or V = ^J/[C] (The number of volts indicates the number of joules work or energy per coulomb.) Electric Current I = ^Q/[t] I = electric current – rate at which charge flows in a certain pathway Q = amount of charge flowing past a certain point t = time SI unit for electric current: the Ampere 1 Ampere = ^1 Coulomb/[1 second] or A = ^C/[s] (The number of amperes indicates the number of coulombs of charge flowing per second.) Electric Power P = VI The power for an electrical device is equal to the voltage times the current. This value will indicate (in Watts) the rate at which the device transforms energy. R = ^V/[I] more commonly written as: V = IR (known as Ohm’s Law) R = electric resistance (this is the “resistance” to flow of charge through a device) An object or device with greater resistance will require a greater voltage to produce a certain amount of current. SI unit for resistance: the Ohm 1 Ohm = ^1 Volt/[1 Ampere] or W = ^V/[A] (The number of ohms indicates how many volts are required to produce 1 ampere of current.) For certain materials and devices the resistance will be constant over a wide range of voltages and currents; such a device or material is said to be “ohmic”. Ohmic materials include metals like copper, silver, etc. Common carbon based resistors are also ohmic. Nonohmic devices have a resistance that changes depending on voltage and current. A light bulb filament is nonohmic because its resistance increases as its temperature increases. Semiconductors, such as diodes and transistors, and electric motors are also nonohmic.
{"url":"http://www.farraguttn.com/science/milligan/physics/ElecNote.htm","timestamp":"2014-04-21T14:48:45Z","content_type":null,"content_length":"28985","record_id":"<urn:uuid:0b5f0347-172c-4568-b5a8-9a01f85bcbcd>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
pdf search for "who number. this belong to" (Page 1 of about 1,480,000 results) CLASSIFICATION OF NUMBERS - Calculus Support Page.pdf In general an irrationl number, is a number that can not be expressed as a ratio of any other two numbers. For example: ... point that belong to all of them ... Read Down Square Roots; Number and Number Sense; 7 - VDOE.pdf Reporting Category Number and Number Sense Topic Determining square roots ... o Which number does not belong: 81, 99, 100, and 121? Why? Journal/Writing Prompts ...Read Down Which One Doesn’t Belong?.pdf Which One Doesn’t Belong? Unit 3: Fun With Shapes Grade Level Grade 1 Overview In this task, students will become familiar with the different ways to classify shapes by ... Read Down
{"url":"http://www.freedocumentsearch.com/pdf/who-number.-this-belong-to.html","timestamp":"2014-04-17T19:00:03Z","content_type":null,"content_length":"14679","record_id":"<urn:uuid:797b7ce4-23f4-473f-831d-fd73fb87b799>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
Whippany Math Tutor Find a Whippany Math Tutor ...I have worked with students from elementary school to college level classes and have made a huge difference in their lives. Give me a chance, and I am positive that you will love the results that you see. You will see your your grades go up from the very first meeting. 13 Subjects: including discrete math, linear algebra, algebra 1, algebra 2 ...I scored a perfect 800 on the SAT Critical Reading, Mathematics, and Writing tests, and I've been helping students improve their scores for over ten years. I love tutoring, and I make each lesson dynamic, engaging, and enjoyable. I've worked with students of many different backgrounds, including non-native English speakers and students with learning disabilities. 10 Subjects: including SAT math, ACT Math, GMAT, SAT writing Hello,My goal in tutoring is to develop your skills and provide tools to achieve your goals. My teaching experience includes varied levels of students (high school, undergraduate and graduate students).For students whose goal is to achieve high scores on standardized tests, I focus mostly on tips a... 15 Subjects: including algebra 1, algebra 2, calculus, chemistry ...I am confident that given the opportunity for a position, I will be a valuable member to any family. Thank you for your time and I look forward to speaking with you at your convenience. Danielle C. 33 Subjects: including ACT Math, SAT math, English, reading ...Mastery of the subject.5. Organization, planning and execution.6. Empathy and genuine concern for students.7. 17 Subjects: including algebra 1, geometry, SAT math, reading
{"url":"http://www.purplemath.com/whippany_nj_math_tutors.php","timestamp":"2014-04-21T02:16:30Z","content_type":null,"content_length":"23497","record_id":"<urn:uuid:8216bcff-3e38-4fd5-8c1f-b96dd985d2b2>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
Math we can’t do There are lots of humbling aspects of being a parent of a teenager. Among them, is the uneasy realization that they are getting smarter than you. Proof of that comes to me daily in the SAT Question of the Day. With my son starting to look at schools and getting ready for the ACT, I’ve been reading the question of the day for the SAT and the ACT the past few months. Bottom line: 32 years after high school, I’m still pretty good at understanding what I read — and I still stink at math. Here’s a recent SAT question of the day where I didn’t know where to start. Read the following SAT test question and then click on a button to select your answer. If S is the set of positive integers that are multiples of 7, and if T is the set of positive integers that are multiples of 13, how many integers are in the intersection of S and T? (A) None (B) One (C) Seven (D) Thirteen (E) More than thirteen I’m buoyed by the fact that only 40 percent of the 231,758 who tried this question got it right. For all the agonizing we do about public education, I’m pretty sure the expectations and demands schools place on our children when it comes to math are harder than what I dealt with as a teen. Bonus: Answer is E. hardly even a math question so much as one of logic… if you have numbers going out to infinity then you are bound to have more then 13. Yeah – not to pick on you, Paul – but this is more of a quetion of math terminology recognition and logic. A rephrase might look like this: “If a group of numbers named “S” included only multiples of 7 and are greater than 0, and a group of numbers named “T” included only multiples of 13 and are greater than zero, how many numbers would S and T have in common?” If S and T include an infinite group of numbers, they must have infinite numbers in common – or, more than 13. I went to a public school and got 6/7 right. I might swing the other way and say grammar is something not focused on, but I might be bitter about missing the sovereign question. When I was a few years out of college, and more than 8 years past my last math class, I decided to go to graduate school. This meant taking the GRE and reteaching myself a lot of math, and learning some that I never learned in the first place, like geometry and trigonometry. It wasn’t fun and I forgot it all within a week of taking the test. Rarely do I need to do even basic arithmetic without the help of a calculator. I use my high school Spanish skills and geography knowledge more often than I use any of the math I learned in high school. In fact, I don’t think I use any math beyond what I learned in elementary school ever. I’m not saying math isn’t important. It is very important and students should be expected to learn advance math skills. But I’m not going to feel bad, or stupid, for not knowing how to answer an SAT math question. It just isn’t relevant in my daily life. My brain is too busy trying to remember my bank PIN and my co-workers’ names to keep track of these kind of things. It’s a good problem in that it tests problem solving skills, which are more important than arithmetic. If somebody asked you straight away, “How many numbers are multiples of both 7 and 13?”, you’d probably get it right away — every multiple of 7*13 works, and there are infinitely many of those. So what’s being tested here is the ability to turn a problem that looks complicated into one you can think about, and then take a few seconds to solve that one. I teach math at a university, and I can tell you I’d much rather have that sort of cognitive maturity in a student than the basic algebra skills these things usually measure. To clarify — because this seems to be a big mystery out there — we teach students math because it gives them problem solving skills. It’s sort of like, I do 15 pushups every morning, but not because I think pushups are an important life skill — it just gives me the strength and energy to do lots and lots of other things that are actually important. Joey – Nice analogy. If it’s yours, I’ll bet you’re not getting paid enough. About the blogger Paul Tosto Paul Tosto is a Web Editor for MPR News.
{"url":"http://blogs.mprnews.org/newscut/2012/05/math_i_cant_do/","timestamp":"2014-04-20T20:56:14Z","content_type":null,"content_length":"41471","record_id":"<urn:uuid:dbe62f4a-5e32-47c1-ad95-84b89b5a6bde>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
Elizabeth, NJ Math Tutor Find an Elizabeth, NJ Math Tutor ...As for teaching style, I feel that the concept drives the skill. If you have the idea of what to do on a problem, you do not need to complete 10 similar problems. As such, I like to spend more time on the why than the what. 26 Subjects: including trigonometry, linear algebra, logic, ACT Math As a Chemistry and Math tutor, I am excited and committed to motivate students in order for them to succeed. After a successful career as a Ph.D. Chemist in the Pharmaceutical Industry, I mentored several chemists, and my experience is a great asset in my tutoring sessions. 7 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...Hopefully, you will give me a chance at this subject area. For Regents Algebra: I have a 99% pass for the Regents Algebra including students from Wyzant. I am detail oriented. 47 Subjects: including statistics, SAT math, accounting, writing ...My approach to mathematics tutoring is creative and problem-oriented. I focus on proofs, derivations and puzzles, and the natural progression from one math problem to another. My problem-solving skills were honed while training for the 40th International Mathematical Olympiad in Bucharest, Romania, at which I won a Bronze Medal. 9 Subjects: including discrete math, algebra 1, algebra 2, calculus ...My name is William and I've been tutoring the SAT for the past 5 years. As a young guy myself, I work very well with high school students and aim to go above and beyond by giving advice on scholarship and college applications. Whenever working with students, I give the first session free so that students can get accustomed to my teaching style. 9 Subjects: including algebra 1, algebra 2, geometry, prealgebra Related Elizabeth, NJ Tutors Elizabeth, NJ Accounting Tutors Elizabeth, NJ ACT Tutors Elizabeth, NJ Algebra Tutors Elizabeth, NJ Algebra 2 Tutors Elizabeth, NJ Calculus Tutors Elizabeth, NJ Geometry Tutors Elizabeth, NJ Math Tutors Elizabeth, NJ Prealgebra Tutors Elizabeth, NJ Precalculus Tutors Elizabeth, NJ SAT Tutors Elizabeth, NJ SAT Math Tutors Elizabeth, NJ Science Tutors Elizabeth, NJ Statistics Tutors Elizabeth, NJ Trigonometry Tutors Nearby Cities With Math Tutor Elizabethport, NJ Math Tutors Elmora, NJ Math Tutors Hillside, NJ Math Tutors Irvington, NJ Math Tutors Linden, NJ Math Tutors Midtown, NJ Math Tutors Newark, NJ Math Tutors North Elizabeth, NJ Math Tutors Parkandbush, NJ Math Tutors Peterstown, NJ Math Tutors Roselle Park Math Tutors Roselle, NJ Math Tutors Union Center, NJ Math Tutors Union Square, NJ Math Tutors Union, NJ Math Tutors
{"url":"http://www.purplemath.com/Elizabeth_NJ_Math_tutors.php","timestamp":"2014-04-20T13:23:41Z","content_type":null,"content_length":"23851","record_id":"<urn:uuid:caa2a0a4-5d19-4373-8661-48ef1b172cbd>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Return to List Tangents and Secants of Algebraic Varieties &nbsp &nbsp &nbsp &nbsp &nbsp &nbsp &nbsp Translations of During the last twenty years algebraic geometry has experienced a remarkable shift from development of abstract theories to investigation of concrete properties of projective Mathematical varieties. Many problems of classical algebraic geometry centered on linear systems, projections, embedded tangent spaces, and so on. Use of modern techniques has made it possible Monographs to make progress on some of these problems. Following these themes, this book covers these topics, among others: tangent spaces to subvarieties of projective spaces and complex tori, projections of algebraic varieties, classification of Severi varieties, higher secant varieties, and classification of Scorza varieties over an algebraically closed field of 1993; 164 pp; characteristic zero. Research mathematicians. Volume: 127 • Theorem on tangencies and Gauss maps Reprint/Revision • Projections of algebraic varieties History: • Varieties of small codimension corresponding to orbits of algebraic groups • Severi varieties reprinted 2005 • Linear systems of hyperplane sections on varieties of small codimension • Scorza varieties ISBN-10: • References 0-8218-3837-7 • Index of notations List Price: US$68 Member Price: Order Code: MMONO
{"url":"http://ams.org/bookstore?fn=20&arg1=mmonoseries&ikey=MMONO-127-S","timestamp":"2014-04-17T19:14:00Z","content_type":null,"content_length":"14851","record_id":"<urn:uuid:b69bdea8-87ed-438d-965c-1dfb7ecae4cc>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Marking Up Mathematical Symbols From: Alexey Beshenov <al@beshenov.ru> Date: Wed, 12 Sep 2007 19:01:33 +0400 To: "Andy Laws" <adlaws@gmail.com> Cc: WAI Interest Group list <w3c-wai-ig@w3.org> Message-Id: On Wednesday 12 September 2007 17:38, you wrote: > Can any one please advise on best practices for marking up Mathematical > Symbols such as ------> ø Unicode contains most of the needed symbols. I think you mean mathematical expressions, not only separate operators/signs. You should use proper markup language for mathematical content, MathML. It's not widely supported (as far as I know, by Geckos native engine and MSIE standalone plugins only), but it's supposed to be the most accessible If you need to support users with old graphical browsers, you can use images. Good idea is to prepare them with LaTeX (as far as I know, there are no better rendering engine) and use LaTeX code as alternative text. There are some XSLT solutions for LaTeX to MathML conversion, so there are no need to work with LaTeX manually. Anyway, it's better to suggest users to choose from MathML and images. See the W3C MathML FAQ http://www.w3.org/Math/mathml-faq.html for more information on MathML (note that some information on implementations isn't P.S. It seems that your message contains typo. 'ø' is the U+00F8 "LATIN SMALL LETTER O WITH STROKE" (which is not a mathematical symbol), I think you want to use the U+2205 "EMPTY SET". Alexey Beshenov <al@beshenov.ru> Received on Wednesday, 12 September 2007 15:01:52 GMT This archive was generated by hypermail 2.2.0+W3C-0.50 : Tuesday, 19 July 2011 18:14:27 GMT
{"url":"http://lists.w3.org/Archives/Public/w3c-wai-ig/2007JulSep/0110.html","timestamp":"2014-04-21T03:03:38Z","content_type":null,"content_length":"9909","record_id":"<urn:uuid:7e9a4cc9-d906-4e21-bf21-95dcc28095b5>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
REU Site: Interdisciplinary Program in High Performance Computing ┃ Team Members: │ Jeremy Bejarano^1, Koushiki Bose^2, Tyler Brannan^3, and Anita Thomas^4 ┃ ┃ Faculty Mentors: │ Kofi Adragni^5 and Nagaraj K. Neerchal^5 ┃ ┃ Client: │ George Ostrouchov^6 ┃ ^1Brigham Young University , Provo, UT ^2Brown University , Providence, RI ^3North Carolina State University , Raleigh, NC ^4Illinois Institute of Technology , Chicago, IL ^5University of Maryland, Baltimore County , Baltimore, MD ^6Oak Ridge National Laboratory , Oak Ridge, TN Team 2, from left to right: Jeremy Bejarano, Koushiki Bose, Anita Thomas, Tyler Brannan Our team of undergraduates, Jeremy Bejarano, Koushiki Bose, Tyler Brannan, and Anita Thomas, researched methods to incorporate sampling within the standard k-means algorithm to cluster large data. The research took place during the summer of 2011 at the University of Maryland, Baltimore County, as a part of their REU Site: Interdisciplinary Program in High Performance Computing. We were mentored by Dr. Kofi Adragni and also received help from Dr. Nagaraj Neerchal and Dr. Matthias Gobbert. The research was proposed by Dr. George Ostrouchov from Oak Ridge National Laboratory, and we also received guidance from him during the process. Due to advances in data collection technology, our ability to gather data has far surpassed our ability to analyze it. For instance, NASA's Earth Science Data and Information System Project collects three Terabytes (1 Terabyte = 1024 Gigabytes) of data per day, from seven satellites called the Afternoon Constellation. Datasets such as these are difficult to move or analyze. Analysts often use clustering algorithms to group data points with similar attributes. Our research focused on improving Lloyd's k-means clustering algorithm which is ill-equipped to handle extremely large datasets on even the most powerful machines. It must calculate N*k distances where N is the total number of points assigned to k clusters; these distance calculations are often time-intensive, since the data can be multi-dimensional. Dr. Ostrouchov suggested that sampling could ease this burden. The key challenge was to find a sample that was large enough to yield accurate results but small enough to outperform the standard k-means' runtime. We developed a k-means sampler and performed a simulation study to compare it to the standard k-means. We analyzed the tradeoff between speed and accuracy of our method against the standard, concluding that our algorithm was able to match accuracy with significantly decreased runtime. The standard k-means algorithm is an iterative method that alternates between two steps: classification and calculation of cluster centers. It begins with either user-specified centers or random centers. First, each data point is assigned to the cluster whose center is closest to it. Second, the mean (center) of each cluster is calculated. The algorithm iterates until convergence. Convergence occurs when no points are reassigned to a different cluster. Below is an illustration of k-means clustering. For display purposes, we plotted points as longitude by latitude. However, one could display the points by its attributes, e.g., relative humidity by pressure. The left plot gives the points colored by a temperature scale. Note, the color bar only applies to the left plot. The right plot shows the points in three groups (represented by red, green, and blue) clustered by temperature. We improve the algorithm's speed by introducing a sampling function within each of its iterations. The sample size required to achieve a confidence interval with a specified width for each cluster's center is calculated and used for the next iteration. Once the algorithm converges on the sample, all N points are classified, and k centers are obtained using those final classifications. We ran four sets of simulations with datasets of one, two, three, and four dimensions to compare the accuracy of the standard and sampler k-means. Accuracy is measured by the percentage of points correctly classfied. The table below lists results for a desired confidence interval of 95% with a width of 0.01 using over 119 million datapoints. For each dimension we present the best accuracy for 20 trials, along with the average and standard error (St. Err.) of the accuracies. Notice that within a single simulation, both algorithms have similar values for all three statistics reported. Accuracy Comparison │ │ One Dimension │ Two Dimensions │ Three Dimensions │ Four Dimensions │ │ │ Best │Average│St. Err.│ Best │Average│St. Err.│ Best │Average│St. Err.│ Best │Average│St. Err.│ │Standard│99.0679│99.0679│ 0.0000 │98.7610│98.7610│ 0.0000 │99.8563│76.1653│ 5.7822 │99.9719│69.9848│ 6.9862 │ │Sampler │99.0679│99.0675│ 0.0001 │98.7610│98.7611│ 0.0000 │99.8562│77.4223│ 5.4655 │99.9719│69.9910│ 6.9855 │ The plot illustrates the total runtime of all twenty trials in seconds versus number of dimensions for both the standard (blue line) and sampler (red line) algorithms. Note, the blue line shows a significant inrease in time for the standard algorithm on three and four dimensions. However, the red line shows the sampler's steady (almost linear) growth across dimensions. This difference between the two lines illustrates that as our datasets enter three and four dimensions, the sampler is approxmiately ten times faster than the standard, rather than twice as fast in one and two dimensions. Our sampler algorithm used approximately 0.5% of the datapoints. Hence, it achieves substantial speedup without losing accuracy compared to the standard k-means algorithm. The best accuracies are virtually the same for both algorithms; the averages and standard errors are also comparable. Thus, the sampler meets the criterion of matching the standard k-means in accuracy. Furthermore, our sampler algorithm always has a smaller runtime than the standard, and the difference grows significantly for three and four dimensions. Thus, our algorithm also meets the criterion of significantly decreasing runtime. Therefore, analysts should use our sampler algorithm, expecting accurate results in considerably less time. Jeremy Bejarano, Koushiki Bose, Tyler Brannan, Anita Thomas, Kofi Adragni, Nagaraj K. Neerchal, and George Ostrouchov. Sampling within k-Means Algorithm to Cluster Large Datasets. Technical Report HPCF-2011-12, UMBC High Performance Computing Facility, University of Maryland, Baltimore County, 2011. Reprint in HPCF publications list Poster presented at the Summer Undergraduate Research Fest (SURF) Participant Anita Thomas Presents in Undergraduate Mathematics Symposium Click here to view Team 1's project Click here to view Team 3's project Click here to view Team 4's project
{"url":"http://www.umbc.edu/hpcreu/2011/projects/team2.html","timestamp":"2014-04-20T11:34:35Z","content_type":null,"content_length":"18108","record_id":"<urn:uuid:f9a144c3-8571-44f0-ba72-4bb3352748b9>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
What prerequisites do I need to read the book Ricci Flow and the Poincare Conjecture, published by CMI MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required. As mentioned in the title, I want to understand the proof of Poincare Conjecture by Perelman, what prerequisites do I need? up vote 6 down vote favorite gt.geometric-topology ricci-flow reference-request show 4 more comments As mentioned in the title, I want to understand the proof of Poincare Conjecture by Perelman, what prerequisites do I need? If I were going there I wouldn't start from here. If you're new to 3-manifolds, it might better to familiarise yourself with them intimately before starting on Perelman's work. In fact, learning some knot theory (in particular Dehn surgery) would be a good first step. I don't remember where I first learned this stuff, but I do remember sitting on the floor in the library in front of the low-dimensional topology section and looking at lots of books (perhaps a better search mechanism than Google when you're not quite sure what you're looking for). One good such book is Rolfsen's "Knots and Links". I remember being very happy when I worked out why $S^1\times S^2$ is the result of doing 0-surgery on $S^3$ (there's a nice picture). Maybe using the Wirtinger presentation and van Kampen's theorem to compute the fundamental group of the Poincaré sphere would be a good exercise to convince yourself you understand what's going on with Dehn surgery. The basic observation in all of this is that the 3-sphere is the union of two solid tori (or indeed of two handlebodies of arbitrary genus). If that grabs your imagination then a good step would be to convince yourself that every 3-manifold can be presented as (a) a Heegaard splitting, (b) a sequence of Dehn surgeries on the 3-sphere. This uses the Lickorish theorem (that the mapping class group of a surface is generated by Dehn twists) and that will lead you into studying 2-manifolds (see Farb and Margalit's book on mapping classes for an excellent presentation). up vote 11 When you have convinced yourself that the classification of 3-manifolds is an interesting and worthwhile subject then there are Hatcher's survey, Allen Hatcher's notes on 3-manifolds and down vote Hempel's book (amongst other places). You could have a look at Stalling's "How not to prove the Poincaré conjecture" (available on his website) and maybe at the proof of the Poincaré accepted conjecture in high dimensions (either Smale's original paper or Milnor's wonderful h-cobordism theorem book) to get an idea of what you're missing by living in three dimensions. Perelman's approach comes from a completely different world to any of this: the world of Thurston's geometrisation conjecture. Thurston's book introduces some of these ideas (with an emphasis on the hyperbolic) and his papers are full of beautiful insights. Once you have at least some familiarity with this stuff you could reasonably crack open a book on Ricci flow and start learning about that, but be warned that it won't necessarily bear much resemblance to anything else you've read about 3-manifolds. Of course you don't need all this background to understand Ricci flow, but at least you'll know what a 3-manifold is. I also stand by my comment that the best way to learn something is to pick up a difficult book containing something you would like to understand and then look stuff up as and when you need it. Google and Wikipedia are wonderful for quick reference but they are not an easy place to learn a subject thoroughly for the first time. Edit: As Deane Yang points out below, if you're more interested in Ricci flow itself, there may be better learning approaches. For instance, Chow and Knopf have a nice book in which they introduce Ricci flow and use it to prove the uniformisation theorem in two dimensions. They also cover Hamilton's theorem that a positively curved 3-manifold admits a metric of constant positive sectional curvature. These are both strictly easier than Perelman, while still involving hard differential geometry. Of course, you need to learn some differential geometry but there are plenty of good books about that. add comment If I were going there I wouldn't start from here. If you're new to 3-manifolds, it might better to familiarise yourself with them intimately before starting on Perelman's work. In fact, learning some knot theory (in particular Dehn surgery) would be a good first step. I don't remember where I first learned this stuff, but I do remember sitting on the floor in the library in front of the low-dimensional topology section and looking at lots of books (perhaps a better search mechanism than Google when you're not quite sure what you're looking for). One good such book is Rolfsen's "Knots and Links". I remember being very happy when I worked out why $S^1\times S^2$ is the result of doing 0-surgery on $S^3$ (there's a nice picture). Maybe using the Wirtinger presentation and van Kampen's theorem to compute the fundamental group of the Poincaré sphere would be a good exercise to convince yourself you understand what's going on with Dehn surgery. The basic observation in all of this is that the 3-sphere is the union of two solid tori (or indeed of two handlebodies of arbitrary genus). If that grabs your imagination then a good step would be to convince yourself that every 3-manifold can be presented as (a) a Heegaard splitting, (b) a sequence of Dehn surgeries on the 3-sphere. This uses the Lickorish theorem (that the mapping class group of a surface is generated by Dehn twists) and that will lead you into studying 2-manifolds (see Farb and Margalit's book on mapping classes for an excellent presentation). When you have convinced yourself that the classification of 3-manifolds is an interesting and worthwhile subject then there are Hatcher's survey, Allen Hatcher's notes on 3-manifolds and Hempel's book (amongst other places). You could have a look at Stalling's "How not to prove the Poincaré conjecture" (available on his website) and maybe at the proof of the Poincaré conjecture in high dimensions (either Smale's original paper or Milnor's wonderful h-cobordism theorem book) to get an idea of what you're missing by living in three dimensions. Perelman's approach comes from a completely different world to any of this: the world of Thurston's geometrisation conjecture. Thurston's book introduces some of these ideas (with an emphasis on the hyperbolic) and his papers are full of beautiful insights. Once you have at least some familiarity with this stuff you could reasonably crack open a book on Ricci flow and start learning about that, but be warned that it won't necessarily bear much resemblance to anything else you've read about 3-manifolds. Of course you don't need all this background to understand Ricci flow, but at least you'll know what a 3-manifold is. I also stand by my comment that the best way to learn something is to pick up a difficult book containing something you would like to understand and then look stuff up as and when you need it. Google and Wikipedia are wonderful for quick reference but they are not an easy place to learn a subject thoroughly for the first time. Edit: As Deane Yang points out below, if you're more interested in Ricci flow itself, there may be better learning approaches. For instance, Chow and Knopf have a nice book in which they introduce Ricci flow and use it to prove the uniformisation theorem in two dimensions. They also cover Hamilton's theorem that a positively curved 3-manifold admits a metric of constant positive sectional curvature. These are both strictly easier than Perelman, while still involving hard differential geometry. Of course, you need to learn some differential geometry but there are plenty of good books about that. My humble advice for learning about Ricci flow generally, after obtaining some background in Riemannian geometry, would be to start with a book which gets you to important results quickly. An excellent book is the one by Peter Topping. (The only typo I observed there is the one regarding backwards uniqueness, which is now due to Brett Kotschwar.) After that, there are excellent books on the differentiable spherical space form theorem by Brendle and Andrews--Hopper; see also the original papers of Boehm--Wilking and Brendle--Schoen. What is irreplaceable is to read and to reread the original works by the masters, i.e., Hamilton and Perelman. A collection of Ricci flow papers, mostly by Hamilton, is edited by H.D. Cao, up vote 3 etal.; this is a convenient place to get Hamilton's papers in one place. Perelman's papers are on arXiv. There are a number of excellent expositions of their work (focused on Perelman's down vote work), which actually go beyond expositions and include various degrees of original work, namely in alphabetical order: Bessieres--Besson--Boileau--Maillot--Porti, Cao--Zhu, Kleiner--Lott, and Morgan--Tian. The above remarks only pertain to Riemannian Ricci flow. add comment My humble advice for learning about Ricci flow generally, after obtaining some background in Riemannian geometry, would be to start with a book which gets you to important results quickly. An excellent book is the one by Peter Topping. (The only typo I observed there is the one regarding backwards uniqueness, which is now due to Brett Kotschwar.) After that, there are excellent books on the differentiable spherical space form theorem by Brendle and Andrews--Hopper; see also the original papers of Boehm--Wilking and Brendle--Schoen. What is irreplaceable is to read and to reread the original works by the masters, i.e., Hamilton and Perelman. A collection of Ricci flow papers, mostly by Hamilton, is edited by H.D. Cao, etal.; this is a convenient place to get Hamilton's papers in one place. Perelman's papers are on arXiv. There are a number of excellent expositions of their work (focused on Perelman's work), which actually go beyond expositions and include various degrees of original work, namely in alphabetical order: Bessieres--Besson--Boileau--Maillot--Porti, Cao--Zhu, Kleiner--Lott, and Morgan--Tian.
{"url":"https://mathoverflow.net/questions/89748/what-prerequisites-do-i-need-to-read-the-book-ricci-flow-and-the-poincare-conjec/89841","timestamp":"2014-04-17T19:10:49Z","content_type":null,"content_length":"73970","record_id":"<urn:uuid:55b45521-3408-498c-be21-bb34ef231466>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00243-ip-10-147-4-33.ec2.internal.warc.gz"}