Itasca, IL Geometry Tutor

Find an Itasca, IL Geometry Tutor

...I just completed my student teaching experience (teaching Algebra I and Algebra II) and will be certified June 2014. I have gained a lot of experience learning how to help students through the step-by-step process of thinking through math problems. I love to work with students on algebra, geometry, trigonometry, and precalculus! I have a degree in Mathematics from Augustana College.
7 Subjects: including geometry, algebra 1, algebra 2, trigonometry

...I was the top student in all my chemistry classes, so I have a clear understanding of all the concepts to do with chemistry. I will be able to help you or your child to understand these concepts, using real-life examples, and will also be able to coach you in the techniques which will enable you to answer them every time. I tutor because I love working with children.
20 Subjects: including geometry, chemistry, physics, GRE

...Inverse Trigonometric Functions. Trigonometric Equations. POLAR COORDINATES AND VECTORS.
17 Subjects: including geometry, reading, discrete math, GRE

...The high success rate stems from the one-on-one attention that I devote to each student and my expertise in the area of mathematics. If you want A's and to develop good study habits, then I am your person. What I can offer: I can help you get ahead for next year's math/science, or simply review to solidify the material learned this year.
11 Subjects: including geometry, biology, precalculus, algebra 2

...While I was in college, I worked at my old high school as a substitute teacher. I also tutored other college students in English and Math. I took a semester off to fill in for a friend of mine who taught English and Reading at a private school.
19 Subjects: including geometry, reading, algebra 1, algebra 2
Eventologically multivariate extensions of probability theory's limit theorems

Vorobyev, Oleg Yu. and Golovkov, Lavrentyi S. (2009): Eventologically multivariate extensions of probability theory's limit theorems. Published in: Proc. of VIII Intern. FAM Conf., Vol. 1 (23 April 2009): pp. 35-39. Download (597Kb) | Preview

Abstract: Eventologically multivariate extensions of probability theory's limit theorems are proposed. The eventological multivariate version of the limit theorems extends their classical probabilistic interpretation and incorporates into its structure the dependencies of an arbitrary set of events appearing in a sequence of independent trials.

Item type: MPRA Paper. Language: English. Keywords: event, probability, set of events, Bernoulli univariate test, Bernoulli multivariate test, eventological distribution, multivariate discrete distribution, limit theorem. Subjects: C - Mathematical and Quantitative Methods > C0 - General. Item ID: 22576. Deposited by Oleg Vorobyev, 09 May 2010 14:29; last modified 16 Feb 2013 04:12.

URI: http://mpra.ub.uni-muenchen.de/id/eprint/22576
Number Bases and Leading Zeros

One thing that can cause problems when writing Javascript is where you have a date that has been entered and you are splitting it up to convert the separate day and month portions into numbers for subsequent processing. Where these numbers are always entered as two digits, the '08' and '09' values often do not get converted into numbers in the way that you would expect. Using parseInt('09') gives you zero as the number that is returned rather than the 9 that you expect.

Everyone is familiar with the base 10 (or decimal) way of counting, where we have ten separate symbols to represent different values - 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. To get values greater than 9 (the largest number having a separate symbol) we start combining our symbols together and use 10 to represent the number one bigger than 9.

At this point you are probably wondering what the above two paragraphs have in common and why I have explained something as obvious as our numbering system the way that I have. The answer is that the decimal numbering system is not the only possible numbering system, and is in fact not the one that computers themselves use. While we prefer to count using the decimal system, computers prefer to count using the binary (base 2) system. Binary has only two symbols to represent different values - 0 and 1. To get values greater than 1 we combine symbols together the same way that we do in our decimal system, and so in binary 10 represents the number one bigger than 1 (which is represented with 2 in our decimal system). Even 1001 is not a very big number in binary, as it can still be represented with just one symbol in decimal, since it is the binary equivalent of 9. The biggest problem with binary numbers is that we can end up with large combinations of 1s and 0s that take us a long time to convert to the decimal equivalent so that we know what the number really represents.
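To make the binary discussion concrete, here is a small sketch using only standard JavaScript (parseInt with an explicit radix, and Number's toString):

```javascript
// Binary 1001 is decimal 9: parse a binary string with an explicit radix of 2.
var nine = parseInt('1001', 2); // 9

// Going the other way, toString(2) renders a number as a binary string.
var bits = (9).toString(2);     // "1001"
```

The same pair of calls works for any base from 2 to 36, which is handy for checking the octal and hexadecimal examples that follow.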
To help us solve this problem we introduced two further number bases that computers can easily convert to from binary and that are also easier for us to work with. These two number bases are octal (base 8), which uses 0, 1, 2, 3, 4, 5, 6, and 7, and hexadecimal (base 16), which uses 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F (allowing numbers up to 15 to be represented by a single symbol). Computers can easily convert binary to one of these because it is simply a matter of taking three or four binary digits (usually abbreviated to bits) and representing them as the one equivalent symbol in the octal or hexadecimal system. Since octal and hexadecimal numbers are approximately the same length as the equivalent decimal numbers, they are much easier for us to work with than binary.

Javascript (like most programming languages) allows us to work directly with both octal and hexadecimal numbers; all we need is a way to tell which number base we are using when we specify a number. To identify octal and hexadecimal numbers, we add something to the front of numbers using those bases to indicate which base we are using. A leading 0 on the front of a number indicates that the number following is octal, while a leading 0x indicates a hexadecimal number. The decimal number 18 can therefore also be represented as 022 (in octal) and 0x12 (in hexadecimal). We don't put a special symbol on the front of decimal numbers, so any number that doesn't start with 0 or 0x is assumed to be decimal.

When we use parseInt() we can specify the number base that we want the resulting number to be in by passing the number base as the second (optional) parameter. If we don't specify a number base in that parameter, then the number base will be worked out from the content of the number itself. With parseInt('09') we are asking the function to extract the number from this string in octal.
Since 9 is not a digit in the octal numbering system, the function drops it and returns the number portion preceding it, giving us 0. Of course the function treats 01 through 07 as octal numbers as well, but since these octal numbers are identical to their decimal equivalents we don't notice that the function is using the wrong number base when these numbers are returned. To force the parseInt function to extract decimal numbers from the string, all we need to do is pass 10 in the second parameter. The following will return the 9 that we expect.

var num = parseInt('09',10);

Note that the ECMAScript standards do not require browsers to support octal numbers in Javascript, so you should probably avoid using them just in case there are any browsers out there that don't support them. Since you probably had no idea what octal numbers were before reading this article, this should not be a problem.
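Putting the pieces together, the safe pattern is to always pass the radix explicitly. A short sketch (note that modern ECMAScript engines no longer treat a leading 0 as octal in parseInt, so the '09' pitfall mainly affects the older engines this article describes):

```javascript
// Always pass the radix: '09' is then read as decimal 9 on any engine.
var day = parseInt('09', 10);   // 9

// Explicit radixes also cover the hexadecimal and octal cases from the text.
var hex = parseInt('0x12', 16); // 18 (the 0x prefix is accepted for radix 16)
var oct = parseInt('22', 8);    // 18 (octal 022 is decimal 18)
```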
Bolinas Geometry Tutor

Find a Bolinas Geometry Tutor

...I use a variety of resources and teaching styles depending on the individual needs of the child. I have a California Multiple Subjects Credential. I have been an Independent Study Teacher for grades K-12 for over 10 years.
17 Subjects: including geometry, reading, English, GED

...If you want to improve your proofreading ability, then you probably already know that you should ALWAYS read the paper you are about to turn in to your teacher or the letter or application you are about to submit. Technically speaking, proofreading is considered to be the very last step in the w...
42 Subjects: including geometry, reading, English, GRE

...This course builds on algebraic and geometric concepts for success in higher level math courses. In tutoring algebra, it is important to understand the method but also to see how the subject can be applied to the real world. I have taught geometry for over 26 years as a high school math teacher, primarily with the San Francisco Unified School District.
6 Subjects: including geometry, statistics, algebra 1, prealgebra

Hello, my name is Starfire. Did you know that the elements in your body come from exploding stars? I'm full of curiosity and love sharing the facts, methods and philosophy of science. I have ten years of experience teaching high school math and physics.
12 Subjects: including geometry, chemistry, physics, calculus

...I have experienced years of Professional Development around how to make learning accessible and rigorous for young people. I have worked countless hours with young people, both one-on-one and in groups, to support comprehension and academic success. To me, it is important that I guide and suppo...
11 Subjects: including geometry, biology, elementary math, algebra 1
MathGroup Archive: February 2009 [00014]
[Date Index] [Thread Index] [Author Index]

Re: Mathematica and LyX - Graphics and equations

• To: mathgroup at smc.vnet.net
• Subject: [mg96010] Re: Mathematica and LyX - Graphics and equations
• From: JUN <noeckel at gmail.com>
• Date: Sun, 1 Feb 2009 04:40:44 -0500 (EST)
• References: <glj8bm$dln$1@smc.vnet.net> <200901261001.FAA22263@smc.vnet.net>

On Jan 28, 3:33 am, TL <la... at shaw.ca> wrote:
> Sorry I didn't write an example. Here's the issue - I have variables such
> as P_AB - that is, P with AB as an index. Mathematica writes \text{}
> whenever it sees the AB, which is unwanted - I want the whole thing to
> appear in italic form. Maybe it's a standard convention, I'm not really
> sure, but it just looks ugly and I looked in some math books and saw it
> all written in italic font there. So is there a way to turn this off?

Actually, Mathematica is doing the stylistically correct thing here, because AB is a single name according to what you describe. In LaTeX, authors sometimes get lazy and type names in math environments without surrounding them with \text or similar commands, but then LaTeX not only sets them in italics but can get fooled into typesetting them the wrong way, because it doesn't know that all the characters belong to one name. So you should in fact declare textual names as such.

The best approach depends a lot on what you're actually trying to do with your expression in Mathematica itself (manipulate it algebraically, use it as a label, etc.). To "emulate" the lazy italic LaTeX text, you could mislead Mathematica with the same thing that fools LaTeX: TeXForm[Subscript[p, A B]]. Of course, if you actually meant for A and B to be separate variables, then you might also want to consider using Subscript[p, A, B] to avoid any ambiguities.
Foundational Courses

1501. Introduction to Actuarial Science (3 s.h.) F S. (Formerly: ACT SCI 0001.)
Prerequisite: Mathematics 1041 (C085)/1941 (H095) or equivalent. Co-Requisite: Mathematics 1042 (0086)/1942 (H096) or equivalent.
In this course, probability theory and its application to insurance and risk management problems are discussed. Among the topics to be covered: counting techniques, conditional probability, Bayes' Theorem, discrete random variables, specific discrete distributions such as Binomial, Poisson, Negative Binomial and Uniform, moment generating functions and functions of two random variables.
Note: Students need to earn a grade of C or better in this course to be eligible to register for all other required courses in the Actuarial Science major.
Mode: Lecture and problem solving.

1901. Honors Introduction to Actuarial Science (3 s.h.) RCI: HO. (Formerly: ACT SCI 0091.)
Prerequisite: Mathematics 1041 (C085)/1941 (H095) or equivalent. Co-Requisite: Mathematics 1042 (0086)/1942 (H096) or equivalent.
Honors version of Actuarial Science 1501 (0001).
Note: Students need to earn a grade of C or better in this course to be eligible to register for all other required courses in the Actuarial Science major.

2101. Actuarial Probability and Statistics (3 s.h.) F S. (Formerly: ACT SCI 0262.)
Prerequisite: Mathematics 1041 (C085)/1941 (H095), 1042 (0086)/1942 (H096), and 2043 (0127) (or equivalents); and a minimum grade of C in Actuarial Science 1501/1901 (0001/0091).
This course covers the tools for quantitatively assessing risk as presented on Society of Actuaries Exam P/Casualty Actuarial Society Course 1.
Topics include: general probability (set functions, basic axioms, independence); Bayes' Theorem; univariate probability distributions (probabilities, moments, variance, mode, percentiles, and transformations); multivariate probability distributions (joint, conditional, and marginal distributions: probabilities, moments, variance, and covariance); and basic asymptotic results.
Note: This course replaces the Statistics 2101 (C021) Business Core requirement for Actuarial Science majors.

2501. Basic Actuarial Mathematics (3 s.h.) S. (Formerly: ACT SCI 0061.)
Prerequisite: Mathematics 1041 (C085)/1941 (H095), 1042 (0086)/1942 (H096), 2043 (0127); Actuarial Science 1501 (0001), Actuarial Science 2101 (0262); and Risk Management & Insurance 2101 (0001)/2901 (0091) or equivalents. A minimum grade of C in Actuarial Science 2101 and Risk Management 2101 is required.
This course is an intensive review for the Society of Actuaries Exam P/Casualty Actuarial Society Course 1 exam. Actuarial foundations from calculus-based probability theory are covered, with an emphasis on applications to risk management and insurance.
Mode: Problem solving.

2502. Theory of Interest (3 s.h.) F S. (Formerly: ACT SCI 0101.)
Prerequisite: Mathematics 1041 (C085)/1941 (H095) and 1042 (0086)/1942 (H096); Actuarial Science 1501/1901 (0001/0091) with a minimum grade of C. Co-Requisite: Mathematics 2043 (0127).
In this course, simple, compound and effective interest functions are analyzed and used in the calculation of present values and future values of various investments. Annuities, loan amortization and bonds are discussed and techniques for computing their values at various dates are explored.
Note: Students will need to earn a minimum grade of C in this course to be eligible to take Actuarial Science 3501.
Mode: Lecture and problem solving.

Upper Division Courses

3501. Actuarial Modeling I (3 s.h.) S. (Formerly: ACT SCI 0305.)
Prerequisite: Actuarial Science 2502 (0101), Actuarial Science 2101 (0262) and Risk Management & Insurance 2101 (0001)/2901 (0091), all with a minimum grade of C.
This course introduces the discrete and continuous random variables measuring the future lifetime of a person. Among the topics covered are calculation of the mean, variance and probability functions for these random variables, introduction of a present value random variable measuring the present value of a life insurance and annuity benefit, calculation of premiums for life insurance and annuities using interest rates, and calculation of reserves for insurance companies, examining future liabilities and inflow.
Note: A grade of C or better is required in this course to be eligible to take Actuarial Science 3502.
Mode: Lecture and problem solving.

3502. Actuarial Modeling II (3 s.h.) F. (Formerly: ACT SCI 0306.)
Prerequisite: Actuarial Science 3501 (0305) with a grade of C or better. Co-Requisite: Statistics 2512 (0212).
This course introduces multiple life functions that require the use of joint probability functions and the calculation of marginal probability distributions. Additional topics include the calculation of mean and variance for these joint random variables and multiple decrement theory. Various topics from Loss Models are also discussed, including computation of mixed distributions through compounding of frequency distributions with severity distributions and the calculation of premiums for insurance policies with deductibles, limits and coinsurance.
Note: A minimum grade of C in this course is required to be eligible to take Actuarial Science 3503 (0333).
Mode: Lecture and problem solving.

3503. Actuarial Modeling III (3 s.h.) S. (Formerly: ACT SCI 0316.)
Prerequisite: Actuarial Science 3502 (0306). Co-Requisite: Mathematics 4033 (0333).
Estimation and fitting of survival, frequency and severity, and compound distribution loss models; credibility methods.

3580. Special Topics: Actuarial Science (3 s.h.) F S.
Prerequisite: Actuarial Science majors with junior standing or permission of the program director.
Special topics in current developments in the field of Actuarial Science and exam preparation.

3582. Independent Study (1 to 6 s.h.) (Formerly: ACT SCI 0396.)
Prerequisite: Consultation with faculty member and approval of department chair.
Readings and/or research paper under the supervision of a faculty member.

3596. Casualty Contingencies (3 s.h.) F. RCI: WI. (Formerly: ACT SCI W218.)
Prerequisite: Mathematics 1041 (C085)/1941 (H095), 1042 (0086)/1942 (H096), 2043 (0127), Actuarial Science 1501/1901 (0001/0091) and Risk Management & Insurance 2101 (0001)/2901 (0091), Accounting 2101 (0001)/2901 (0091), Accounting 2102 (0002)/2902 (0092) and junior status.
This highly participative course is designed to broaden perspectives on the business environment in which actuaries work. In addition to analyzing the issues behind daily events, several continuing issues will be analyzed, including insurance pricing cycles, regulatory developments, the role of the actuary as an educator, advisor, objective information source and problem solver, insurance company financial rating and solvency issues, accounting fraud and questionable financial transactions, insurance and the financial markets, managing insurance operations, professional ethics, and the impact of current developments in underwriting and reinsurance on the actuarial function.
Note: This is the writing-intensive course for Actuarial Science majors.
Mode: In addition to homework and exams, there will be significant writing assignments and a major group presentation project.

3999. Honors Thesis I (1 to 3 s.h.) F S.
Prerequisite: Approval of instructor, Fox School Research Scholar Director, and Fox School Honors Director.
The first of a two-part sequence of courses in which independent research is conducted under the supervision of a thesis advisor from the Actuarial Science department, resulting in a substantial piece of original research, roughly 30 to 50 pages in length upon completion of Actuarial Science 4999. The student must publicly present his/her findings at a Temple University Research Forum session or the equivalent during one of the two semesters during which these courses are undertaken.

4999. Senior Honors Thesis (3 s.h.) F S. (Formerly: ACT SCI 0397.)
Prerequisite: Approval of instructor and Fox School Honors Director.
Independent research conducted under the supervision of a thesis advisor from the Actuarial Science Department resulting in a substantial piece of original research, roughly 30 to 50 pages in length. Student must publicly present his/her findings at a Temple University Research Forum session or the equivalent.
Just in case there are any doubts about anthropogenic influence in atmospheric CO2

You would think this is the least controversial aspect of the global warming debate, but you'd be surprised. I realized this after reading some of the comments in a post by Anthony Watts about a recent correction in the way Mauna Loa data is calculated (see also the reactions it prompted). Tamino subsequently wrote an interesting post on differences in CO2 trends as observed at three different sites: Mauna Loa (Hawaii), Barrow (Alaska) and South Pole station. Most notably, there's a pronounced difference in the annual cycle between these stations, which, according to Tamino, is explained by there being more land mass in the Northern Hemisphere. I would imagine higher CO2 emissions in the Northern Hemisphere might also play a role, but I'm speculating.

In this post I want to show that the available data is quite clear about anthropogenic influence in atmospheric CO2. Additionally, I want to discuss how we can tell that excess CO2 stays in the atmosphere for a long time. I will use about 170 years of data for this.

There's a reconstruction of CO2 concentrations from 1832 to 1978 made available by CDIAC, derived by Etheridge et al. (1998) from the Law Dome DE08, DE08-2, and DSS ice cores. You will note that there's an excellent match between these data and Mauna Loa data for the period 1958 to 1978. Mauna Loa data has an offset of 0.996 ppmv relative to Etheridge et al. (1998), so I applied this simple adjustment to it in order to end up with a dataset that goes from 1832 to 2004.

CDIAC also provides data on global CO2 emissions. What we need, however, is an estimate of the excess anthropogenic CO2 that would be expected to remain in the atmosphere at any given point in time. We could simply calculate cumulative emissions since 1751 for any given year, but this is not necessarily accurate. Some excess CO2 is probably reclaimed by the planet every year.
What I will do is make an assumption about the atmospheric half-life of CO2 in order to obtain a dataset of presumed excess CO2. I will use a half-life of 24.4 years (i.e. 0.972 of the excess CO2 remains after 1 year). I should note that I have tried this same analysis with half-lives of 50, 70 and 'infinite' years, and the general results are the same.

Figure 1 shows the time series of the two datasets. The trends are clear enough. CO2 emissions appear to accumulate in the atmosphere and are then observed in ice cores (and at various other sites like Mauna Loa). Every time we compare time series, though, there's a possibility that we're looking at coincidental trends. A technique that can be used to control for potentially coincidental trends is called detrended cross-correlation analysis (Podobnik & Stanley, 2007). In our case, the detrended cross-correlation is obvious enough graphically, and we'll leave it at that. See Figure 2. Basically, we take the time series and remove their trends, which are given by third-order polynomial fits. You can do the same thing with linear or second-order fits. The third-order fit is a better fit and produces more fluctuations around the trend, which makes the correlation more obvious and less likely to be explained by coincidence.

With that out of the way, how do we know that excess CO2 stays in the atmosphere for a long time? First, let's check what the scientific literature says on the subject, specifically Moore & Braswell (1994):

"If one assumes a terrestrial biosphere with a fertilization flux, then our best estimate is that the single half-life for excess CO2 lies within the range of 19 to 49 years, with a reasonable average being 31 years. If we assume only regrowth, then the average value for the single half-life for excess CO2 increases to 72 years, and if we remove the terrestrial component completely, then it increases further to 92 years."
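The accumulation-with-decay bookkeeping behind the "presumed excess CO2" series can be sketched in a few lines of Python. This is only an illustration of the half-life arithmetic: the emissions numbers below are made up, not CDIAC data.

```python
# A half-life of 24.4 years means ~0.972 of the excess survives each year,
# since 0.5 ** (1 / 24.4) is about 0.972.
HALF_LIFE = 24.4
KEEP = 0.5 ** (1.0 / HALF_LIFE)  # fraction of the excess surviving one year

def excess_series(annual_emissions):
    """Return the presumed excess remaining after each year's emissions."""
    level, series = 0.0, []
    for e in annual_emissions:
        level = level * KEEP + e  # decay last year's excess, add this year's
        series.append(level)
    return series

# Illustrative, made-up emissions: with no decay the total would be 5.0,
# but the decayed running total comes out slightly lower.
print(excess_series([1.0, 1.0, 1.0, 1.0, 1.0])[-1])
```

With an 'infinite' half-life KEEP becomes 1.0 and the series reduces to plain cumulative emissions, which is the naive alternative mentioned above.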
In general, it is widely accepted that the atmospheric half-life of CO2 is measured in decades, not years.

One type of analysis that I have attempted is to select the half-life hypothesis that maximizes the Pearson correlation coefficient of the series from Figure 1. If I do this, I find that the best half-life is about 24.4 years. Nevertheless, I had attempted the same exercise with the Mauna Loa series (1958-2004) previously, and the best half-life then seems to be about 70 years. It varies depending on the time frame, and there's not necessarily a trend in the half-life. This just goes to show that there's uncertainty in the calculation, and that the half-life model is a simplification of the real world.

Another approach we can take is to try to estimate the weight of excess CO2 currently in the atmosphere, and see how this compares to data on emissions. The current excess of atmospheric CO2 is agreed to be roughly 100 ppmv. If by 'atmosphere' we mean 20 km above ground (this is fairly arbitrary) then the volume of the atmosphere is about 1.03x10^19 m^3. This would mean that the total volume of excess CO2 is 1.03x10^15 m^3. The density of CO2 is 1.98 kg/m^3, so the total weight of excess CO2 should be about 2.03x10^15 kg, or 2,030,000 millions of metric tons.

Something is not right, though. If we add all annual CO2 emissions from 1751 to 2004, we come up with 334,000 millions of metric tons total. This can't be. I'd suggest that CDIAC data does not count all sources of anthropogenic emissions of CO2. It obviously can't be considering feedbacks either. Furthermore, our assumptions in the calculations above might not be accurate (specifically that a 100 ppmv excess is maintained up to an altitude of 20 km). In any case, it's hard to see how these numbers would support the notion that the half-life of CO2 is low.

5 comments:

A couple comments: (1) Indeed, as you note, the characterization of CO2 as having a single half-life is a large simplification.
There is a good post by David Archer at RealClimate on the subject: http:// His conclusion is "A better shorthand for public discussion might be that CO2 sticks around for hundreds of years, plus 25% that sticks around forever."

(2) There is indeed something not right about your estimates of the total amount of CO2 we have released into the atmosphere and the total amount by which CO2 in the atmosphere has increased. The "correct answer" is that roughly half of what we have released has remained in the atmosphere, with the other half going into the biosphere and oceans. One clear mistake in your calculation of the amount of excess CO2 in the atmosphere is assuming a density of 1.98 kg/m^3. This is the density at some specific temperature and pressure (presumably 1 atmosphere and roughly room temperature). By the time one gets up to 20 km, the density would be only a small fraction of that, since the pressure would be a lot lower. This is probably not enough of an effect to explain the full discrepancy in your calculation, but should at least get you closer to the right answer.

Thanks Joel. You might be right about (2). The relative volume of CO2 could still be close to 380 ppmv, say, 10 km high (Mauna Loa is 4 km high), but the weight of CO2 will not be the same.

Actually, instead of 20 km you should use the (base-e) scale height of the atmosphere, 8 km. Then you get 800,000 Mt of CO2. An alternative computation is to consider the weight of the atmosphere, 1 kg/cm^2. 100 ppmv is then a (44/29)*10^-4 mass fraction. Multiplying by the Earth's surface area of 512 million km^2 gives 777,000 Mt.

> Something is not right, though. If we add all annual CO2 emissions from 1751 to 2004, we come up with 334,000 millions of metric tons total. This can't be.

Don't mix up carbon with CO2 -- you have to multiply by (44/12), yielding 1,130,000 Mt. Still seems a bit low, but deforestation is missing.

So CDIAC data is on carbon emissions? They sometimes say CO2 and sometimes Carbon.
I thought that was confusing.
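The mass-based computation suggested in the comments is easy to check numerically. This sketch uses the commenters' round figures (1 kg/cm^2 of surface pressure, a mean molecular weight of 29 for air, a 100 ppmv excess); the Earth radius is a standard approximation:

```python
import math

# Mass of the atmosphere from surface pressure: 1 kg/cm^2 = 1e4 kg/m^2.
EARTH_RADIUS_M = 6.371e6
surface_area = 4.0 * math.pi * EARTH_RADIUS_M ** 2  # ~5.1e14 m^2
atmosphere_kg = surface_area * 1.0e4                # ~5.1e18 kg

# Convert the 100 ppmv excess (a volume/mole fraction) to a mass fraction
# using the molecular weights of CO2 (44) and of mean air (~29).
excess_kg = atmosphere_kg * (44.0 / 29.0) * 100e-6

millions_of_tonnes = excess_kg / 1e9  # kg -> tonnes -> millions of tonnes
print(round(millions_of_tonnes))      # roughly the commenter's 777,000 figure
```

The result lands in the 770,000-780,000 range, consistent with the scale-height estimate of about 800,000 and far below the 2,030,000 figure obtained by assuming a constant surface density up to 20 km.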
Warning on specialisations when compiling Haskell code with ghc

I get the following warning when trying to compile:

    $ ghc --make -O2 -Wall -fforce-recomp
    [1 of 1] Compiling Main ( isPrimeSmart.hs, isPrimeSmart.o )
    SpecConstr
        Function `$wa{v s2we} [lid]'
          has two call patterns, but the limit is 1
        Use -fspec-constr-count=n to set the bound
        Use -dppr-debug to see specialisations
    Linking isPrimeSmart ...

My code is:

    {-# OPTIONS_GHC -O2 -optc-O2 #-}

    import qualified Data.ByteString.Lazy.Char8 as StrL -- StrL is STRing Library
    import Data.List

    -- Read in a file. The first line tells how many cases. Each case is on a
    -- separate line with the lower and upper bounds separated by a space.
    -- Print all primes between the lower and upper bound. Separate results
    -- for each case with a blank line.
    main :: IO ()
    main = do
      let factors = takeWhile (<= (ceiling $ sqrt (1000000000::Double))) allPrimes
      (l:ls) <- StrL.lines `fmap` StrL.getContents
      let numCases = readInt l
      let cases = take numCases ls
      sequence_ $ intersperse (putStrLn "") $ map (doLine factors) cases

    -- Get and print all primes between the integers specified on a line.
    doLine :: [Integer] -> StrL.ByteString -> IO ()
    doLine factors l = mapM_ print $ primesForLine factors l

    ---------------------- pure code below this line ---------------------------

    -- Get all primes between the integers specified on a line.
    primesForLine :: [Integer] -> StrL.ByteString -> [Integer]
    primesForLine factors l = getPrimes factors range
      where range = rangeForLine l

    -- Generate a list of numbers to check, store it in a list, then check them.
    getPrimes :: [Integer] -> (Integer, Integer) -> [Integer]
    getPrimes factors range = filter (isPrime factors) (getCandidates range)

    -- Generate the list of candidate values based on the upper and lower bound.
    getCandidates :: (Integer, Integer) -> [Integer]
    getCandidates (propStart, propEnd) = list
      where
        list       = if propStart < 3 then 2 : oddList else oddList
        oddList    = [listStart, listStart + 2 .. propEnd]
        listStart  = if cleanStart `rem` 2 == 0 then cleanStart + 1 else cleanStart
        cleanStart = if propStart < 3 then 3 else propStart

    -- A line always has the lower and upper bound separated by a space.
    rangeForLine :: StrL.ByteString -> (Integer, Integer)
    rangeForLine caseLine = start `seq` end `seq` (start, end)
      where [start, end] = (map readInteger $ StrL.words caseLine) :: [Integer]

    -- Read an Integer from a ByteString.
    readInteger :: StrL.ByteString -> Integer
    readInteger x = case StrL.readInteger x of
      Just (i, _) -> i
      Nothing     -> error "Unparsable Integer"

    -- Read an Int from a ByteString.
    readInt :: StrL.ByteString -> Int
    readInt x = case StrL.readInt x of
      Just (i, _) -> i
      Nothing     -> error "Unparsable Int"

    -- Generate all primes lazily.
    allPrimes :: [Integer]
    allPrimes = ps (2 : [3, 5 ..])
      where
        ps (np:candidates) = -- np stands for New Prime
          np : ps (filter (\n -> n `rem` np /= 0) candidates)
        ps [] = error "this can't happen but it shuts up the compiler"

    -- Check primality by comparing against the precomputed factors.
    isPrime :: [Integer] -> Integer -> Bool
    isPrime factors val = all (\f -> val `rem` f /= 0) validFactors
      where
        validFactors = takeWhile (< ceil) factors
        ceil = (ceiling $ sqrt $ ((fromInteger val)::Double)) :: Integer

I have no idea how to fix this warning. How do I start? Do I compile to assembly and match the error up? What does the warning even mean?

Tags: haskell, warnings, ghc

Answer (accepted):

These are just (annoying) warnings, indicating that GHC could do further specializations to your code if you really want to. Future versions of GHC will likely not emit this data by default, since there's nothing you can do about it anyway. They are harmless, and are not errors. Don't worry about them.

To directly address the problem, you can use -w (suppress warnings) instead of -Wall. E.g. in a file, {-# OPTIONS_GHC -w #-} will disable warnings.
Alternately, increasing the specialization threshold will make the warning go away, e.g. -fspec-constr-count=16 I see. My particular problem is I'm trying to submit this to SPOJ and it says I have a compilation error. Is there a way around this? Can I isolate the offending code and re-write it to avoid this issue? SPOJ uses ghc 10.4.2. – Tim Perry May 5 '11 at 20:52 1 Can you use -w (suppress warnings) instead of -Wall? E.g. in a file {-# OPTIONS_GHC -w #-}. Alternately, increase the threshold, e.g. -fspec-constr-count=16 – Don Stewart May 5 '11 at 20:57 If I take out the -O2 flag then there are no warnings. The time goes from 6.5 seconds to 10.5 seconds on my test file and I don't make the time limit on SPOJ. – Tim Perry May 5 '11 at 21:02 I guess if my code design is probably the root problem. Some of the answers online use way less time and memory than I did. Back to the drawing board. Thanks for your help though. If you post your comment about -w as an answer I will mark it accepted. – Tim Perry May 5 '11 at 21:17 2 {-# OPTIONS_GHC -fspec-constr-count=16 -O2 #-} uses 5.8 seconds for my test file while {-# OPTIONS_GHC -w #-} uses 10.5 seconds. Thus, -fspec-constr-count=16 is definitely preferred. Thanks for the help. – Tim Perry May 5 '11 at 22:57 add comment Not the answer you're looking for? Browse other questions tagged haskell warnings ghc or ask your own question.
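As the answer notes, the fix can live in the source file itself, which matters for judges like SPOJ that control the command line. The sketch below is a hypothetical minimal module (not the original isPrimeSmart.hs): the pragma raises the SpecConstr call-pattern limit while keeping -O2, and the small accumulator loop stands in for the kind of recursion SpecConstr likes to specialise.

```haskell
-- Raise the SpecConstr specialisation bound from inside the file,
-- so `ghc --make` needs no extra flags. 16 is the value from the
-- accepted comment above; any value >= the number of call patterns works.
{-# OPTIONS_GHC -O2 -fspec-constr-count=16 #-}

module Main where

-- A strict-ish counting loop: recursion over an accumulator is the
-- shape of code that SpecConstr creates call-pattern copies of.
count :: Int -> Int -> Int
count acc 0 = acc
count acc n = count (acc + 1) (n - 1)

main :: IO ()
main = print (count 0 10)
```

Compiling this with `ghc --make -O2 -Wall` should produce no SpecConstr warning, since the per-file pragma overrides the default bound.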
Class List CAccuracyMeasure Class AccuracyMeasure used to measure accuracy of 2-class classifier CAlphabet The class Alphabet implements an alphabet and alphabet utility functions CANOVAKernel ANOVA (ANalysis Of VAriances) kernel CArray< T > Template class Array implements a dense one dimensional array CArray2< T > Template class Array2 implements a dense two dimensional array CArray3< T > Template class Array3 implements a dense three dimensional array CAsciiFile A Ascii File access class CAttenuatedEuclidianDistance Class AttenuatedEuclidianDistance CAttributeFeatures Implements attributed features, that is in the simplest case a number of (attribute, value) pairs CAUCKernel The AUC kernel can be used to maximize the area under the receiver operator characteristic curve (AUC) instead of margin in SVM training CAveragedPerceptron Class Averaged Perceptron implements the standard linear (online) algorithm. Averaged perceptron is the simple extension of Perceptron CAvgDiagKernelNormalizer Normalize the kernel by either a constant or the average value of the diagonal elements (depending on argument c of the constructor) CBALMeasure Class BALMeasure used to measure balanced error of 2-class classifier CBesselKernel Class Bessel kernel CBinaryClassEvaluation The class TwoClassEvaluation, a base class used to evaluate binary classification labels CBinaryFile A Binary file access class CBinaryStream< T > Memory mapped emulation via binary streams (files) CBitString String class embedding a string in a compact bit representation CBrayCurtisDistance Class Bray-Curtis distance CCache< T > Template class Cache implements a simple cache CCanberraMetric Class CanberraMetric CCanberraWordDistance Class CanberraWordDistance CCauchyKernel Cauchy kernel CChebyshewMetric Class ChebyshewMetric CChi2Kernel The Chi2 kernel operating on realvalued vectors computes the chi-squared distance between sets of histograms CChiSquareDistance Class ChiSquareDistance CCircularKernel Circular 
kernel CCombinedDotFeatures Features that allow stacking of a number of DotFeatures CCombinedFeatures The class CombinedFeatures is used to combine a number of feature objects into a single CombinedFeatures object CCombinedKernel The Combined kernel is used to combine a number of kernels into a single CombinedKernel object by linear combination CCommUlongStringKernel The CommUlongString kernel may be used to compute the spectrum kernel from strings that have been mapped into unsigned 64bit integers CCommWordStringKernel The CommWordString kernel may be used to compute the spectrum kernel from strings that have been mapped into unsigned 16bit integers CCompressor Compression library for compressing and decompressing buffers using one of the standard compression algorithms, LZO, GZIP, BZIP2 or LZMA CConstKernel The Constant Kernel returns a constant for all elements CContingencyTableEvaluation The class ContingencyTableEvaluation, a base class used to evaluate 2-class classification with TP, FP, TN, FN rates CConverter Class Converter used to convert data CCosineDistance Class CosineDistance CCplex Class CCplex to encapsulate access to the commercial cplex general purpose optimizer CCPLEXSVM CplexSVM a SVM solver implementation based on cplex (unfinished) CCrossCorrelationMeasure Class CrossCorrelationMeasure used to measure cross correlation coefficient of 2-class classifier Base class for cross-validation evaluation. Given a learning machine, a splitting strategy, an evaluation criterion, features and corresponding labels, this CCrossValidation provides an interface for cross-validation. Results may be retrieved using the evaluate method. A number of repetitions may be specified for obtaining more accurate results. The arithmetic mean of different runs is returned along with confidence intervals, if a p-value is specified. Default number of runs is one, confidence interval computation is disabled CCustomDistance The Custom Distance allows for custom user provided distance matrices CCustomKernel The Custom Kernel allows for custom user provided kernel matrices CDecompressString< ST > Preprocessor that decompresses compressed strings CDiagKernel The Diagonal Kernel returns a constant for the diagonal and zero otherwise CDiceKernelNormalizer DiceKernelNormalizer performs kernel normalization inspired by the Dice coefficient (see http://en.wikipedia.org/wiki/Dice's_coefficient) CDiffusionMaps CDiffusionMaps used to preprocess given data using diffusion maps dimensionality reduction technique CDimensionReductionPreprocessor Class DimensionReductionPreprocessor, a base class for preprocessors used to lower the dimensionality of given simple features (dense matrices) CDistance Class Distance, a base class for all the distances used in the Shogun toolbox CDistanceKernel The Distance kernel takes a distance as input CDistanceMachine A generic DistanceMachine interface CDistantSegmentsKernel The distant segments kernel is a string kernel, which counts the number of substrings, so-called segments, at a certain distance from each other CDistribution Base class Distribution from which all methods implementing a distribution are derived CDomainAdaptationSVM Class DomainAdaptationSVM CDomainAdaptationSVMLinear Class DomainAdaptationSVMLinear CDotFeatures Features that support dot products among other operations CDotKernel Template class DotKernel is the base class for kernels working on DotFeatures CDummyFeatures The class DummyFeatures implements features that only know the number of feature objects (but don't actually contain any) CDynamicArray< T > Template Dynamic array class that creates an array that can be used like a list or an array CDynamicObjectArray< T > Template Dynamic array class that creates an array that can be used like a list or an array CDynInt< T, sz > Integer type of dynamic
size CDynProg Dynamic Programming Class CEmbeddingConverter Class EmbeddingConverter used to create embeddings of features, e.g. construct dense numeric embedding of string features CErrorRateMeasure Class ErrorRateMeasure used to measure error rate of 2-class classifier CEuclidianDistance Class EuclidianDistance CEvaluation Class Evaluation, a base class for other classes used to evaluate labels, e.g. accuracy of classification or mean squared error of regression CExplicitSpecFeatures Features that compute the Spectrum Kernel feature space explicitly CExponentialKernel The Exponential Kernel, closely related to the Gaussian Kernel computed on CDotFeatures CF1Measure Class F1Measure used to measure F1 score of 2-class classifier CFeatures The class Features is the base class of all feature objects CFile A File access base class CFirstElementKernelNormalizer Normalize the kernel by a constant obtained from the first element of the kernel matrix, i.e. CFixedDegreeStringKernel The FixedDegree String kernel takes as input two strings of same size and counts the number of matches of length d CFKFeatures The class FKFeatures implements Fischer kernel features obtained from two Hidden Markov models CGaussian Gaussian distribution interface CGaussianKernel The well known Gaussian kernel (swiss army knife for SVMs) computed on CDotFeatures CGaussianMatchStringKernel The class GaussianMatchStringKernel computes a variant of the Gaussian kernel on strings of same length CGaussianNaiveBayes Class GaussianNaiveBayes, a Gaussian Naive Bayes classifier CGaussianShiftKernel An experimental kernel inspired by the WeightedDegreePositionStringKernel and the Gaussian kernel CGaussianShortRealKernel The well known Gaussian kernel (swiss army knife for SVMs) on dense short-real valued features CGCArray< T > Template class GCArray implements a garbage collecting static array CGeodesicMetric Class GeodesicMetric CGHMM Class GHMM - this class is non-functional and was meant to implement a 
Generalized Hidden Markov Model (aka Semi Hidden Markov HMM) CGMM Gaussian Mixture Model interface CGMNPLib Class GMNPLib Library of solvers for Generalized Minimal Norm Problem (GMNP) CGMNPSVM Class GMNPSVM implements a one vs. rest MultiClass SVM CGNPPLib Class GNPPLib, a Library of solvers for Generalized Nearest Point Problem (GNPP) CGNPPSVM Class GNPPSVM CGPBTSVM Class GPBTSVM CGridSearchModelSelection Model selection class which searches for the best model by a grid-search. See CModelSelection for details CGUIClassifier UI classifier CGUIDistance UI distance CGUIFeatures UI features CGUIHMM UI HMM (Hidden Markov Model) CGUIKernel UI kernel CGUILabels UI labels CGUIMath UI math CGUIPluginEstimate UI estimate CGUIPreprocessor UI preprocessor CGUIStructure UI structure CGUITime UI time CHammingWordDistance Class HammingWordDistance CHash Collection of Hashing Functions CHashedWDFeatures Features that compute the Weighted Degree Kernel feature space explicitly CHashedWDFeaturesTransposed Features that compute the Weighted Degree Kernel feature space explicitly CHashSet Class HashSet, a set based on the hash-table. w: http://en.wikipedia.org/wiki/Hash_table CHessianLocallyLinearEmbedding Class HessianLocallyLinearEmbedding used to preprocess data using Hessian Locally Linear Embedding algorithm described in CHierarchical Agglomerative hierarchical single linkage clustering CHingeLoss CHingeLoss implements the hinge loss function CHistogram Class Histogram computes a histogram over all 16bit unsigned integers in the features CHistogramIntersectionKernel The HistogramIntersection kernel operating on realvalued vectors computes the histogram intersection distance between sets of histograms.
Note: the current implementation assumes positive values for the histograms, and input vectors should sum to 1 CHistogramWordStringKernel The HistogramWordString computes the TOP kernel on inhomogeneous Markov Chains CHMM Hidden Markov Model CIdentityKernelNormalizer Identity Kernel Normalization, i.e. no normalization is applied CImplicitWeightedSpecFeatures Features that compute the Weighted Spectrum Kernel feature space explicitly CIndirectObject< T, P > Array class that accesses elements indirectly via an index array CInputParser< T > Class CInputParser is a templated class used to maintain the reading/parsing/providing of examples CIntronList Class IntronList CInverseMultiQuadricKernel InverseMultiQuadricKernel CIOBuffer An I/O buffer class CIsomap Class Isomap used to preprocess data using K-Isomap algorithm as described in CJensenMetric Class JensenMetric CKernel The Kernel base class CKernelDistance The Kernel distance takes a distance as input CKernelLocallyLinearEmbedding Class KernelLocallyLinearEmbedding used to preprocess data using kernel extension of Locally Linear Embedding algorithm as described in CKernelLocalTangentSpaceAlignment Class LocalTangentSpaceAlignment used to preprocess data using kernel extension of the Local Tangent Space Alignment (LTSA) algorithm CKernelMachine A generic KernelMachine interface CKernelNormalizer The class Kernel Normalizer defines a function to post-process kernel values CKernelPCA Preprocessor KernelPCA performs kernel principal component analysis CKMeans KMeans clustering, partitions the data into k (a-priori specified) clusters CKNN Class KNN, an implementation of the standard k-nearest neigbor classifier CKRR Class KRR implements Kernel Ridge Regression - a regularized least square method for classification and regression CLabels The class Labels models labels, i.e. 
class assignments of objects CLaplacianEigenmaps Class LaplacianEigenmaps used to preprocess data using Laplacian Eigenmaps algorithm as described in: CLaRank LaRank multiclass SVM machine CLBPPyrDotFeatures Implement DotFeatures for the polynomial kernel CLDA Class LDA implements regularized Linear Discriminant Analysis CLibLinear Class to implement LibLinear CLibSVM LibSVM CLibSVMMultiClass Class LibSVMMultiClass CLibSVMOneClass Class LibSVMOneClass CLibSVR Class LibSVR, performs support vector regression using LibSVM CLinearHMM The class LinearHMM is for learning Higher Order Markov chains CLinearKernel Computes the standard linear kernel on CDotFeatures CLinearLocalTangentSpaceAlignment LinearLocalTangentSpaceAlignment converter used to construct embeddings as described in: CLinearMachine Class LinearMachine is a generic interface for all kinds of linear machines like classifiers CLinearStringKernel Computes the standard linear kernel on dense char valued features CList Class List implements a doubly connected list for low-level-objects CListElement Class ListElement, defines how an element of the the list looks like CLocalAlignmentStringKernel The LocalAlignmentString kernel compares two sequences through all possible local alignments between the two sequences CLocalityImprovedStringKernel The LocalityImprovedString kernel is inspired by the polynomial kernel. 
Comparing neighboring characters it puts emphasize on local features CLocallyLinearEmbedding Class LocallyLinearEmbedding used to preprocess data using Locally Linear Embedding algorithm described in CLocalTangentSpaceAlignment LocalTangentSpaceAlignment used to embed data using Local Tangent Space Alignment (LTSA) algorithm as described in: CLogKernel Log kernel CLogLoss CLogLoss implements the logarithmic loss function CLogLossMargin Class CLogLossMargin implements a margin-based log-likelihood loss function CLogPlusOne Preprocessor LogPlusOne does what the name says, it adds one to a dense real valued vector and takes the logarithm of each component of it CLoss Class which collects generic mathematical functions CLossFunction Class CLossFunction is the base class of all loss functions CLPBoost Class LPBoost trains a linear classifier called Linear Programming Machine, i.e. a SVM using a CLPM Class LPM trains a linear classifier called Linear Programming Machine, i.e. a SVM using a CMachine A generic learning machine interface CManhattanMetric Class ManhattanMetric CManhattanWordDistance Class ManhattanWordDistance CMatchWordStringKernel The class MatchWordStringKernel computes a variant of the polynomial kernel on strings of same length converted to a word alphabet CMath Class which collects generic mathematical functions CMeanAbsoluteError Class MeanAbsoluteError used to compute an error of regression model CMeanSquaredError Class MeanSquaredError used to compute an error of regression model CMemoryMappedFile< T > Memory mapped file CMinkowskiMetric Class MinkowskiMetric CMKL Multiple Kernel Learning CMKLClassification Multiple Kernel Learning for two-class-classification CMKLMultiClass MKLMultiClass is a class for L1-norm multiclass MKL CMKLOneClass Multiple Kernel Learning for one-class-classification CMKLRegression Multiple Kernel Learning for regression CModelSelection Abstract base class for model selection. 
Takes a parameter tree which specifies parameters for model selection, and a cross-validation instance and searches for the best combination of parameters in the abstract method select_model(), which has to be implemented in concrete sub-classes CModelSelectionParameters Class to select parameters and their ranges for model selection. The structure is organized as a tree with different kinds of nodes, depending on the values of its member variables of name and CSGObject CMPDSVM Class MPDSVM CMulticlassAccuracy The class MulticlassAccuracy used to compute accuracy of multiclass classification CMultiClassSVM Class MultiClassSVM CMultidimensionalScaling Class Multidimensionalscaling is used to perform multidimensional scaling (capable of landmark approximation if requested) CMultiquadricKernel MultiquadricKernel CMultitaskKernelMaskNormalizer The MultitaskKernel allows Multitask Learning via a modified kernel function CMultitaskKernelMaskPairNormalizer The MultitaskKernel allows Multitask Learning via a modified kernel function CMultitaskKernelMklNormalizer Base-class for parameterized Kernel Normalizers CMultitaskKernelNormalizer The MultitaskKernel allows Multitask Learning via a modified kernel function CMultitaskKernelPlifNormalizer The MultitaskKernel allows learning a piece-wise linear function (PLIF) via MKL CMultitaskKernelTreeNormalizer The MultitaskKernel allows Multitask Learning via a modified kernel function based on taxonomy CNeighborhoodPreservingEmbedding NeighborhoodPreservingEmbedding converter used to construct embeddings as described in: CNode A CNode is an element of a CTaxonomy, which is used to describe hierarchical structure between tasks CNormOne Preprocessor NormOne, normalizes vectors to have norm 1 COligoStringKernel This class offers access to the Oligo Kernel introduced by Meinicke et al. 
in 2004 COnlineLibLinear Class implementing a purely online version of LibLinear, using the L2R_L1LOSS_SVC_DUAL solver only COnlineLinearMachine Class OnlineLinearMachine is a generic interface for linear machines like classifiers which work through online algorithms COnlineSVMSGD Class OnlineSVMSGD Class that holds ONE combination of parameters for a learning machine. The structure is organized as a tree. Every node may hold a name or an instance of a CParameterCombination Parameter class. Nodes may have children. The nodes are organized in such way, that every parameter of a model for model selection has one node and sub-parameters are stored in sub-nodes. Using a tree of this class, parameters of models may easily be set. There are these types of nodes: CParseBuffer< T > Class CParseBuffer implements a ring of examples of a defined size. The ring stores objects of the Example type CPCA Preprocessor PCACut performs principial component analysis on the input vectors and keeps only the n eigenvectors with eigenvalues above a certain threshold CPerceptron Class Perceptron implements the standard linear (online) perceptron CPlif Class Plif CPlifArray Class PlifArray CPlifBase Class PlifBase CPlifMatrix Store plif arrays for all transitions in the model CPluginEstimate Class PluginEstimate CPolyFeatures Implement DotFeatures for the polynomial kernel CPolyKernel Computes the standard polynomial kernel on CDotFeatures CPolyMatchStringKernel The class PolyMatchStringKernel computes a variant of the polynomial kernel on strings of same length CPolyMatchWordStringKernel The class PolyMatchWordStringKernel computes a variant of the polynomial kernel on word-features CPositionalPWM Positional PWM CPowerKernel Power kernel CPRCEvaluation Class PRCEvaluation used to evaluate PRC (Precision Recall Curve) and an area under PRC curve (auPRC) CPrecisionMeasure Class PrecisionMeasure used to measure precision of 2-class classifier CPreprocessor Class Preprocessor defines a 
preprocessor interface CPruneVarSubMean Preprocessor PruneVarSubMean will subtract the mean and remove features that have zero variance CPyramidChi2 Pyramid Kernel over Chi2 matched histograms CQPBSVMLib Class QPBSVMLib CRandomFourierGaussPreproc Preprocessor CRandomFourierGaussPreproc implements Random Fourier Features for the Gauss kernel a la Ali Rahimi and Ben Recht Nips2007 after preprocessing the features using them in a linear kernel approximates a gaussian kernel CRationalQuadraticKernel Rational Quadratic kernel CRealDistance Class RealDistance CRealFileFeatures The class RealFileFeatures implements a dense double-precision floating point matrix from a file CRecallMeasure Class RecallMeasure used to measure recall of 2-class classifier CRegulatoryModulesStringKernel The Regulatory Modules kernel, based on the WD kernel, as published in Schultheiss et al., Bioinformatics (2009) on regulatory sequences CRidgeKernelNormalizer Normalize the kernel by adding a constant term to its diagonal. This aids kernels to become positive definite (even though they are not - often caused by numerical problems) CROCEvaluation Class ROCEvaluation used to evaluate ROC (Receiver Operator Characteristic) and an area under ROC curve (auROC) CrossValidationResult Type to encapsulate the results of an evaluation run. May contain confidence interval (if conf_int_alpha!=0). m_conf_int_alpha is the probability for an error, i.e.
the value does not lie in the confidence interval CSalzbergWordStringKernel The SalzbergWordString kernel implements the Salzberg kernel CScatterKernelNormalizer Scatter kernel normalizer CScatterSVM ScatterSVM - Multiclass SVM CSegmentLoss Class IntronList CSerializableAsciiFile Serializable ascii file CSerializableFile Serializable file CSet< T > Template Set class CSGDQN Class SGDQN CSGObject Class SGObject is the base class of all shogun objects CSigmoidKernel The standard Sigmoid kernel computed on dense real valued features CSignal Class Signal implements signal handling to e.g. allow ctrl+c to cancel a long running process CSimpleDistance< ST > Template class SimpleDistance CSimpleFeatures< ST > The class SimpleFeatures implements dense feature matrices CSimpleFile< T > Template class SimpleFile to read and write from files CSimpleLocalityImprovedStringKernel SimpleLocalityImprovedString kernel, is a ``simplified'' and better performing version of the Locality improved kernel CSimplePreprocessor< ST > Template class SimplePreprocessor, base class for preprocessors (cf. CPreprocessor) that apply to CSimpleFeatures (i.e. 
rectangular dense matrices) CSmoothHingeLoss CSmoothHingeLoss implements the smooth hinge loss function CSNPFeatures Features that compute the Weighted Degree Kernel feature space explicitly CSNPStringKernel The class SNPStringKernel computes a variant of the polynomial kernel on strings of same length CSortUlongString Preprocessor SortUlongString, sorts the individual strings in ascending order CSortWordString Preprocessor SortWordString, sorts the individual strings in ascending order CSparseDistance< ST > Template class SparseDistance CSparseEuclidianDistance Class SparseEuclidianDistance CSparseFeatures< ST > Template class SparseFeatures implements sparse matrices CSparseKernel< ST > Template class SparseKernel, is the base class of kernels working on sparse features CSparsePolyFeatures Implement DotFeatures for the polynomial kernel CSparsePreprocessor< ST > Template class SparsePreprocessor, base class for preprocessors (cf. CPreprocessor) that apply to CSparseFeatures CSparseSpatialSampleStringKernel Sparse Spatial Sample String Kernel by Pavel Kuksa <pkuksa@cs.rutgers.edu> and Vladimir Pavlovic <vladimir@cs.rutgers.edu> CSpecificityMeasure Class SpecificityMeasure used to measure specificity of 2-class classifier CSpectrumMismatchRBFKernel Spectrum mismatch rbf kernel CSpectrumRBFKernel Spectrum rbf kernel CSphericalKernel Spherical kernel CSplineKernel Computes the Spline Kernel function which is the cubic polynomial CSplittingStrategy Abstract base class for all splitting types. Takes a CLabels instance and generates a desired number of subsets which are being accessed by their indices via the method generate_subset_indices(...)
CSqrtDiagKernelNormalizer SqrtDiagKernelNormalizer divides by the Square Root of the product of the diagonal elements CSquaredHingeLoss Class CSquaredHingeLoss implements a squared hinge loss function CSquaredLoss CSquaredLoss implements the squared loss function CStatistics Class that contains certain functions related to statistics, such as the student's t distribution CStratifiedCrossValidationSplitting Implementation of stratified cross-validation on the base of CSplittingStrategy. Produces subset index sets of equal size (at most one difference) in which the label ratio is equal (at most one difference) to the label ratio of the specified labels CStreamingAsciiFile Class StreamingAsciiFile to read vector-by-vector from ASCII files CStreamingDotFeatures Streaming features that support dot products among other operations CStreamingFeatures Streaming features are features which are used for online algorithms CStreamingFile A Streaming File access class CStreamingFileFromFeatures Class StreamingFileFromFeatures to read vector-by-vector from a CFeatures object CStreamingFileFromSimpleFeatures< T > Class CStreamingFileFromSimpleFeatures is a derived class of CStreamingFile which creates an input source for the online framework from a CSimpleFeatures object CStreamingFileFromSparseFeatures< T > Class CStreamingFileFromSparseFeatures is derived from CStreamingFile and provides an input source for the online framework. It uses an existing CSparseFeatures object to generate online examples CStreamingFileFromStringFeatures< T > Class CStreamingFileFromStringFeatures is derived from CStreamingFile and provides an input source for the online framework from a CStringFeatures object CStreamingSimpleFeatures< T > This class implements streaming features with dense feature vectors CStreamingSparseFeatures< T > This class implements streaming features with sparse feature vectors. The vector is represented as an SGSparseVector<T>. 
Each entry is of type SGSparseVectorEntry<T> with members `feat_index' and `entry' CStreamingStringFeatures< T > This class implements streaming features as strings CStreamingVwCacheFile Class StreamingVwCacheFile to read vector-by-vector from VW cache files CStreamingVwFeatures This class implements streaming features for use with VW CStreamingVwFile Class StreamingVwFile to read vector-by-vector from Vowpal Wabbit data files. It reads the example and label into one object of VwExample type CStringDistance< ST > Template class StringDistance CStringFeatures< ST > Template class StringFeatures implements a list of strings CStringFileFeatures< ST > File based string features CStringKernel< ST > Template class StringKernel, is the base class of all String Kernels CStringPreprocessor< ST > Template class StringPreprocessor, base class for preprocessors (cf. CPreprocessor) that apply to CStringFeatures (i.e. strings of variable length) CSubGradientLPM Class SubGradientSVM trains a linear classifier called Linear Programming Machine, i.e. a SVM using a CSubGradientSVM Class SubGradientSVM CSubset Class for adding subset support to a class. Provides an interface for getting/setting subset_matrices and index conversion. Do not inherit from this class, use it as variable. 
Write wrappers for all get/set functions
CSVM - A generic Support Vector Machine Interface
CSVMLight - Class SVMlight
CSVMLightOneClass - Trains a one-class C SVM
CSVMLin - Class SVMLin
CSVMOcas - Class SVMOcas
CSVMSGD - Class SVMSGD
CSVRLight - Class SVRLight, performs support vector regression using SVMLight
CSyntaxHighLight - Syntax highlight
CTanimotoDistance - Class Tanimoto coefficient
CTanimotoKernelNormalizer - TanimotoKernelNormalizer performs kernel normalization inspired by the Tanimoto coefficient (see http://en.wikipedia.org/wiki/Jaccard_index)
CTaxonomy - CTaxonomy is used to describe hierarchical structure between tasks
CTensorProductPairKernel - Computes the Tensor Product Pair Kernel (TPPK)
CTime - Class Time that implements a stopwatch based on either CPU time or wall-clock time
CTOPFeatures - The class TOPFeatures implements TOP kernel features obtained from two Hidden Markov models
CTrie< Trie > - Template class Trie implements a suffix trie, i.e. a tree in which all suffixes up to a certain length are stored
CTron - Class Tron
CTStudentKernel - Generalized T-Student kernel
CVarianceKernelNormalizer - VarianceKernelNormalizer divides by the "variance"
CVowpalWabbit - Class CVowpalWabbit is the implementation of the online learning algorithm used in Vowpal Wabbit
CVwAdaptiveLearner - VwAdaptiveLearner uses an adaptive subgradient technique to update weights
CVwCacheReader - Base class from which all cache readers for VW should be derived
CVwCacheWriter - CVwCacheWriter is the base class for all VW cache-creating classes
CVwEnvironment - Class CVwEnvironment is the environment used by VW
CVwLearner - Base class for all VW learners
CVwNativeCacheReader - Class CVwNativeCacheReader reads from a cache exactly as that which has been produced by VW's default cache format
CVwNativeCacheWriter - Class CVwNativeCacheWriter writes a cache exactly as that which would be produced by VW's default cache format
CVwNonAdaptiveLearner - VwNonAdaptiveLearner uses a standard gradient descent weight update rule
CVwParser - CVwParser is the object which provides the functions to parse examples from buffered input
CVwRegressor - Regressor used by VW
CWaveKernel - Wave kernel
CWaveletKernel - Class WaveletKernel
CWDFeatures - Features that compute the Weighted Degree Kernel feature space explicitly
CWDSVMOcas - Class WDSVMOcas
CWeightedCommWordStringKernel - The WeightedCommWordString kernel may be used to compute the weighted spectrum kernel (i.e. a spectrum kernel for 1- to K-mers, where each k-mer length is weighted by some coefficient)
CWeightedDegreePositionStringKernel - The Weighted Degree Position String kernel (Weighted Degree kernel with shifts)
CWeightedDegreeRBFKernel - Weighted degree RBF kernel
CWeightedDegreeStringKernel - The Weighted Degree String kernel
CWRACCMeasure - Class WRACCMeasure, used to measure the weighted relative accuracy of a 2-class classifier
CZeroMeanCenterKernelNormalizer - ZeroMeanCenterKernelNormalizer centers the kernel in feature space
DynArray< T > - Template dynamic array class that creates an array that can be used like a list or an array
Example< T > - Class Example is the container type for the vector+label combination
MKLMultiClassGLPK - MKLMultiClassGLPK is a helper class for MKLMultiClass
MKLMultiClassGradient - MKLMultiClassGradient is a helper class for MKLMultiClass
MKLMultiClassOptimizationBase - MKLMultiClassOptimizationBase is a helper class for MKLMultiClass
Model - Class Model
Parallel - Class Parallel provides helper functions for multithreading
Parameter - Parameter class
ParameterMap - Implements a map of ParameterMapElement instances
ParameterMapElement - Class to hold instances of a parameter map. Each element contains a key and a value, which are of type SGParamInfo. May be compared to each other based on their ...
SerializableAsciiReader00 - Serializable ASCII reader
SGIO - Class SGIO, used to do input/output operations throughout shogun
SGMatrix< T > - Shogun matrix
SGNDArray< T > - Shogun n-dimensional array
SGParamInfo - Class that holds information about a certain parameter of a CSGObject. Contains name, type, etc. This is used for mapping types that have changed in different versions of shogun. Instances of this class may be compared to each other. Ordering is based on name; equality is based on all attributes
SGSparseMatrix< T > - Template class SGSparseMatrix
SGSparseVector< T > - Template class SGSparseVector
SGSparseVectorEntry< T > - Template class SGSparseVectorEntry
SGString< T > - Shogun string
SGStringList< T > - Template class SGStringList
SGVector< T > - Shogun vector
ShogunException - Class ShogunException defines an exception which is thrown whenever an error inside of shogun occurs
SSKFeatures - SSKFeatures
substring - Struct Substring, specified by start position and end position
TParameter - Parameter struct
CSerializableFile::TSerializableReader - Serializable reader
TSGDataType - Datatypes that shogun supports
v_array< T > - Class v_array is a templated class used to store variable-length arrays. Memory locations are stored as 'extents', i.e., the address of the first memory location and the address after the last member
Version - Class Version provides version information
VwExample - Example class for VW
VwFeature - One feature in VW
VwLabel - Class VwLabel holds a label object used by VW
data Vec a

First, let us create a tiny two-dimensional vector class. We make it an instance of Arbitrary to use it later for tests.

Instances:
  Typeable1 Vec
  Eq a => Eq (Vec a)
  (Eq (Vec a), Ord a) => Ord (Vec a)
  Show a => Show (Vec a)
  Arbitrary a => Arbitrary (Vec a)

data Mass

Instances:
  Typeable Mass
  (Typeable Mass, Typeable (ValType o Mass), Objective o, UseReal o) => Member o Mass

data Velocity

To define a member with compound types like a vector of real numbers, we use UnderlyingReal to ask the object which real type it prefers, then put the response into the type constructors. We also give a fallback accessor here: if the velocity field is missing, we attempt to re-calculate it from the mass and momentum.

Instances:
  Typeable Velocity
  (Typeable Velocity, Typeable (ValType o Velocity), Objective o, UseReal o, Fractional (UnderlyingReal o)) => Member o Velocity

data Momentum

Instances:
  Typeable Momentum
  (Typeable Momentum, Typeable (ValType o Momentum), Objective o, UseReal o, Fractional (UnderlyingReal o)) => Member o Momentum

data KineticEnergy

Instances:
  Typeable KineticEnergy
  (Typeable KineticEnergy, Typeable (ValType o KineticEnergy), Objective o, UseReal o, Fractional (UnderlyingReal o)) => Member o KineticEnergy

fromMassVelocity :: (Objective o, UseReal o, Fractional real, real ~ UnderlyingReal o) => real -> Vec real -> o

We can write a function that constructs a point particle from its mass and velocity, and we can make the function polymorphic over the representation of the real numbers the objects prefer.

laserBeam :: (Objective o, UseReal o, Fractional real, real ~ UnderlyingReal o) => o

We define an instance of a point-like particle. And again, we can keep it polymorphic, so that anyone can choose its concrete type later, according to their purpose. Thus we achieve a polymorphic encoding of the knowledge of this world, in Haskell.

>>> (laserBeam :: Object DIT) ^? kineticEnergy
Just 1631.25
>>> (laserBeam :: Object Precise) ^? kineticEnergy
Just (6525 % 4)

Moreover, we can ask Ichiro to sign the ball. Usually, we would need to create a new data type to add a new field, but with 'dynamic-object' we can do so without changing the type of the ball. So we can put our precious, one-of-a-kind ball into a toybox together with less uncommon balls, and with various other toys. And still, we can safely access the contents of the toybox without runtime errors, and e.g. see which toy is the heaviest.

>>> let (mySpecialBall :: Object DIT) = laserBeam & insert Autograph "Ichiro Suzuki"
>>> let toybox = [laserBeam, mySpecialBall]
>>> let toybox2 = toybox ++ [duck, lens, banana, envelope, ghost]
>>> maximum $ mapMaybe (^?mass) toybox2
Braingle: 'Darts' Brain Teaser

Probability puzzles require you to weigh all the possibilities and pick the most likely outcome.

Puzzle ID: #144
Category: Probability
Submitted By: duckrocket
Corrected By: sugarnspice4u7

Peter throws two darts at a dartboard, aiming for the center. The second dart lands farther from the center than the first. If Peter now throws another dart at the board, aiming for the center, what is the probability that this third throw is also worse (i.e., farther from the center) than his first? Assume Peter's skillfulness is constant.

Solution: Since the three darts are thrown independently, each has a 1/3 chance of being the best throw. As long as the third dart is not the best throw, it will be worse than the first dart. Therefore the answer is 2/3.
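The 2/3 answer can also be checked empirically. A quick Monte Carlo sketch, assuming each dart's distance from the center is an independent draw from any continuous distribution (uniform is used here purely for convenience, since only the rank order of the three distances matters):

```python
import random

def estimate_third_worse(trials=200_000, seed=1):
    """Estimate P(third dart worse than first | second dart worse than first)."""
    random.seed(seed)
    hits = total = 0
    for _ in range(trials):
        d1, d2, d3 = random.random(), random.random(), random.random()
        if d2 > d1:              # condition of the puzzle: second throw worse
            total += 1
            if d3 > d1:          # third throw also worse than the first
                hits += 1
    return hits / total

p = estimate_third_worse()
```

The estimate lands very close to 0.667, matching the argument above: the third dart is worse than the first exactly when the first dart is not the best of the three.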
Universal schemes for ...
Results 1 - 10 of 55

IEEE Transactions on Information Theory, 1998
Cited by 136 (11 self)
Abstract — This paper consists of an overview of universal prediction from an information-theoretic perspective. Special attention is given to the notion of probability assignment under the self-information loss function, which is directly related to the theory of universal data compression. Both the probabilistic setting and the deterministic setting of the universal prediction problem are described, with emphasis on the analogy and the differences between results in the two settings.
Index Terms — Bayes envelope, entropy, finite-state machine, linear prediction, loss function, probability assignment, redundancy-capacity, stochastic complexity, universal coding, universal prediction.

IEEE Transactions on Information Theory, 1996
Cited by 85 (3 self)
We present a sequential investment algorithm, the μ-weighted universal portfolio with side-information, which achieves, to first order in the exponent, the same wealth as the best side-information-dependent investment strategy (the best state-constant rebalanced portfolio) determined in hindsight from observed market and side-information outcomes.
This is an individual sequence result which shows that the difference between the exponential growth rates of wealth of the best state-constant rebalanced portfolio and the universal portfolio with side-information is uniformly less than (d/(2n)) log(n + 1) + (k/n) log 2 for every stock market and side-information sequence and for all time n. Here d = k(m − 1) is the number of degrees of freedom in the state-constant rebalanced portfolio with k states of side-information and m stocks. The proof of this result establishes a close connection between universal investment and universal data compression.
Keywords: Universal investment, univ...

Mathematical Finance, 1998
Cited by 80 (10 self)
We present an on-line investment algorithm which achieves almost the same wealth as the best constant-rebalanced portfolio determined in hindsight from the actual market outcomes. The algorithm employs a multiplicative update rule derived using a framework introduced by Kivinen and Warmuth. Our algorithm is very simple to implement and requires only constant storage and computing time per stock in each trading period. We tested the performance of our algorithm on real stock data from the New York Stock Exchange accumulated during a 22-year period. On this data, our algorithm clearly outperforms the best single stock as well as Cover's universal portfolio selection algorithm. We also present results for the situation in which the ...

We present an on-line investment algorithm which achieves almost the same wealth as the best constant-rebalanced portfolio investment strategy.
The algorithm employs a multiplicative update rule derived using a framework introduced by Kivinen and Warmuth [20]. Our algorithm is very simple to implement, and its time and storage requirements grow linearly in the number of stocks.

Journal of Statistical Physics, 1999
Cited by 43 (8 self)
Computational mechanics, an approach to structural complexity, defines a process's causal states and gives a procedure for finding them. We show that the causal-state representation (an ε-machine) is the minimal one consistent with ...
Consistency in this strong sense cannot be attained in a universal sense for all stationary processes with values in an infinite alphabet, but weak consistency can. Some applications of the estimates to on-line forecasting, regression and classification are discussed.

1991
Cited by 28 (11 self)
Abstract — Sequential decision algorithms are investigated, under a family of additive performance criteria, for individual data sequences, with various application areas in information theory and signal processing. Simple universal sequential schemes are known, under certain conditions, to approach optimality uniformly as fast as n^{-1} log n, where n is the sample size. For the case of finite-alphabet observations, the class of schemes that can be implemented by finite-state machines (FSMs) is studied. It is shown that Markovian machines with sufficiently long memory exist that are asymptotically nearly as good as any given FSM (deterministic or randomized) for the purpose of sequential decision. For the continuous-valued observation case, a useful class of parametric schemes is discussed, with special attention to the recursive least squares (RLS) algorithm.
Index Terms — Sequential compound decision problem, empirical ...

IEEE Trans. Inform. Theory, 1998
We consider the problem of one-step-ahead prediction of a real-valued, stationary, strongly mixing random process {X_i}. The best mean-square predictor of X_0 is its conditional mean given the entire infinite past. Given a sequence of observations X_1, X_2, ..., X_N, we propose estimators ...
Cited by 26 (1 self)
We consider the problem of one-step-ahead prediction of a real-valued, stationary, strongly mixing random process {X_i}. The best mean-square predictor of X_0 is its conditional mean given the entire infinite past. Given a sequence of observations X_1, X_2, ..., X_N, we propose estimators for the conditional mean based on sequences of parametric models of increasing memory and of increasing dimension, for example, neural networks and Legendre polynomials. The proposed estimators select both the model memory and the model dimension, in a data-driven fashion, by minimizing certain complexity-regularized least squares criteria. When the underlying predictor function has a finite memory, we establish that the proposed estimators are memory-universal: the proposed estimators, which do not know the true memory, deliver the same statistical performance (rates of integrated mean-squared error) as that delivered by estimators that know the true memory. Furthermore, when the underlying predictor function does not have a finite memory, we establish that the estimator based on Legendre polynomials is consistent.

IEEE Trans. Inform. Theory, 2000
Cited by 23 (1 self)
We consider here a universal predictor based on pattern matching. For a given string x_1, x_2, ..., x_n, the predictor will guess the next symbol x_{n+1} in such a way that the prediction error tends to zero as n → ∞, provided the string x_1^n = x_1, x_2, ..., x_n is generated by a mixing source. We shall prove that the rate of convergence of the prediction error is O(n^{-ε}) for any ε > 0.
In this preliminary version, we only prove our results for memoryless sources and give a sketch for mixing sources. However, we indicate that our algorithm can predict equally successfully the next k symbols as long as k = O(1).
1 Introduction. Prediction is important in communication, control, forecasting, investment and other areas. We understand how to do optimal prediction when the data model is known, but one needs to design a universal prediction algorithm that will perform well no matter what the underlying probabilistic model is. More precisely, let X_1, X_2, ... be an infinite ...

2001
Cited by 19 (7 self)
We present simple procedures for the prediction of a real-valued sequence. The algorithms are based on a combination of several simple predictors. We show that if the sequence is a realization of a bounded stationary and ergodic random process, then the average of squared errors converges, almost surely, to that of the optimum, given by the Bayes predictor. We offer an analogous result for the prediction of stationary Gaussian processes. The work of the second author was supported by DGES grant PB96-0300.
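The last abstract describes combining several simple predictors so that the average squared error approaches that of the best one. A minimal sketch of one standard scheme of this kind, the exponentially weighted average forecaster; the two toy experts, the parameter names, and the sample series below are illustrative, not taken from the cited papers:

```python
import math

def weighted_forecast(series, eta=0.5):
    """Combine two naive experts (predict-last-value, predict-running-mean)
    by exponentially down-weighting each expert's cumulative squared loss."""
    losses = [0.0, 0.0]          # cumulative squared loss per expert
    predictions = []
    for t, x in enumerate(series):
        experts = [
            series[t - 1] if t > 0 else 0.0,        # expert 1: last value
            sum(series[:t]) / t if t > 0 else 0.0,  # expert 2: running mean
        ]
        weights = [math.exp(-eta * l) for l in losses]
        z = sum(weights)
        weights = [w / z for w in weights]          # normalize to sum to 1
        predictions.append(sum(w * e for w, e in zip(weights, experts)))
        losses = [l + (e - x) ** 2 for l, e in zip(losses, experts)]
    return predictions, weights

preds, final_weights = weighted_forecast([1.0, 1.2, 0.9, 1.1, 1.0, 1.05])
```

After a few observations, the weight mass shifts toward whichever expert has accumulated less squared loss, which is the mechanism behind the "combination of several simple predictors" results cited above.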
any shortcut?

Re: any shortcut?

When you have a series you must guess at the general term. You can play spot-the-pattern or you can use curve fitting, but you will come up with the same thing as Nehushtan did above. You break it into partial fractions as above. Now you telescope it out: just subtract the bottom from the top, again with massive cancellations.

In mathematics, you don't understand things. You just get used to them.

I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians, are contemptuous about proof.
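The forum post does not show the actual series (the formula images did not survive extraction), so as a concrete stand-in here is the classic telescoping sum Σ 1/(n(n+1)), worked in exact arithmetic. The partial-fraction split 1/(n(n+1)) = 1/n − 1/(n+1) makes every interior term cancel, exactly the "subtract the bottom from the top" step described above:

```python
from fractions import Fraction

def telescoping_partial_sum(N):
    """Sum 1/(n(n+1)) for n = 1..N via the partial fractions 1/n - 1/(n+1)."""
    return sum(Fraction(1, n) - Fraction(1, n + 1) for n in range(1, N + 1))

# All interior terms cancel, so the partial sum collapses to 1 - 1/(N+1).
```

For N = 100 this gives exactly 100/101, and as N grows the sum tends to 1.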
INTRO_TAC : string -> tactic

Breaks down outer quantifiers in goal, introducing variables and named hypotheses.

Given a string s, INTRO_TAC s breaks down outer universal quantifiers and implications in the goal, fixing variables and introducing assumptions with names. It combines several forms of introduction of logical connectives. The introduction pattern uses the following syntax:

□ ! fix_pattern introduces universally quantified variables as with FIX_TAC
□ a destruct pattern introduces and destructs an implication
□ juxtaposition introduces a conjunction in the hypothesis
□ ... | ... | ... introduces a branch in a disjunction in the hypothesis
□ #n selects disjunct n in the goal

Fails if the pattern is ill-formed or does not match the form of the goal.

Here we introduce the universally quantified outer variables and assume the antecedent, splitting apart conjunctions and disjunctions:

  # g `!p q r. p \/ (q /\ r) ==> p /\ q \/ p /\ r`;;
  # e (INTRO_TAC "!p q r; p | q r");;
  val it : goalstack = 2 subgoals (2 total)

  0 [`q`] (q)
  1 [`r`] (r)
  `p /\ q \/ p /\ r`

  0 [`p`] (p)
  `p /\ q \/ p /\ r`

Now a further step will select the first disjunct to prove in the top goal:

  # e (INTRO_TAC "#1");;
  val it : goalstack = 1 subgoal (2 total)

  0 [`p`] (p)
  `p /\ q`

See also: DESTRUCT_TAC, DISCH_TAC, FIX_TAC, GEN_TAC, LABEL_TAC, REMOVE_THEN, STRIP_TAC, USE_THEN.
Math Forum Discussions - Re: 1 Year of Geometry

Date: Apr 11, 1995 12:49 AM
Author: Michael Keyton
Subject: Re: 1 Year of Geometry

On Thu, 6 Apr 1995, Linda Dodge wrote:
> Do we really need a full year of geometry, anyway?

I ask somewhat facetiously the following: suppose we had three years of geometry and one year of algebra in the curriculum; would not the question "Do we really need a full year of algebra, anyway?" be appropriate? Yes, we need a full year of geometry, but we need a full year of geometry and not some year wasted without mathematics. We need more years of investigations using thought and fewer years of learning meaningless algorithmic processes that are more easily forgotten than learned. We need years of having students learn to think through a problem, to understand, and to develop rather than to mimic. Do we need a full year of geometry? Yes, and more. Let's not bail out the students, let's not make their lives easy, but rather let's get inside their heads and rummage around, expunging the inert while getting them to begin generating fruitful ...

If I had a choice of having students study 3 years of geometry and only 1 of algebra as opposed to the present, I would have guessed heaven had arrived on the wings of a TI-92.

Michael Keyton
St. Mark's School of Texas
standing-wave ratio (SWR, VSWR, ISWR)

Standing-wave ratio (SWR) is a mathematical expression of the non-uniformity of an electromagnetic field (EM field) on a transmission line such as coaxial cable. Usually, SWR is defined as the ratio of the maximum radio-frequency (RF) voltage to the minimum RF voltage along the line. This is also known as the voltage standing-wave ratio (VSWR). The SWR can also be defined as the ratio of the maximum RF current to the minimum RF current on the line (current standing-wave ratio or ISWR). For most practical purposes, ISWR is the same as VSWR.

Under ideal conditions, the RF voltage on a signal transmission line is the same at all points on the line, neglecting power losses caused by electrical resistance in the line wires and imperfections in the dielectric material separating the line conductors. The ideal VSWR is therefore 1:1. (Often the SWR value is written simply in terms of the first number, or numerator, of the ratio because the second number, or denominator, is always 1.) When the VSWR is 1, the ISWR is also 1. This optimum condition can exist only when the load (such as an antenna or a wireless receiver), into which RF power is delivered, has an impedance identical to the impedance of the transmission line. This means that the load resistance must be the same as the characteristic impedance of the transmission line, and the load must contain no reactance (that is, the load must be free of inductance or capacitance). In any other situation, the voltage and current fluctuate at various points along the line, and the SWR is not 1.

When the line and load impedances are identical and the SWR is 1, all of the RF power that reaches a load from a transmission line is utilized by that load. When the load is an antenna, the utilization takes the form of EM-field radiation. If the load is a communications receiver or terminal, the signal power is converted into some other form, such as an audio-visual display.
If the impedance of the load is not identical to the impedance of the transmission line, the load does not absorb all the RF power (called forward power) that reaches it. Instead, some of the RF power is sent back toward the signal source when the signal reaches the point where the line is connected to the load. This is known as reflected power or reverse power.

The presence of reflected power, along with the forward power, sets up a pattern of voltage maxima (loops) and minima (nodes) on the transmission line. The same thing happens with the distribution of current. The SWR is the ratio of the RF voltage at a loop to the RF voltage at a node, or the ratio of the RF current at a loop to the RF current at a node.

In theory, there is no limit to how high this ratio can get. The worst cases (highest SWR values) occur when there is no load connected to the end of the line. This condition, known as an unterminated transmission line, is manifested when the end of the line is either short-circuited or left open. In theory, the SWR is infinite in either of these cases; in practice, it is limited by line losses, but can exceed 100. This can give rise to extreme voltages and currents at certain points on the line.

The SWR on a transmission line is mathematically related to (but not the same as) the ratio of reflected power to forward power. In general, the higher the ratio of reflected power to forward power, the greater is the SWR. The converse is also true. When the SWR on a transmission line is high, the power loss in the line is greater than the loss that occurs when the SWR is 1. This exaggerated loss, known as SWR loss, can be significant, especially when the SWR exceeds 2 and the transmission line has significant loss to begin with. For this reason, RF engineers strive to minimize the SWR on communications transmission lines.
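The mismatch-to-SWR relationship described above can be made concrete with the standard textbook formulas: the reflection coefficient for a purely resistive load is Γ = (Z_L − Z_0)/(Z_L + Z_0), the magnitude of Γ can also be obtained from the power ratio as |Γ| = sqrt(P_reflected / P_forward), and SWR = (1 + |Γ|)/(1 − |Γ|). A small sketch (the function names are my own):

```python
import math

def reflection_coefficient(z_load, z0=50.0):
    """Magnitude of the reflection coefficient for a purely resistive load."""
    return abs((z_load - z0) / (z_load + z0))

def vswr_from_gamma(gamma):
    """SWR = (1 + |gamma|) / (1 - |gamma|); grows without bound as |gamma| -> 1."""
    return (1 + gamma) / (1 - gamma)

def vswr_from_powers(p_forward, p_reflected):
    """|gamma| = sqrt(P_reflected / P_forward), then the same SWR formula."""
    return vswr_from_gamma(math.sqrt(p_reflected / p_forward))

# A matched 50-ohm load reflects nothing (SWR 1); a 100-ohm load on a
# 50-ohm line gives |gamma| = 1/3 and hence SWR 2.
```

Note how SWR 2, the threshold the article flags for significant SWR loss, corresponds to only 1/9 of the forward power being reflected.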
A high SWR can have other undesirable effects, too, such as transmission-line overheating or breakdown of the dielectric material separating the line conductors. In some situations, such as those encountered at relatively low RF frequencies, low RF power levels, and short lengths of low-loss transmission line, a moderately high SWR does not produce significant SWR loss or line overheating, and can therefore be tolerated.

This was last updated in September 2005
Contributor(s): Olivier Cauvin
Homework Help

Posted by Elizabeth on Saturday, March 13, 2010 at 1:14pm.

Find the area of the circle described. Use pi = 22/7. Diameter = 1.4 cm. Please help!

• 7th Grade Math - Ms. Sue, Saturday, March 13, 2010 at 1:21pm

Area = pi * r^2
Radius = 0.7 or 7/10
A = 22/7 * 49/100
A = 1078/700
A = 1 54/100 = 1 27/50 square centimeters

• 7th Grade Math - shr=er, Sunday, March 14, 2010 at 4:39am

The circumference of a circle and the perimeter of a square are both 40 cm. Which has a greater area, the circle or the square? Find the circumference.

• 7th Grade Math shr=er - PsyDAG, Sunday, March 14, 2010 at 12:03pm

First, if you have a question, it is much better to put it in as a separate post in <Post a New Question> rather than attaching it to a previous question, where it is more likely to be ...

Second, you have already stated the circumference of the circle. If the perimeter of a square = 40 cm, each side must be 10 cm, giving you an area of 10*10 = 100 cm^2.

Circle circumference = π d = 40, where d = diameter. Therefore d = 40/π.
Circle area = (1/4) π d^2 = (1/4) π (40/π)^2

Insert the d value from the first equation into the second, solve for the area, and compare to the square's area.

• 7th Grade Math - hello, Wednesday, March 24, 2010 at 7:19pm

If it's the diameter, you just cut that in half to make the radius; then you multiply by pi, which is 3.14 or 22/7. You're welcome (:
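Both calculations in the thread above can be checked directly. A short sketch in exact fractions, using the same π ≈ 22/7 approximation the thread uses:

```python
from fractions import Fraction

PI = Fraction(22, 7)  # the approximation specified in the problem

def circle_area_from_diameter(d):
    """A = pi * r^2, with r = d / 2."""
    r = d / 2
    return PI * r * r

# Part 1: diameter 1.4 cm, so radius 7/10 cm.
area = circle_area_from_diameter(Fraction(14, 10))

# Part 2: circumference and perimeter are both 40 cm.
square_area = Fraction(40, 4) ** 2          # side 10 cm, area 100 cm^2
circle_area = Fraction(40) ** 2 / (4 * PI)  # A = C^2 / (4 * pi)
```

The first part gives 77/50 = 1 27/50 cm², matching Ms. Sue's arithmetic. For the second, the circle wins: 1400/11 ≈ 127.3 cm² versus 100 cm² for the square, consistent with PsyDAG's outline.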
Mathematics Class X SA-2: design and blueprint of the question paper

According to the sample paper issued by CBSE, there are 34 questions in total, distributed as follows:

• 8 from Algebra
• 8 from Geometry
• 8 from Mensuration
• 3 from Trigonometry
• 3 from Probability
• 4 from Coordinate Geometry

5 comments:

Can I get the blueprint of the Class X SA-1 math paper?

Very informative post indeed. Being enrolled in http://www.wiziq.com/course/7618-full-preparation-for-class-10-mathematics, I was looking for such articles online to assist me, and your post helped me a lot. :)

I admire the valuable information you offer in your articles. I will bookmark your blog and have my friends check up here often. I am quite sure they will learn lots of new stuff here!

Thanks for such useful stuff. These papers will help with better preparation. If you want more CBSE Sample Papers for Class 10, visit: Latest CBSE Sample Papers for Class 10
Benjamin Graham Formula Free Stock Valuation Spreadsheet

Welcome to the Graham Formula

If you haven't read The Intelligent Investor, start with the Graham Formula Stock Valuation tutorial. Instead of repeating the theory here, I've applied Benjamin Graham's formula to a free Graham Formula spreadsheet that will allow you to quickly value the intrinsic value of a company the Benjamin Graham way. There are a couple of sites that already do this online, but I wanted something where I have control and can make adjustments.

A quick quote to start things off:

Confronted with a like challenge to distill the secret of sound investment into three words, we venture the following motto, Margin of Safety. – Benjamin Graham

The Benjamin Graham Formula Overview

Ben Graham's formula is as follows:

$\text{Intrinsic Value} = \dfrac{EPS \times (8.5 + 2g) \times 4.4}{\text{20-yr corporate bond yield}}$

- EPS refers to earnings over a period of years, not just the previous or current year. Use a normalized version.
- 8.5 is the P/E of a company with no growth.
- g is the growth rate of the expected earnings. In the premium stock value spreadsheet, the growth rate is user defined. Check out a method to determine growth rate.
- Back when Graham wrote the book, he was using a 20-yr AAA corporate bond rate of 4.4%. To apply the formula today, we need to normalize it to today's rate. I like to use the 20-yr AA corporate bond rate as the denominator, since the AA rate is slightly higher than the AAA rate and will give a slightly conservative number.

However, I use a very slight modification to this formula, which I detail in an article I wrote titled "How to Value a Stock with the Ben Graham Formula".

How the Expected Earnings in Graham's Formula Were Calculated

A difficulty I had was figuring out how to come up with a reasonable future EPS guide. Here is how I calculated the future EPS. Note, I am a conservative guy; if you feel the ranges are incorrect, try changing some things yourself.

1.
For the 1st future year, I took the constant at which the EPS had linearly increased over 10 years.
2. I added that constant to the average increase of EPS throughout the past 10 years.
3. I then added an additional "growth sum" to the number I got from step 2.
4. For the 2nd future year, I took the constant.
5. Added it to the 1st future year.
6. Added the "growth sum".
7. And so on.

How to Download the Free Graham Formula Spreadsheet

To download the free Graham Formula spreadsheet, simply enter your email in the form at the bottom of the page. Once you have entered your email, you will automatically receive not just the Graham formula spreadsheet but eight more spreadsheets for your own use. You will also be able to get articles such as this directly in your inbox.

How To Use The Free Spreadsheet

I've tried to make it as user-friendly and simple to understand as possible. The spreadsheet requires manual inputs for the required data. Follow the instructions in the spreadsheet to use it properly.

Premium Stock Valuation Spreadsheets

Feel free to check out this free version and then, when ready, go to the stock valuation software page and review what you will get with the premium version. The premium version includes several valuation models as well as fundamental analysis data, historical data, charts and competitor comparison features. Just by entering one ticker, you can immediately get all that information on your favorite stock, which will save you hours in your analysis. Go now and see for yourself why people rave about the spreadsheets.

Free Benjamin Graham Formula Spreadsheet Screenshot

Additional links to resources

Get Tips and Strategies to Achieve Higher Returns
Bonus: Get 9 FREE Investing Spreadsheet Calculators

This is a great spreadsheet and I would love to use it. I have Microsoft Office 2007 and I tried adding the add-in and it is not working properly with the directions you provided. I installed the add-in but it is not working… the sheet does not update the cells.
Can you please help?

If you have a specific issue, post a comment or send me an email ([email protected]) and I will get back to you.

Great spreadsheet, but do you know how I can put the add-in on iWork for Apple OS X?

Sorry, but I have no idea how Excel works on Apple. Doesn't Apple allow you to run Windows as well? If that is the case, I assume it is the same as on a PC. Unzip it to the correct folder and run it. To improve load time, turn off automatic calculations in Excel. Then press F9 every time you want to update.

I love the organization and quality of the worksheet. My only question is, what exactly is user growth and why is it set at 22%? Is it wise to change it?

Hi Jay, for the Ben Graham Formula spreadsheet, the growth rate is the EPS growth rate normalized over 5 or 10 years, depending on which spreadsheet you are using. If this rate is too high, it's sensible to adjust the rate yourself, but don't use the growth rate that only projects for the next year, as it will provide skewed and incorrect results. If you like the quality and organization, you should check out the premium version.

I'm new to investing and am having trouble plugging the numbers into the formula. I know it's something simple but I can't figure it out. In the above example (AAPL) my formula looks like this:

5.16 x (8.5 + 30) x .0597
5.16 x 38.5 x .0597 = $11.86

Would someone please point out where my mistake is?

@MXH (edited my first comment because I misunderstood what you meant) Gotta be careful with the brackets and which operations should be done first.

=5.16*(8.5+(2*15)*(4.4/5.5)) = $167.7

Work your way from the inside out.

I'm trying to do it all manually so I can understand what I'm doing.

5.16 * 38.5 = 198.66
198.66 * 4.4/5.5
198.66 * .8 = 158.93

I know this is remedial but I can't figure it out.

Never mind, I got it (lol). Thank you.

It seems like you're doing it backwards.
Do these steps one at a time and make sure you press = after you do the +8.5, otherwise it will do 8.5 x 5.16, which isn't what we want.

4.4/5.5 x (2 x 15) + 8.5 = 32.5
32.5 x 5.16 = $167.7

Voilà. Hope that helps.

How can this spreadsheet be easily converted to Buffett's ideas on investing?

@Larry, which ideas are you referring to? Buying $1 for 50c? Buying good companies at a cheap price? Buying with margin of safety? The spreadsheets on this site actually deal with all three and more.

Go to http://www.traineetrader.com/importing-stock-quotes-to-excel-using-smf-add-in/ for another look at how to install on Excel 2007.

I just purchased and downloaded. When I hit F9, all cells turn to #NAME?. I checked Morningstar and the financials are there. I reinstalled the add-in several times. I also checked the macro function in Excel and have it set to 'disable but notify'. I do not get a notification when I open the spreadsheet. I am using 2007. Your assistance is appreciated.

First off, thank you for this invaluable resource. The time you put into this is much appreciated. Bruce: I had a similar problem. One cell in the spreadsheet is linking to the plug-in, which you probably stored in your Program Files. Try changing the location of the plug-in to My Documents and changing the link in the cell. That's how it worked for me.

Thanks Ash. I tell people to install it in a certain directory, as they seem to find some difficulty in working with Excel add-ins, but as long as Excel knows where the add-in is, it should work.

Thanks for the spreadsheet, it is great! I did have a question about one of your formulas. You project 'normal earnings' by taking the median of the previous three years' EPS and the next three years' projection. However, to get the projection, your calculation uses both the user-defined growth rate plus the historical trendline (B11:D11). It seems that using both over-inflates the projected earnings. I was just wondering why you think it is best to use both.
Hi Colton, since it is taking the normal earnings, if you just use the historical EPS the resulting number is the median value, so the EPS would be grossly understated. That's why I projected 3 years, to try and come up with a realistic EPS for the next few years.

I'm new to investing and am reading The Intelligent Investor. On page 155 of the book, Graham states in his example: "Our earning power value for American Smelting exceeds twice the asset value by $34 per share." How do you figure out the asset value of American Smelting?

Are you asking specifically for American Smelting? Because that company no longer exists.

Yes, I was specifically referring to American Smelting. Although it no longer exists, I was wondering if there is enough data given in the book to figure out how he came about his figure for net asset value. If so, how?

Well, if the book provides the numbers for the assets and the total liabilities, you just have to enter them into his formula. See the post on net net asset value for details. If you don't bother with the 50% and 75% multiplication, it is just a net asset value formula.

May I know the source of this formula? Thanks.

The formula is from "The Intelligent Investor".

I tried using the OSV_Graham basic spreadsheet. It seems that the Morningstar link to get revenues ("http://quicktake.morningstar.com/Stock/Income10.asp?Symbol=") is no longer working. Does anyone know what I should replace the link with? Alternatively, is there an update to the spreadsheet that has this problem fixed?

Sign up using the form on the right and you will receive an email with all the latest working downloads.

Hi Jae, are you sure you have the brackets in the right place?

= EPS*(8.5+(2*GROWTH*100)*(4.4/AAA BOND YIELD))

Here is the formula as I recall it and as most sites state it, which would have the brackets like this:

=EPS*(8.5+2*GROWTH*100)*4.4/AAA BOND YIELD

or simplified to

=EPS*(8.5+200*GROWTH)*4.4/AAA BOND YIELD

I look forward to your answer.
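To make the bracket question concrete, here is a small Python sketch (not from the article; the example numbers 5.16 for EPS, 15 for growth, and a 5.5% AAA yield are the ones used earlier in this comment thread) computing the formula both ways:

```python
def graham_as_computed_above(eps, growth_pct, aaa_yield_pct):
    # Bracketing used in the worked example earlier in the thread:
    # the 4.4/yield adjustment multiplies only the 2g term.
    return eps * (8.5 + (2 * growth_pct) * (4.4 / aaa_yield_pct))

def graham_as_printed(eps, growth_pct, aaa_yield_pct):
    # Bracketing as the commenter recalls it from most sources:
    # the whole (8.5 + 2g) factor is scaled by 4.4/yield.
    return eps * (8.5 + 2 * growth_pct) * 4.4 / aaa_yield_pct

print(round(graham_as_computed_above(5.16, 15, 5.5), 1))  # 167.7
print(round(graham_as_printed(5.16, 15, 5.5), 2))         # 158.93
```

The two bracketings give different intrinsic values, which is exactly the discrepancy being asked about.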
Yes, the article needs to be updated with the proper brackets. The premium spreadsheets all have the proper formulas.

Wonderful post, Jae. I am researching the relevance of the Benjamin Graham formula to non-American stocks, particularly Indian stocks. I would like to hear from you about the relevance of it when I use EPS in Indian rupees, as opposed to American dollars. Mathematically, I see no difference, since the resulting valuation will be in rupees. But I would like to hear your opinion.

It shouldn't matter what the currency is. Earnings are earnings everywhere.

I have a web site where I research stocks under five dollars. I have many years of experience with these types of stocks. I find that the best measurement of how undervalued a stock is is the price-to-sales ratio of a company's stock. The price-to-sales ratio is the market cap of a company's stock compared to the amount of sales the company does on an annual basis. A good example of a company with a low price-to-sales ratio is Carrols Restaurant Group. The company has a market cap of just 200 million dollars but does over 800 million dollars in annual sales, and is solidly profitable. In other words, the price that the market is valuing the company at is 200 million dollars; this is only one fourth of what the company does in annual sales (800+ million dollars). The stock currently trades at around $9.25 a share under the symbol {TAST}. I think the stock could get to 50.00 dollars a share over the next five years. I base this on the current net profit margin of around 1.75%, or 14 million dollars, on sales of 800 million dollars, and on the assumptions that the company's sales increase by 50%, or 400 million dollars, to 1.2 billion dollars over the next five years, and that the company's net profit margin expands from 1.75% to 4.5%, or 54 million dollars, over the next five years.
Then, if the company's stock increased in price to where it was trading at a price-earnings ratio of 15, this would put the stock at 50 dollars a share. This may seem to be a somewhat optimistic scenario, but not really that much; there are many stocks that trade at price-earnings ratios much higher than 20 times earnings when they become popular. I find that companies like Carrols Restaurant Group are very rare. I also find that companies with low price-to-sales ratios that are profitable or of decent quality tend to become takeover targets, or get taken private by private equity firms, by the management of the company, or by other companies in the same business.

Hey James, one thing I'm concerned about in your valuation of TAST is that there are far too many "ifs". What value do you come up with if TAST is unable to reach your target? What is the value of TAST now?

Hello Jae Jun, what do you think about the mean reversion method to value stocks?

Can you provide more information on what this is?

Jae, I was looking at one of your emails and tried to download the Benjamin Graham formula. I went back to the home page and put my email address in, and the response I received was that I was already subscribed. I know that, but I wasn't allowed into the site to download the spreadsheet. I already have the plug-in from downloading the valuation spreadsheet. That is the one I enjoy the best. Another question on finance stocks: are there any plans in the future to program a similar valuation spreadsheet for banks and other financial stocks? I believe that the financial equities will have a strong upside. Not sure when, but they will come back. Thanks, Pat Connell

@Pat, seeing as how I don't understand the business of financial institutions, I'm going to stick my neck out and say that I probably won't be able to create what you are looking for.
But your question related to the Graham spreadsheet was solved, I believe.

Do you have a service where we can simply type in the equity name and get a graph of valuation? Also, how do you handle non-revenue-producing biotechs?

Hi Sam, the premium spreadsheets graph the historical stock price vs. the intrinsic value based on DCF, but that's about it at the moment. Biotechs are out of my league, so I won't be able to answer that one.

Hi, can we have this work for Indian equities?

Sorry. No international markets are supported.

I tried applying the basic Graham formula to a few companies. It works for some companies, but for others I get a very high value for the growth rate (and for the intrinsic value). For example, for JDA, EPS TTM in 2003 and 2012 were 0.31 and 1.67, so g computes to (1.67-0.31)/(9*0.31) = 49%. For AAPL this computes to (27.68-0.09)/(0.09*9) = 3406%. What am I doing wrong?

Those companies had huge growth. You need to adjust the growth by entering a different EPS.

Dear Jae, I noticed that downloading the BG equation is far too complicated, especially when I am not technically savvy. Can you make it simpler? Thanks, TM

Hi Jae, this is a great initiative on your part. Keep up the good work. I am already subscribed to your newsletter. How can I download the Ben Graham DCF spreadsheet from your link? Thanks and
{"url":"http://www.oldschoolvalue.com/investment-tools/benjamin-graham-formula-valuation-spreadsheet/","timestamp":"2014-04-21T12:08:11Z","content_type":null,"content_length":"91956","record_id":"<urn:uuid:220d0a8e-2cd9-4a3f-9573-379da4ead413>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof by Contradiction: e^n....

Date: 7/5/96 at 10:8:22
From: Scott Turner
Subject: Proof by Contradiction: e^n....

I'm trying to prove that e^n (n is a natural number) is not O(n^m) for any m. Note: e^n = 1 + n + n^2/2! + ....

I've begun by assuming that e^n is O(n^m), in which case there is a constant C such that e^n <= C n^m. Next I took the natural log of both sides, which resulted in n <= ln C + m ln n. From there I don't think that I can say my assumption is false, so I'm stuck.

Date: 7/8/96 at 9:42:5
From: Doctor Richard
Subject: Re: Proof by Contradiction: e^n....

Hi Scott,

That idea of taking ln of both sides was a reasonable thing to try, but in this case it leaves you with an equivalent problem, because you need some sort of approximation of ln(n) in order to derive a contradiction from your inequality.

Instead, I think the simplest way to get this result is to compare n^m with n^(m+1) and realize that the second of these must eventually overwhelm the first, no matter what positive constants those are multiplied by. By "eventually", I mean for large enough n. (Actually, to prove that e^n is not O(n^m), it suffices merely to show that for any constant C > 0, e^n > C n^m for certain arbitrarily high values of n; but in this case showing it for _all_ sufficiently high n just "comes for free".)

So, do you see where I'm going? Just observe that n^(m+1)/(m+1)! < e^n for all positive n, because that's just one of the terms in the series and all the terms are positive. So if you're assuming that e^n < C n^m for all sufficiently large n, we get

    n^(m+1)/(m+1)! < C n^m

for all sufficiently large n. Remember that m here is fixed (though arbitrary). I think you can see how to finish getting a contradiction from this. So e^n can't be O(n^m) after all.

In a nutshell: e^n is a "sum of all non-negative integer powers of n (with coefficients)", so no single one of those powers can be the dominant term in the sum.
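For readers who want the final step that Doctor Richard leaves to the reader spelled out (this uses only the inequality already stated above), written in LaTeX:

```latex
\frac{n^{m+1}}{(m+1)!} < C\,n^{m}
\quad\Longrightarrow\quad
\frac{n}{(m+1)!} < C
\quad\Longrightarrow\quad
n < C\,(m+1)!
```

Since C and m are fixed, C(m+1)! is a constant, so the final inequality cannot hold for all sufficiently large n; this contradicts the assumption that e^n <= C n^m holds for all sufficiently large n.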
-Doctor Richard, The Math Forum Check out our web site!
{"url":"http://mathforum.org/library/drmath/view/54226.html","timestamp":"2014-04-16T13:17:56Z","content_type":null,"content_length":"6936","record_id":"<urn:uuid:fcba5407-134a-40ac-9f21-97049c26da60>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
History of Set Theory

Mathematicians have been using sets since the very beginning of the subject. For example, Greek mathematicians defined a circle as the set of points at a fixed distance r from a fixed point P. However, the concepts of 'infinite set' and 'finite set' eluded mathematicians and philosophers over the centuries.

For example, Hindu minds conceived of the infinite in the scriptural text Ishavasya Upanishad as follows: "The Whole is there. The Whole is here. From the Whole emanates the Whole. Taking away the Whole from the Whole, what remains is still a Whole."

Pythagoras (~585-500 B.C.), a Greek mathematician, associated good and evil with the limited and the unlimited, respectively. Aristotle (384-322 B.C.) said, "The infinite is imperfect, unfinished and therefore, unthinkable; it is formless and confused." The Roman Emperor and philosopher Marcus Aurelius (121-180 A.D.) said, "Infinity is a fathomless gulf, into which all things vanish." The English philosopher Thomas Hobbes (1588-1679) said, "When we say anything is infinite, we signify only that we are not able to conceive the ends and bounds of the thing named."

The working mathematician, as well as the man in the street, is seldom concerned with the unusual question: what is a number? But the attempt to answer this question precisely has motivated much of the work by mathematicians and philosophers in the foundations of mathematics during the past hundred years. Characterization of the integers, rational numbers and real numbers has been a central problem for the classical researches of Weierstrass, Dedekind, Kronecker, Frege, Peano, Russell, Whitehead, Brouwer, and others. The researches of Georg Cantor around 1870 in the theory of infinite series and related topics of analysis gave a new direction to the development of set theory.
Cantor, who is usually considered the founder of set theory as a mathematical discipline, was led by his work into a consideration of infinite sets or classes of arbitrary character. However, Cantor's results were not immediately accepted by his contemporaries. Also, it was discovered that his definition of a set leads to contradictions and logical paradoxes. The most well known among these was given in 1901 by Bertrand Russell (1872-1970) and is now known as Russell's paradox.

In an effort to resolve these paradoxes, the first reaction of mathematicians was to 'axiomatize' Cantor's intuitive set theory. Axiomatization means the following: starting with a set of unambiguous statements called axioms, whose truth is assumed, one is able to deduce all the remaining propositions of the theory from these axioms using the rules of logical inference.

Russell and Alfred North Whitehead (1861-1947) proposed an axiomatic theory of sets in their three-volume work Principia Mathematica (1910-1913), but mathematicians found it awkward to use. An axiomatic set theory which is workable and is fully logistic was given in 1908 by Ernst Zermelo (1871-1953). This was improved in 1921 by Abraham A. Fraenkel (1891-1965) and T. Skolem (1887-1963) and is now known as the Zermelo-Fraenkel (ZF) axiomatic theory of sets.
{"url":"http://www.mathresource.iitb.ac.in/project/history.htm","timestamp":"2014-04-17T07:49:28Z","content_type":null,"content_length":"7095","record_id":"<urn:uuid:a14af740-27ad-4471-ae5b-10e1f47427c2>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability combinations help!

July 24th 2012, 02:13 PM
Probability combinations help!

Please help. What is the probability of having two aces, two kings, and a queen in a five-card poker hand?

July 24th 2012, 02:25 PM
Re: Probability combinations help!

July 24th 2012, 02:39 PM
Re: Probability combinations help!

I understand how you got that part, but the answer in the back of the book had C(4,2) C(4,2) C(4,1) in the numerator and C(52,5) in the denominator, and gave the answer as 3/216580. I don't understand how they got that answer.

July 24th 2012, 02:43 PM
Re: Probability combinations help!

Do you understand that $\binom{N}{k}=\frac{N!}{k!(N-k)!}~?$

July 24th 2012, 04:09 PM
Re: Probability combinations help!

July 24th 2012, 04:40 PM
Re: Probability combinations help!
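The count discussed in this thread can be checked directly. A short Python sketch (not from the thread) using the numerator and denominator quoted from the book:

```python
from math import comb
from fractions import Fraction

# two aces, two kings, one queen in a five-card hand
numerator = comb(4, 2) * comb(4, 2) * comb(4, 1)  # 6 * 6 * 4 = 144
denominator = comb(52, 5)                         # 2,598,960 possible hands
p = Fraction(numerator, denominator)
print(p)  # 3/54145
```

Note that C(4,2) C(4,2) C(4,1) / C(52,5) reduces to 3/54145 rather than the 3/216580 quoted from the answer key, so one of the two numbers quoted from the book seems to contain a misprint.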
{"url":"http://mathhelpforum.com/math-topics/201321-probability-combinations-help-print.html","timestamp":"2014-04-18T17:04:54Z","content_type":null,"content_length":"9279","record_id":"<urn:uuid:69c2c57a-7976-4b48-ab97-c61dee55c6d7>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
Integral Representations of Graphs

Fernando C. Silva
Departamento de Matemática, Faculdade de Ciências, Universidade de Lisboa, Rua Ernesto de Vasconcelos, 1700 Lisboa - PORTUGAL

Abstract: Following the definition of graph representation modulo an integer given by Erdös and Evans in [1], we call the degree of a representation the number of prime factors in the prime factorization of its modulus. Here we study the smallest possible degree for a representation of a graph.

Full text of the article:

Electronic version published on: 29 Mar 2001. This page was last modified: 27 Nov 2007.

© 1996 Sociedade Portuguesa de Matemática
© 1996–2007 ELibM and FIZ Karlsruhe / Zentralblatt MATH for the EMIS Electronic Edition
{"url":"http://www.emis.de/journals/PM/53f2/2.html","timestamp":"2014-04-21T10:03:05Z","content_type":null,"content_length":"3265","record_id":"<urn:uuid:545ae9b9-0b4e-43ce-bc08-059651df5950>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
Abstract Heresies

I got this from Marshall Spight today.

(define (f a b)
  (if (zero? b)
      a
      (f (logxor a b) (* (logand a b) 2))))

F is better known as ______ ?

You may need these auxiliary functions.

(define (logand a b)
  (cond ((zero? a) 0)
        ((zero? b) 0)
        (else
         (+ (* (logand (floor (/ a 2)) (floor (/ b 2))) 2)
            (if (or (even? a) (even? b))
                0
                1)))))

(define (logxor a b)
  (cond ((zero? a) b)
        ((zero? b) a)
        (else
         (+ (* (logxor (floor (/ a 2)) (floor (/ b 2))) 2)
            (if (even? a)
                (if (even? b) 0 1)
                (if (even? b) 1 0))))))

And please don't just post a spoiler. If you just want 'first credit', email me directly.

What's amusing to me is that it isn't obvious whether F even terminates.

4 comments:

Aren't logand and logxor just bitwise-and and bitwise-xor? This isn't intended to be a spoiler, but the function is pretty much a straightforward translation of what it would look like in hardware.

Nice. It should be obvious, but I didn't recognize it until after I tried a few inputs.

Stelian Ionescu sent in the solution first at 8:33 PDT. Bob Miller sent his solution in at 9:22 PDT. The function is very similar to what is commonly built in hardware, but not exactly the same. The hardware would do it one bit at a time through combinational logic. The straightforward translation of this algorithm to hardware would involve iteratively applying the steps.

logand and logxor are also defined in SRFI-60, if your implementation supports it (,open srfi-60 in scheme48).
{"url":"http://funcall.blogspot.com/2009/08/small-puzzle.html","timestamp":"2014-04-17T00:59:25Z","content_type":null,"content_length":"59167","record_id":"<urn:uuid:04a10f59-8450-4d92-8e52-b797046fcae6>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
Joint probability, conditional probability and Bayes' theorem

For those of you who have taken a statistics course, or covered probability in another math course, this should be an easy review. For the rest of you, we will introduce and define a couple of simple concepts, and a simple (but important!) formula that follows immediately from the definition of the concepts involved. The result is very widely applicable, and the few minutes you spend to become familiar with these ideas may be the most useful few minutes you spend all year!

Sex, Math and English

We'll start out by introducing a simple, concrete example, and defining "joint" and "conditional" probability in terms of that example. Table 1 shows the number of male and female members of the standing faculty in the departments of Mathematics and English. We learn that the Math department has 1 woman and 37 men, while the English department has 17 women and 20 men. The two departments between them have 75 members, of which 18 are women and 57 are men.

Table 1

│        │ Math │ English │ Total │
│ Female │    1 │      17 │    18 │
│ Male   │   37 │      20 │    57 │
│ Total  │   38 │      37 │    75 │

Table 2 (below) shows the same information as proportions (of the total of 75 faculty in the two departments). If we wrote the name, sex and department affiliation of each of the 75 individuals on a ping-pong ball, put all 75 balls in a big urn, shook it up, and chose a ball at random, these proportions would represent the probabilities of picking a female Math professor (about .013, or 13 times in a thousand tries), a female English professor (.227), a male Math professor (.493), and so on. In formula form, we would write P(female, math) = .013, P(female, english) = .227, etc. These are called "joint probabilities"; thus P(female, english) is "the joint probability of female and english".
Note that joint probabilities (like logical conjunctions) are symmetrical, so that P(english, female) means the same thing as P(female, english) -- though often we choose a canonical order in which to write down such categories. Table 2 represents the "joint distribution" of sex and department.

Table 2

│        │ Math │ English │ Total │
│ Female │ .013 │    .227 │  .240 │
│ Male   │ .493 │    .267 │  .760 │
│ Total  │ .506 │    .494 │  1.00 │

The bottom row and rightmost column in Table 2 give us the proportions in the single categories of sex and department: P(female) = .240, P(male) = .760, P(math) = .506, etc. As before, these proportions can also be seen as the probabilities of picking a ball of the designated category by random selection from our hypothetical urn.

N.B.: we've chosen this example because the relationship between sex and academic discipline is concrete, simple, easy to remember -- and highly non-random -- but not because we think it is appropriate or inevitable. For information about efforts to improve the numbers of women mathematicians, see the web page for the AWM; see this page for an example of a highly successful effort to improve the representation of women in computer science at the undergraduate level.

Now suppose that someone chooses a ball at random from the faculty urn, tells us that the department affiliation is "Math", and invites us to guess the sex. We are then basically dealing with just the first column of Table 1, represented in the non-greyed-out portion of Table 3 below:

Table 3

│        │ Math │ English │ Total │
│ Female │    1 │      17 │    18 │
│ Male   │   37 │      20 │    57 │
│ Total  │   38 │      37 │    75 │

Since 37 out of the 38 Math professors are male, for a proportion of 37/38 or about .974, we could cite very good odds for guessing male: we'd be right about 974 times out of a thousand. But Table 2 told us that P(male) is about .760. Why is the probability of male .974 now? Obviously, because the assumptions are different.
With respect to the total set of 75 faculty in Math and English, the proportion of males is about .760; but with respect just to the 38 faculty in Math, the proportion of males is .974. We symbolize that "with respect to" using a vertical line (usually pronounced "given"), so that we write

P(male | math) = .974

which we read "the probability of male given math is .974". This is a conditional probability. Specifically, it is "the conditional probability of male given math". Notice also that this is quite different from the "joint probability" P(male, math). And it is also different from the conditional probability P(math | male). The values for these three quantities are (approximate numbers, as always in this discussion):

P(male | math) = .974
P(male, math)  = .493
P(math | male) = .649

If this isn't all obvious to you, spend a few minutes copying down the tables and the formulae, and calculating values, until the interpretation of such formulae, at least in concrete cases like this one, is second nature to you.

Now suppose that we don't have access to the original counts (as shown in Tables 1 and 3), but only to the probabilities (i.e. the proportions of balls of different sorts in the hypothetical urn), as shown in Table 2 above, or reproduced in Table 4 below, with associated probability formulae.

Table 4

│        │ Math            │ English            │ Total     │
│ Female │ P(female, math) │ P(female, english) │ P(female) │
│        │ .013            │ .227               │ .240      │
│ Male   │ P(male, math)   │ P(male, english)   │ P(male)   │
│        │ .493            │ .267               │ .760      │
│ Total  │ P(math)         │ P(english)         │ 1.00      │
│        │ .506            │ .494               │           │

Could we still calculate P(male | math) -- that is, the probability that a randomly selected faculty member is male, if we know that he or she is in the math department? Sure. The numbers in Table 4 tell us that 506 times out of a thousand, the chosen faculty member will be from the math department -- and that 493 times out of a thousand, the chosen faculty member will be both male and from the math department.
Therefore, if we know that the prof is from math, the chances of maleness are 493/506, or about .974 -- just what it should be! In formulaic terms,

P(male | math) = P(male, math) / P(math)     (eq. 1)

Putting it a bit more abstractly, for any values A and B (in a set-up like the one we're talking about):

P(A | B) = P(A, B) / P(B)     (eq. 2)

Plugging in all the other possible values for A and B, relative to our little faculty urn, we can get eight variants on equation 1:

P(male | math)      = P(male, math) / P(math)
P(female | math)    = P(female, math) / P(math)
P(male | english)   = P(male, english) / P(english)
P(female | english) = P(female, english) / P(english)
P(math | male)      = P(math, male) / P(male)
P(math | female)    = P(math, female) / P(female)
P(english | male)   = P(english, male) / P(male)
P(english | female) = P(english, female) / P(female)

If these relations are not obvious to you, try calculating (at least a few of them) from the probabilities given in Table 2 and the counts given in Table 1.

Bayes' Theorem

Now we're ready for Bayes' theorem, which has recently been called (in the pages of the Economist, no less) "the most important equation in the history of mathematics." This might be a little breathless -- but you should definitely know it!

Since equation (2) -- reproduced as (3a) below -- involves arbitrary meta-variables A and B, it's equally true if we swap them, producing equation (3b). And because joint probability is symmetrical, we can re-write equation (3b) as (3c):

P(A | B) = P(A, B) / P(B)     (eq. 3a)
P(B | A) = P(B, A) / P(A)     (eq. 3b)
P(B | A) = P(A, B) / P(A)     (eq. 3c)

Multiplying both sides of equation (3a) by P(B) gives us equation (4):

P(A | B) P(B) = P(A, B)     (eq. 4)

And multiplying both sides of equation (3c) by P(A) gives us equation (5):

P(B | A) P(A) = P(A, B)     (eq. 5)

Since the right-hand sides of equations (4) and (5) are the same, we can equate their left-hand sides, giving us equation (6):

P(A | B) P(B) = P(B | A) P(A)     (eq. 6)

And finally, we can divide both sides of equation (6) by P(B), giving us equation (7):

P(A | B) = P(B | A) P(A) / P(B)     (eq. 7)

This is Bayes' Theorem! Sometimes it is called "Bayes' rule", perhaps because it follows so directly from the definitions involved that it seems hardly to count as a theorem.

Why this is a big deal

The usefulness of Bayes' theorem becomes clearer if we forget about simple tables of sex, academic discipline and the like, and think about the relationship between evidence and theory. Suppose we have a set of alternative theories T[1], T[2], ..., and we've observed some evidence that bears on the choice among these theories, and we'd like to pick the theory that is most likely to be true given our observations. This leads us to want to define the conditional probability

P(T | E)

i.e. the probability of theory given evidence. Then if we could evaluate this quantity for every possible theory, we would have reduced our problem to the trivial matter of picking the maximum. Of course, the number of possible theories might be inconveniently large, and so we might have to look for a more efficient way to search for the maximum than exhaustive enumeration. However, there is often a more fundamental problem, which is that we can't find a way to estimate the desired conditional probability, at least not directly. For example, when we are transmitting messages subject to various noise and distortion processes, it can be fairly easy to approximate these processes with generative models, and therefore to estimate how likely an output signal is given a particular choice of message; but such models typically don't allow us a direct estimate of how likely a particular message is given an observed signal. And in general, models for the synthesis of signals tend to be a lot easier to build than models for the analysis of signals.
Luckily, we can apply Bayes' theorem to re-define what we want as

P(T | E) = P(E | T) P(T) / P(E)   (eq. 8)

In other words, if we want to know how probable a particular theory T[i] is, given some particular evidence E, we can calculate how likely evidence E would be if we assume T[i] to hold, multiply by the a priori probability of theory T[i], and divide by how likely we think evidence E is in and of itself.

If all we care about is finding the most probable theory (which is normal), we can forget about the normalizing factor P(E), because it will be the same for all alternative theories. As a result, the "best theory" (the theory with the largest posterior probability given the evidence) will be

ARGMAX[i] P(E | T[i]) P(T[i])

that is, the choice of subscript i that maximizes the expression P(E | T[i]) P(T[i]).

This way of thinking about things is very widely used in engineering approaches to pattern recognition. In particular, equation (8), with theory replaced by sentence and evidence replaced by sound, has been called "the fundamental equation of speech recognition." The Bayesian framework is also a natural one for models of the computational problems of perception. As Helmholtz pointed out a century and a half ago, what we perceive is our "best guess" given both sensory data and our prior experience. Bayes' rule shows us how to reconstruct this concept in formal mathematical and computational terms.
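As a concrete (if toy) illustration of equation (8) and the argmax shortcut, here is a short Python sketch. The "theories" are a professor's possible departments and the evidence E is "the professor is male"; the priors and likelihoods are invented numbers in the spirit of the faculty-urn example, not values taken from the tables.

```python
# Toy illustration of eq. (8): posteriors via Bayes' rule, plus the argmax
# shortcut that skips the normalizer. All numbers below are made up.

def posteriors(priors, likelihoods):
    """P(T|E) = P(E|T) P(T) / P(E), where P(E) = sum over T of P(E|T) P(T)."""
    joint = {t: likelihoods[t] * priors[t] for t in priors}
    p_e = sum(joint.values())            # the normalizing factor P(E)
    return {t: v / p_e for t, v in joint.items()}

priors = {"math": 0.506, "english": 0.494}           # P(T), invented
likelihoods = {"math": 493 / 506, "english": 0.425}  # P(E | T), invented

post = posteriors(priors, likelihoods)

# The argmax shortcut: P(E) is the same for every theory, so it can be
# dropped without changing which theory wins.
best = max(priors, key=lambda t: likelihoods[t] * priors[t])

print({t: round(v, 3) for t, v in post.items()}, best)
```

Note that `best` is computed without ever dividing by P(E): the normalizer cannot change the ranking, only the absolute posterior values.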
{"url":"http://www.ling.upenn.edu/courses/cogs501/Bayes1.html","timestamp":"2014-04-21T08:18:03Z","content_type":null,"content_length":"26714","record_id":"<urn:uuid:935b4c05-d086-47c0-8f49-1c708e161adb>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: polygon creation from lat lon info Date: Feb 15, 2009 11:28 AM Author: ulas Subject: Re: polygon creation from lat lon info This polygon is in a Lambert Conformal projection. i ve the central lat lon and truelat 1 and 2 info of the projection. does this information help? Walter Roberson <roberson@hushmail.com> wrote in message <q1Xll.8946$nu6.8246@newsfe21.iad>... > ulas im wrote: > > i am trying to create a polygon from four corners that i ve the lat lon info of. > > the object is to create the polygon first and then check if a number of > > points fall inside the polygon. > Which geoid model? > http://en.wikipedia.org/wiki/Geoid > Do you have access to the symbolic toolbox? If not, then you will only > be able to run the calculations -approximately-, and you are going to get > the wrong answers for some points, especially if the polygon is extremely > close to or straddles the North Pole or South Pole. > Yes, the implication of my questions *is* that in order to solve this > problem, you *must* take into account the curvature of the Earth. > Latitude and Longitude are -spherical- measures, so the calculation of > what is "inside" or "outside" must be done in spherical space.
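The thread is about MATLAB (where `inpolygon` does the planar test), but the geometry is easy to sketch; below is a hedged, flat-Earth ray-casting version in Python with made-up projected coordinates. As Walter's reply stresses, a planar test is only valid after projecting lat/lon to plane coordinates (e.g. with the Lambert Conformal parameters mentioned above); applied to raw lat/lon it can give wrong answers near the poles or across the 180° meridian.

```python
def point_in_polygon(x, y, poly):
    """Planar ray-casting test; poly is a list of (x, y) vertices in order."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Toggle for every edge that crosses the horizontal ray going right
        # from (x, y). Edges entirely above or below the ray are skipped.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical corners, already projected to planar coordinates:
corners = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
print(point_in_polygon(2.0, 2.0, corners))   # True
print(point_in_polygon(5.0, 5.0, corners))   # False
```

An odd number of edge crossings means the point is inside; that is all the toggle is counting.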
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=6612140","timestamp":"2014-04-19T15:07:05Z","content_type":null,"content_length":"2353","record_id":"<urn:uuid:91debc23-4a65-4ce0-8ad0-d92251f82417>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Impulse, momentum and efficiency to find height formula 1. The problem statement, all variables and given/known data A ramp system is set up. Due to friction the ramp has an efficiency of 82%. Two cars of equal mass are allowed to collide and car 1 starts from rest at height h1. The two stick together and coast to a height of h2. Derive a formula relating h1 to h2. 2. Relevant equations I=Δp, p=m*v, eff=(output/input)*100%, GPE=mgh, 3. The attempt at a solution Unfortunately, I cannot figure out almost anything based on this question. Even my friend's father, who is an engineer, could not understand. Just looking for some good help if any one is available.
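The thread stops without a solution, so the following is only a hedged sketch of one common reading of the problem: assume the 82% efficiency applies to each traversal of the ramp, and that "stick together" means a perfectly inelastic collision of equal masses, so momentum conservation gives v = v1/2 at the bottom. Chaining energy (down), momentum (collision), and energy (up) then gives h2 = eff^2 * h1 / 4.

```python
import math

def h2_from_h1(h1, eff=0.82, g=9.8):
    # Down the ramp: only a fraction eff of m*g*h1 survives as kinetic energy.
    v1 = math.sqrt(2 * eff * g * h1)
    # Perfectly inelastic collision of equal masses, one initially at rest:
    # momentum conservation gives v = (m * v1) / (2m) = v1 / 2.
    v = v1 / 2
    # Up the ramp: only a fraction eff of the kinetic energy becomes height.
    return eff * v**2 / (2 * g)

h1 = 1.0
print(h2_from_h1(h1))          # numeric h2 for h1 = 1 m
print(0.82**2 * h1 / 4)        # closed form: h2 = eff^2 * h1 / 4
```

With eff = 0.82 the ratio h2/h1 is about 0.168, and g cancels out entirely, which is a good sign the formula only depends on the efficiency and the collision.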
{"url":"http://www.physicsforums.com/showpost.php?p=3661667&postcount=1","timestamp":"2014-04-18T03:15:50Z","content_type":null,"content_length":"9218","record_id":"<urn:uuid:b30fe720-edac-45b1-85d7-f3341d7787ff>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
How do I type square root numbers on this site?
• Use the square root on the calculator.
• Use the Equation function as mentioned, or type in the code manually: \[\sqrt{25}\]
• How do I add an exponent to that?
• After you type in the square root function, click on the exponential function. It will open up a bracket for you to type in. Or again, you can do it manually by adding ^{ # } after the square root function: \[\sqrt{25}^{5}\]
• Thank you very much!
{"url":"http://openstudy.com/updates/5133b56be4b093a1d9490b5a","timestamp":"2014-04-20T08:41:22Z","content_type":null,"content_length":"38674","record_id":"<urn:uuid:e40dac36-8cfa-454d-84b6-66f75002a182>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
expected values
October 7th 2009, 07:54 PM
A book store orders only 3 copies of a certain math book per month, because the manager does not believe that more will be sold. If the number of requests follows a Poisson distribution with mean 36 per year,
1. what is the expected number of copies sold every month?
2. how many copies should the store manager order such that the probability of running out of copies is less than 5%?
October 8th 2009, 12:20 AM
mr fantastic
1. Let X be the random variable number of books sold per month. Then $\lambda = E(X) = \frac{36}{12} = 3$.
2. You require the value of x such that $\Pr(X > x) < 0.05 \Rightarrow \Pr(X \leq x) > 0.95$. So find the smallest value of x such that $\sum_{i = 0}^x\frac{e^{-3} 3^i}{i!} > 0.95$. I suggest using technology.
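The "use technology" step can be sketched with nothing but the Python standard library, summing the Poisson pmf for λ = 3 until the cumulative probability exceeds 0.95 (this check is my own addition, not part of the original reply):

```python
import math

lam = 36 / 12  # mean requests per month

def poisson_cdf(x, lam):
    """P(X <= x) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(x + 1))

# Smallest x with P(X <= x) > 0.95, i.e. P(running out) = P(X > x) < 5%
x = 0
while poisson_cdf(x, lam) <= 0.95:
    x += 1

print(x, round(poisson_cdf(x, lam), 4))  # -> 6 0.9665
```

So ordering 6 copies per month keeps the chance of running out below 5%.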
{"url":"http://mathhelpforum.com/advanced-statistics/106775-expected-values-print.html","timestamp":"2014-04-17T04:21:21Z","content_type":null,"content_length":"5433","record_id":"<urn:uuid:3a293c62-8d57-4322-be46-83285baeb5ec>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics
Qualitative analyses of SIS epidemic model with vaccination and varying total population size. (English) Zbl 1045.92039
Summary: An SIS epidemic model with vaccination, temporary immunity, and varying total population size is studied. Three threshold parameters $R_0$, $R_1$, and $R_2$ are identified. The disease-free equilibrium is globally stable if $R_0 \le 1$ and unstable if $R_0 > 1$; the endemic equilibrium is globally stable if $R_0 > 1$. The disease cannot break out if $R_1 < 1$; if $R_1 > 1$ and $R_0 \le 1$, the disease may break out when the fractions of the susceptible and the infectious satisfy some condition. The population ultimately becomes extinct and the disease always persists in the population if $R_0 > 1$ and $R_2 \le 1$. There is a truly endemic disease if $R_0 > 1$ and $R_2 > 1$. The global stability of the disease-free equilibrium and the existence and global stability of the endemic equilibrium are proved by means of LaSalle's invariance principle, the method of estimating values, and Stokes' theorem, respectively. The results with and without vaccination are compared, and the measures and effects of vaccination are discussed.
92D30 Epidemiology
34D23 Global stability of ODE
{"url":"http://zbmath.org/?q=an:1045.92039","timestamp":"2014-04-20T06:05:01Z","content_type":null,"content_length":"22978","record_id":"<urn:uuid:06607f32-1ec1-4393-9ae3-cb63e34a5789>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
Honors Track
Departmental Honors Track - Mathematics
The UCCS mathematics department offers a special honors track to qualified math undergraduate students who are already pursuing a BA or BS in Math degree at UCCS. Admission to the honors track is by application only; an application form is available from the math department website. A letter of recommendation from a faculty member in the Math Department is required. Students should normally apply no later than the beginning of the first semester of their junior year.
Honors Track Application:
Requirements for graduating with Departmental Honors in Mathematics:
• Maintain a minimum 3.5 GPA in all Math courses and an overall 3.0 GPA
• Complete five 4000 or higher level Math courses with at least a 3.3 GPA in these courses
• Complete a written report based on an undergraduate research project, a senior project in an advanced course, or a senior thesis, under the supervision of a faculty advisor, and approved by the Undergraduate Committee.
Please contact the UCCS Math Department Undergraduate Chair: Dr. Radu Cascaval
{"url":"http://www.uccs.edu/math/undergraduate-programs/math-honors-track.html","timestamp":"2014-04-18T21:06:24Z","content_type":null,"content_length":"14281","record_id":"<urn:uuid:2503f105-65d6-45f4-9e39-059bede16d84>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
college,stats/ PLZ HELP:( need it within nxt 6hrs Number of results: 81,550 college,stats/ PLZ HELP:( need it within nxt 6hrs never mind i got it lol Wednesday, April 7, 2010 at 10:50pm by Sandra college,stats/ PLZ HELP:( need it within nxt 6hrs Among Canadian households, 24% have telephone answering machines. If a telemarketing company contacts 2500 households find the probability that between 625 and 650 households, inclusive, have answering machines. Wednesday, April 7, 2010 at 10:50pm by Sandra Stats....plz help.... can u plz help me with the other problem i posted? i need help so bad! thank u!! Sunday, October 16, 2011 at 10:03pm by Jennifer Social Studies I am doing an ABC book to for Florida and i need a word for the letter X to. IT is due MAY 26 , 2010!!!!!!!!!!!!!!!! I need HELP PLZ PLZ PLZ PLZ PLZ THANKS :) Friday, May 7, 2010 at 3:55pm by angel social studies plz plz plz plz i NEED help! awnser this question plz! my poster is do tomorrow! i NEED help! Sunday, March 30, 2008 at 8:15pm by mia In a sample of 500 college students fifty percent of all college students attend schools within 50 miles of their homes. The probability that the population proportion will be between 0.45 and 0.55 is (1) 0.4875 (2) 0.9818 (3) 0.4909 (4) 0.9750 (5) 0.5000 Wednesday, March 31, 2010 at 1:23pm by sam I NEED HELP!!:(!!!! HELP ME PLZ!! I GOT A FEW QUESTIONS SO what is a cell part that has the funcion of a gel-like material inside cells... and what releases energy into the cell.?? plz plz plz plz HELP!!! Tuesday, October 20, 2009 at 8:48pm by i really need help!! life orientation guyz plz....Identify one enviromental or human factor that causes ill, accident,crises,and diseaster within the community or any other community within South Africa.plz Wednesday, June 1, 2011 at 9:17am by Thobeka A human resources manager at a large company wants to estimate the proportion of employees that would be interested in reimbursement for college courses. 
If she wishes to be 95% confident that her estimate is within 5% of the true proportion, how many employees would need to ... Thursday, April 15, 2010 at 8:53pm by Chad word unscramble plz plz plz plz help me have to give it tommorow. it has to school items only. the four words are that need to be unscramble. 1. EEFILNPPTT 2. AEHIRSSTTW 3.EEHKOSSUY 4. CEGHLMOSTY SO PLZ HELP I REAlly wannt to know what it is plz help. Thursday, September 28, 2006 at 3:34pm by shruti Stats....plz help.... First you need to rank the data in increasing order. Sunday, October 16, 2011 at 10:03pm by Ms. Sue work and time Convert the rates to work/mandays or work/womandays, etc. So rate1=job/12*15*8 manhours rate2=job/12*24*8 femalehours rate2=job/12*36*8 boyhours. Finally, the equation is 9/4 job = Xmen*6hrs/ day*30days*rate1+ 12women*30days*6hrs/day*rate2 + 6 boys*30days*6hrs/day *rate3 solve ... Sunday, March 8, 2009 at 11:55am by bobpursley the ability to grow in size is a characteristic of living organism .Although an icicle may grow in size over time,it is considered nonliving because there is 1)an incerase in matter ,but no increase in the number of icicle. 2)an interaction between the icicle and the ... Tuesday, February 12, 2013 at 9:14pm by tania sharmin Climate Graphs i need a climate graph for japan now plz my teacher will kill me plz plz plz Saturday, January 13, 2007 at 10:51pm by harry college algebra word problem the rate for each is 1job/6hrs,or 1job/7hours timetogehter=1job/combined rate = 1/(1/6 + 1/7) go for it. Monday, October 20, 2008 at 5:01pm by bobpursley Stats....plz help.... ok thank u!! Sunday, October 16, 2011 at 10:03pm by Jennifer Stats....plz help.... Sunday, October 16, 2011 at 10:03pm by Jennifer Stats....plz help.... Sunday, October 16, 2011 at 10:03pm by Ms. Sue Stats....plz help.... You're welcome. Sunday, October 16, 2011 at 10:03pm by Ms. Sue Stats....PLZ HELP.... What does x stand for? 
Tuesday, October 25, 2011 at 8:37pm by PsyDAG the dean of a college is interested in the proportions of graduates from his college who have a job offer on graduation day Sunday, December 19, 2010 at 6:49pm by Anonymous eng- need it fast plz need a word that has all these letters d, l, b, a , e, e, n, b plz been stuck for a while grade 2 Monday, October 19, 2009 at 6:36pm by nany The college student's average age is 33 and 42% are minority. How do I verify these claims without using a college website. Tuesday, September 6, 2011 at 6:31pm by Kim Plz,plz help! Can you help me write: real-world scenario in which you would write an inequality rather than an equation? Plz, I really need help! :D Tuesday, January 21, 2014 at 2:13pm by Princess Anna Lattice method/math I really really dont like this and i need it drawn out. plz plz u would be my best friend plz! Tuesday, September 19, 2006 at 8:25pm by Mike College stats Please help, I would appreciate an explanation on how you got the answer so I can understand it, being that I am so confused. The Denver Post stated that 80% of all new products introduced in grocery stores fail (are taken off the market) within 2 years. A grocery store chain ... Friday, July 12, 2013 at 6:16pm by penelope Stats....plz help.... Right. Multiply: 0.56 * 9 = ? Sunday, October 16, 2011 at 10:03pm by Ms. Sue You will need to "compute the values" yourself, but I see one thing to include in your comments: Both $75 and $100 are within the "standard deviation of $22." What do you think that means? Monday, December 31, 2012 at 11:17am by Writeacher grade 10 history I have a SUPER HUGE project due called National History day and need lots of help with it, plz plz plz help!!!!! Sunday, January 20, 2013 at 7:06pm by Macy stats. plz help i have exams in two days! Tuesday, May 25, 2010 at 8:36pm by christopher someone plz plz plz plz plz plz plz plz plz help me! 
Friday, February 8, 2013 at 1:24pm by corie plz plz plz help in science Energy transformation is the process of changing energy from one form to another. This process is happening all the time, both in the world and within people. When people consume food, the body utilizes the chemical energy in the bonds of the food and transforms it into ... Tuesday, December 13, 2011 at 1:45pm by damon sorry i would not do that again but can you please answer these questions because i really need them because i have a exam tommorow at 1 pm. i posted them in a series because i have so less time to study, that i was not even thinking write and i just posted them all but can ... Thursday, January 22, 2009 at 5:59pm by gagan stats. plz help i have exams in two days! how did u come up with it?thanks Tuesday, May 25, 2010 at 8:36pm by shasha Stats....plz help.... 5.04? but how did u come up with multiplying it by 9? because it's the highest number? Sunday, October 16, 2011 at 10:03pm by Jennifer From previous records, a shipping company knows that the cost to deliver a small package within 24 hours is $15.50. The company charges $17.95 for shipping, but guarantees to refund the full charge if the delivery is not made within 24 hours. If the company fails to deliver ... Monday, April 29, 2013 at 6:31pm by Megan stats of biologists a manufacturer claims that the life time of a certain brand of batteries has a varaince of 5000(hours)^2. a sample of 26 has a variance of 7200(hours)^2. assuming that it is reasonable to treat these data as a random sample from a normal population, test the manufacturer's ... Thursday, February 14, 2013 at 8:48am by Aboli hi people actually i need 2 do a project on sociology for 20 marks n i cant decide which on a topic ca u all plz help n v also need 2 have a survey or 30-40 interviews plz plz plz plz help its urgent It would really help us teachers if you would write in standard English so we... 
Monday, August 13, 2007 at 10:47am by aditi college algebra-plz help Yes, my mistake.find the center, radius and graph of circle. then find intercepts 3(x-5)^2+3y^2=12 plz show work Sunday, December 9, 2012 at 6:50am by James In the past a certain college has found that 35% of the students will grad within 4 years. A random sample of 450 students at this college is taken and going to be followed until graduation. A. What is the sampling distribution of the statistic of interest? B. What is the ... Sunday, June 12, 2011 at 11:16pm by danny What percentage of college students are attending a college in the state where they grew up? Let p be the proportion of college students from the same state as that in which the college resides. If no preliminary study is made to estimate p, how large a sample is needed to be ... Wednesday, August 1, 2012 at 1:40pm by Anonymous career aw Can someone plz get a website all about photography for me I really need it for a project that is due tomorrow that my teacher assigned but i need a little bit more about what their daily tasks are?!?!?!?!?!?!? I REALLY NEED HELP!!!!! PLZ SOMEONE!!! THANXS Tuesday, September 25, 2007 at 6:06pm by Stacy A water balloon is shot into the air so that its height h, in metres, after t seconds is h = —4.9t^2 + 27t + 2.4 a)How high is the balloon after 1 s? b)For how long is the balloon more than 30 m high? c)What is the maximum height reached by the balloon? d)When will the balloon... Friday, January 23, 2009 at 9:21am by gagan To estimate the mean score of those who took the Medical College Admission Test on your campus, you will obtain the scores of an SRS of students. From published information you know that the scores are approximately Normal with standard deviation about 6.4. You want your ... 
Wednesday, November 6, 2013 at 4:18pm by Aman Math plz fast i really need it okay but can you expain it better plz Thursday, February 21, 2013 at 5:20pm by Angie Math plz fast i really need it i don't get it can you explain it better plz Thursday, February 21, 2013 at 5:20pm by Angie math need help plz 1/3+(-5/6)+(-1/2) can u help me solve this plz by using steps. thank u Monday, August 24, 2009 at 7:13pm by scooby91320002 college stats 1. C 2. D 3. A Friday, July 12, 2013 at 12:57am by Anonymous plz plz answer i am in a true need of it!!!! Friday, January 17, 2014 at 10:02am by assassin 8 times. 90*4=360 minutes=6hrs Thursday, February 5, 2009 at 7:11pm by annmay physics,plz.....give me answer as soon as possible plz plz plz.....help me Wednesday, November 24, 2010 at 11:35am by sweetie college algebra-plz help a circle has the equation 3(x-5)^2_3y^2=12. Find the center and radius and graph the circle. find the intercepts, if any. plz show work Sunday, December 9, 2012 at 6:50am by James i dunno. i think it just is cus deyr like nxt 2 each ova Friday, January 30, 2009 at 3:42pm by Nabiha i need it fast plz plz Monday, October 19, 2009 at 6:28pm by nany Plz, plz help! SS Thanks you. I no longer need help with #2 and #3. Wednesday, February 5, 2014 at 4:29pm by Anonymous Math ....urgent help i just dont get this elimination method ..... plz help me plz.....teach me how to keep the x and y in a parallel line. and which number to multiply.....plz help -5x-4y=-11 10x=-6-y help me plz...... Wednesday, January 7, 2009 at 5:01pm by vero the volume of a rectangular solid is given by expressions given below.in each case find the dimentions of the solid. a)15x^2-51x+18. b)x^3+13x^2+32x+20. c)x^2-5x/2-3/2. d)x^4+x^3-x-1 plz everyone help me out.till 2 o'clock...plz plz plz Wednesday, May 30, 2012 at 6:03am by PLZ HELP ME NOW... Math plz fast i really need it to get a 3, you need 2,1 or a 1,2 to get 4, you need 1,3, 3,1;2,2 b. 
to get 5, 4,1;1,4;2,3;3,2 to get a 9, 5,4;4,5;6,3,3,6 equal probablility for 5 and 9 c. 5 you need 4,1;1,4; 2,3;3,2 8 tyou need 6,2;2,6, 5,3,3,5, 4,4 d. and so one. Thursday, February 21, 2013 at 5:20pm by bobpursley a boy inherits genes for tallness,but his growth is limited as a result of poor nitrition. this is an example of 1)an inherited disorder. 2)environmental influence on gene expression . 3)expression of a hidden trait 4)a characterstic controlled by more than one pair of genes. ... Friday, February 8, 2013 at 12:44pm by tania sharmin I NEED HELLLLPPPPP!!!! Ive been looking all over and i cant find out if there is such thing as a carbon dioxide deficiency in hemoglobin (red blood cells)...i know theres an oxygen one called anemia but i really need one for carbon dioxide as a waste from the body....plz plz plz plz help me!! See ... Sunday, April 22, 2007 at 3:43pm by said so Statistics- quick! Is this right? 1) looks correct For 2) You have an standard error of 500, and are asked what is likelihood of similar sample being within 1000 or 2.0 standard deviations away from the mean. Look up 2.0 in your cumulative normal distribution table (probably in the back of your stats book). I ... Thursday, November 1, 2007 at 8:20pm by economyst I need to observe and identify variables and I'm in 6th grade!!! Help, this work is killing me....It's scenario 1-4 and I have Mr Jones plz help plz Friday, August 31, 2012 at 8:42pm by Destiny I need to find all solutions of the given equations for the indicated interval. Round solutions to three decimal places if necessary. 1.) 3sin(x)+1=0, x within [0,2pi) 2.) 2sin(sq'd)(x)+cos(x)-1=0, x within R 3.) 4sin(sq'd)(x)-4sin(x)-1=0, x within R 4.) sin(x)+1=cos(x), x ... Sunday, June 21, 2009 at 8:55pm by Emily College Stats Thanks i see where i went wrong Monday, January 19, 2009 at 7:22pm by Alex College Stats Is .09 less than .05? 
Sunday, May 8, 2011 at 11:46pm by Carol College Stats .09 > .05 Sunday, May 8, 2011 at 11:46pm by PsyDAG stats college Which one has the highest percentage? Friday, July 12, 2013 at 12:52am by PsyDAG Lang. Arts PLZ HELP!!D: I need some info about how the building of dams effect whitewater activities such as kyaking and conoeing..!!!plz help Thursday, February 26, 2009 at 5:46pm by Alysha All the following are measures of variability or dispersion within a set of data except? range, median, standard deviation, variance Saturday, April 13, 2013 at 2:31pm by Linda randhills college life orientation 3 environmental health hazard that cause ill health, crises, and or disasters within your community or any other community within south Africa and globally? Tuesday, February 19, 2013 at 6:11pm by kabelo college stats (incomplete) Lacking needed data. Friday, July 12, 2013 at 6:36pm by PsyDAG This week we practice with Binomial Distribution. You can use Appendix Table E or Excel Function Binomdist. About 30% of adults in United States have college degree. (probability that person has college degree is p = 0.30). If N adults are randomly selected, find probabilities... Thursday, November 10, 2011 at 5:18pm by mary local and state gov. someone plz i really need help plz i cant find it in my text so someone plz!!! For a democracy right??? search google: branches of Government Click Bens Guide to Government for kids (1st or 2nd link) Then when you get there select your grade level in the ballon... This shows ... Sunday, November 12, 2006 at 7:58pm by Cole No the question already post it, plz help me ...plz ...plz ...any one... my native language is not english.. plz help me.. Saturday, July 24, 2010 at 10:46pm by shan i need help putting this para into easier words plz Soldiers and military personnel are too often glorified and widely regarded as heroes, which promotes the idea of war within society. 
However, its harsh realities are left largely ignored, and many are fooled by the ... Saturday, March 31, 2012 at 8:33pm by Navroz Integrated Physics and Chemistry dang! i need the answer to! someone plz answer this question! plz and thank yu :) Monday, March 21, 2011 at 11:43am by Anonymous science i really really need help 3 is wrong any other answer plz Thursday, December 12, 2013 at 1:01pm by john???urgent plz read carfully I need help on the same question.Can someone help,plz??I know it can't be a simile because it has it have 'like' or 'as' in the sentence and it don't.That pretty much all I know.someone plz help me. Monday, March 11, 2013 at 7:07pm by Youngswagger College Business Stats Put your subject in the "School Subject" space, so experts in that area will answer your post. However, this is not my area of expertise. We do not do your work for you. You need to earn your own grade. However, we would be willing to check your work. Tuesday, April 13, 2010 at 12:48am by PsyDAG will someone plz help with this....thanks Provide the titles, authors, publishers, and copyright dates of two picture books concerning children with disabling conditions. The copyright dates should be within the past five years. Note: The books’ copyright dates should fall ... Tuesday, July 20, 2010 at 3:40pm by Ms.Dionn How many students must we sample if we want to be within 4% of the true proportion of female students at DeVry University when using a 95% confidence interval? Thursday, October 7, 2010 at 9:03am by kevin college Stats If Y has a geometric distribution with success probability p, show that P(Y= an odd integer)= p/(1-(q^2)) Tuesday, October 20, 2009 at 11:13pm by james Math PLZ help gtg to bed i need right NOW!! What is the answer to 10 5/12 divided by 1 3/5???????? plz help me i have to go to bed like in 1 min. thanx Thursday, October 30, 2008 at 10:01pm by Emily 1) what are five skills scientist use to learn about the world? 2) what are inferences based on? 
3) why do scientist make models? ¢¾¢¾plz, plz help me.... i totally need help on this¢¾¢¾ Monday, September 21, 2009 at 7:39pm by elenny Sorry, doesn't need turned in within an hour but does need to be turned in asap cause it is late. Tuesday, November 17, 2009 at 10:08pm by LeAnn/Needs turned in within an hour How long will it take an investment to triple in value if the interest rate is 4% compounded continuously? i know its set up as 3x=x(e^rt) but im confused as what to do nxt Friday, November 6, 2009 at 1:25am by helpless How long will it take an investment to triple in value if the interest rate is 4% compounded continuously? i know its set up as 3x=x(e^rt) but im confused as what to do nxt Friday, November 6, 2009 at 1:25am by helpless college stats in a sample of 159,949 first year college students, the national survey of student engagement reported that 39% participated in community service or volunteer work.find the margin of error for 99% confidence. what formula do I use? I know the 99% interval for z is 2.576 n is ... Thursday, December 5, 2013 at 7:49pm by hershi Could someone pretty plz plz plz help me...What are the 2 biggest parts of the human brain?? Wednesday, March 11, 2009 at 3:46pm by Logan:) college stats Post a null hypothesis that would use a t test statistical analysis Thursday, December 5, 2013 at 8:16pm by carla peterson Do you know the rule? For example, 68% are with in one standard deviation of the mean in both directions, 95% are within 2 SD and 99.7% are within 3 SD. a. in a normal distribution, mean = median. What does that tell you? b. Mean + 1 SD = ? I have started you out, but we do ... Monday, September 20, 2010 at 8:38pm by PsyDAG the volume of a rectangular solid is given by expressions given below.in each case find the dimentions of the solid. a)15x^2-51x+18. b)x^3+13x^2+32x+20. c)x^2-5x/2-3/2. 
d)x^4+x^3-x-1 plz everyone help me out.till 2 o'clock...plz plz plz Wednesday, May 30, 2012 at 4:00am by parul CHemistry....Please double check DrBobb ya i guess so ..can u do me favour plz ... can u plz explain ur answer in detail i didnt either get it .. coz i copy paste from some other website.. plz can u explain me how to do this question .. sine i need to learn , in order to be able to rite my quiz lol ..ok thnks really... Saturday, January 24, 2009 at 1:06pm by Casandara NEED HELP PLZ!!! STATS Three cards are selected, one at a time from a standard deck of 52 cards. Let x represent the number of tens drawn in a set of 3 cards. (A) If this experiment is completed without replacement, explain why x is not a binomial random variable. (B) If this experiment is completed... Sunday, October 23, 2011 at 11:48pm by Jennifer How large a sample should be taken if the population mean is to be estimated with 99% confidence to within $72? The population has a standard deviation of $800 Please Help, im stuck Monday, November 17, 2008 at 1:47pm by ks college statistics Cab you answer l30 questions of bus stats in like 2 hrs central time Tuesday, July 28, 2009 at 9:07pm by hiddenkitten plz answer the question we were talking about plz plz. Go back to the problem. I got 184 but the worksheet answer says 136 which is correct?? plz help!! thx Monday, September 13, 2010 at 5:41pm by Happy Face i need soooo mush help plz help me to do it this is the question:a carpenter needs to cut a plank of wood that is 3.75m long in to 5 equal pieces. what percentage of the plank is each piece? plz feed Tuesday, November 15, 2011 at 11:34am by Bella Olson amy can rake leaves in her front porch in 2hrs. Her younger sister can do it in 6hrs. How long will it take them both to do the job if the work together? Monday, May 7, 2012 at 4:13pm by maram OK, let's look at what you have and what you need in table form: ............ SS ...... df ..... MS ..... F Between..... ? ....... 2 ...... 20 ..... 
4 Within...... ? ....... 42 ..... ? Total ...... ? ....... 44 To find SS between, take df between times MS between. To find SS ... Wednesday, April 2, 2008 at 2:08pm by MathGuru
Bayesian Logistic Regression
October 14th 2008, 05:28 AM

I have the following logistic regression problem I am trying to solve using a Bayesian approach.

"We consider a logistic regression setting where the objective is to model Pij, the probability of an occurrence for the jth individual in the ith group, i = 1, 2, ..., I, j = 1, 2, ..., J. We assume

    log(Pij / (1 - Pij)) = beta0 + beta1*Xi + beta2*Zij

and seek inference regarding beta1, the coefficient of the population-level covariate, and beta2, the coefficient of the individual-level covariate. In particular, we set I = 2 and let Xi = 0, 1, indicating which of the two groups was sampled, and Zij ~ U(0,1). Priors: beta0 ~ N(0,10), beta1 ~ U(1,1.5), beta2 ~ U(1,2)."

1. I am not really sure of the use of the variable Xi in the model. Is this a standard way of writing a logistic regression model? Some references would be useful.

2. Given the priors, can I use R to code this problem (a Gibbs sampler with adaptive rejection sampling - I have to use this)? Are there any standard R packages available to do this? Some pointers would be helpful. Thanks a lot.
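As a sanity check on the posterior (separate from the Gibbs-with-adaptive-rejection requirement), the model is easy to explore with a simple random-walk Metropolis sampler. The sketch below is Python rather than R, the data are simulated, the uniform priors are treated as hard bounds, and N(0,10) is assumed to mean variance 10 — all of these are my assumptions, not part of the original problem:

```python
import math
import random

random.seed(1)

def logistic(u):
    # Numerically stable logistic function.
    if u >= 0:
        return 1.0 / (1.0 + math.exp(-u))
    e = math.exp(u)
    return e / (1.0 + e)

# Simulate data: I = 2 groups (Xi = 0 or 1), J individuals per group, Zij ~ U(0,1).
n_groups, J = 2, 100
b_true = (0.5, 1.2, 1.5)          # illustrative "true" values inside the prior ranges
data = []
for i in range(n_groups):
    x = float(i)
    for _ in range(J):
        z = random.random()
        p = logistic(b_true[0] + b_true[1] * x + b_true[2] * z)
        data.append((x, z, 1 if random.random() < p else 0))

def log_post(b0, b1, b2):
    # Uniform priors as hard bounds; beta0 ~ N(0, variance 10) assumed.
    if not (1.0 <= b1 <= 1.5 and 1.0 <= b2 <= 2.0):
        return float("-inf")
    lp = -b0 * b0 / (2.0 * 10.0)
    for x, z, y in data:
        p = logistic(b0 + b1 * x + b2 * z)
        p = min(max(p, 1e-12), 1.0 - 1e-12)   # guard against log(0)
        lp += math.log(p if y else 1.0 - p)
    return lp

# Random-walk Metropolis over (beta0, beta1, beta2).
chain = []
b = [0.0, 1.2, 1.5]
cur = log_post(*b)
for _ in range(2000):
    prop = [bk + random.gauss(0.0, 0.1) for bk in b]
    lp = log_post(*prop)
    if random.random() < math.exp(min(0.0, lp - cur)):
        b, cur = prop, lp
    chain.append(tuple(b))

print(sum(s[1] for s in chain) / len(chain))   # posterior mean of beta1 (rough)
```

Because out-of-bounds proposals score negative infinity, the chain never leaves the prior support for beta1 and beta2.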
GRE Math Quantitative Comparison Practice Test 01

Each question asks you to compare two quantities. The answer choices are the same throughout:
A. The quantity on the left is greater
B. The quantity on the right is greater
C. Both are equal
D. The relationship cannot be determined without further information

1. The average (arithmetic mean) of four numbers is 36.
   Left: the sum of the same four numbers. Right: 140.
2. n is an integer > 0. (The quantities to compare were given in a figure.)
3. Left: the diagonal of a rectangle. Right: half the perimeter of the same rectangle.
4. x + y = 5 and y - x = 3. (The quantities to compare were given in a figure.)
5. Left: the distance between the points with rectangular coordinates (0,5) and (0,10). Right: the distance between the points with rectangular coordinates (1,8) and (-3,5).
6. (This question was given entirely in a figure.)
7. A fair coin is tossed three times.
   Left: the chances of getting 3 heads. Right: the chances of getting no heads.
8. Left: the percentage of the multiples of 2 that are also multiples of 5. Right: the percentage of the multiples of 5 that are also multiples of 2.
9. Left: the area of a right-angled triangle with sides 6, 8, and 10. Right: twice the area of a right-angled triangle with sides 3, 4, and 5.
10. JL = KM. (The quantities to compare were given in a figure.)
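Several of these comparisons can be checked directly. A short sketch for questions 5, 7, and 9:

```python
import math

# Q5: distance (0,5)-(0,10) vs distance (1,8)-(-3,5)
d_left = math.dist((0, 5), (0, 10))    # 5.0
d_right = math.dist((1, 8), (-3, 5))   # sqrt(16 + 9) = 5.0 -> equal, answer C

# Q7: three tosses of a fair coin
p_three_heads = 0.5 ** 3               # 1/8
p_no_heads = 0.5 ** 3                  # 1/8 -> equal, answer C

# Q9: area of the 6-8-10 right triangle vs twice the area of the 3-4-5 one
area_left = 6 * 8 / 2                  # 24
area_right = 2 * (3 * 4 / 2)           # 12 -> left is greater, answer A

print(d_left, d_right, p_three_heads, p_no_heads, area_left, area_right)
```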
Inequalities Worksheets

Print solving inequalities worksheets. Print absolute value inequalities worksheets. Print quadratic inequalities worksheets. Print linear inequalities worksheets.

• Graphing Quadratic Inequalities 1 Worksheet
• Graphing Quadratic Inequalities 2 Worksheet
• Graphing Quadratic Inequalities 3 Worksheet

This page lists all our printable worksheets on inequalities. In mathematics, an inequality is a statement about the relative size or order of two objects, or about whether they are the same or not. The notation a < b means that a is less than b. The notation a > b means that a is greater than b. The notation a ≠ b means that a is not equal to b, but does not say that one is greater than the other, or even that they can be compared in size. In each statement above, a is not equal to b. These relations are known as strict inequalities. The notation a < b may also be read as "a is strictly less than b". In contrast to strict inequalities, there are two types of inequality statements that are not strict: the notation a ≤ b means that a is less than or equal to b (or, equivalently, not greater than b), and the notation a ≥ b means that a is greater than or equal to b (or, equivalently, not smaller than b).
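As a worked example of the quadratic case these worksheets cover: to solve x² − x − 6 ≤ 0, find the roots of x² − x − 6 = 0 and check the sign between them. A sketch (my own example, not taken from a worksheet):

```python
import math

# Solve x^2 - x - 6 <= 0 via the quadratic formula.
a, b, c = 1, -1, -6
disc = b * b - 4 * a * c                 # discriminant = 25
r1 = (-b - math.sqrt(disc)) / (2 * a)    # -2.0
r2 = (-b + math.sqrt(disc)) / (2 * a)    # 3.0

def satisfies(x):
    """True where the upward-opening parabola is at or below zero."""
    return a * x * x + b * x + c <= 0

# The parabola opens upward, so the solution set is the closed interval [r1, r2].
print(r1, r2, satisfies(0), satisfies(4))
```

Points between the roots satisfy the inequality; points outside do not.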
Probability Transition Matrices

Date: 02/10/99 at 04:50:53
From: Dee Skor
Subject: Probability matrix

Hi Dr. Math,

I have some questions on setting up transition matrices and using them to find probabilities. I would also appreciate it if you could explain how you are doing one or two of them. Here are the questions:

1) There are 3 coffee brands that dominate the market: Brand A, Brand B, and Brand C. People switch from one brand to another all the time. If they use Brand A this week, there is 0.7 probability they will continue to use it next week, 0.2 probability they will switch to Brand B, and 0.1 probability they will switch to Brand C. If they are now using Brand B, there is 0.4 probability that they will switch to Brand A, 0.3 probability they will stay with Brand B, and 0.3 probability they will switch to Brand C. If they are now using Brand C, there is 0.2 probability they will switch to Brand A, 0.3 probability they will switch to Brand B, and 0.5 probability they will stay with Brand C.
a) How can I express this as a transition matrix?
b) If a family is using Brand B, what is the probability they will be using Brand A 3 weeks later?
c) If a family starts with Brand B, what is the probability they will be using Brand C 4 weeks later?

2) Three people, John, Bill, and Peter, throw a ball to each other. There is a probability of 1/3 that John will throw the ball to Bill. There is a probability of 1/2 that Bill will throw the ball to Peter. There is 1/4 probability that Peter will throw the ball to John.
a) How can I express this as a transition matrix?
b) Assuming the ball starts with Bill, what is the probability that he will have it back after 2 throws?
c) Assuming the ball starts with Peter, what is the probability Bill will have it after 3 throws?

3) If it rains today, there is a probability of 1/3 that it will rain tomorrow. If it doesn't rain today, there is a probability of 1/4 that it will rain tomorrow.
a) How do I set up a transition matrix?
b) If it doesn't rain on Wednesday, what is the probability that it will rain on Saturday?

Thanks in advance!

Date: 02/10/99 at 16:17:55
From: Doctor Anthony
Subject: Re: Probability matrix

In setting up the transition matrix, we set the columns to be "from" and the rows to be "to." This matrix represents the probabilities that we go from one situation to another situation in one step. To find the probabilities for n steps, we need to raise the original matrix to the nth power. This will be demonstrated in answering your questions.

Question 1: The coffee transition matrix is

              A    B    C
       A  [ 0.7  0.4  0.2 ]
    TO B  [ 0.2  0.3  0.3 ]
       C  [ 0.1  0.3  0.5 ]

Note that the sum of each column is 1. If a family is using Brand B, to find the probability they will be using Brand A 3 weeks later, we need to cube the above matrix:

              A     B     C
       A  [ .541  .482  .436 ]
    TO B  [ .241  .254  .264 ]
       C  [ .218  .264  .300 ]

So in 3 weeks the probability that someone will go from B to A is 0.482.

If a family starts with Brand B, to find the probability they will be using Brand C 4 weeks later, we raise the matrix to the 4th power. This gives us

              A     B     C
       A  [ .519  .492  .471 ]
    TO B  [ .246  .252  .256 ]
       C  [ .235  .256  .273 ]

Then the probability that someone starting with Brand B will be using Brand C 4 weeks later is 0.256.

Question 2: Here is the ball-throwing transition matrix. Notice that to fill in the blanks, we use the fact that a person does not throw the ball to himself, and that the columns must sum to 1.

               John  Bill  Peter
       John  [  0    1/2   1/4  ]
    TO Bill  [ 1/3    0    3/4  ]
       Peter [ 2/3   1/2    0   ]

To find the probabilities after two throws, we need to square the matrix:

               John   Bill   Peter
       John  [ 1/3    1/8    3/8  ]
    TO Bill  [ 1/2   13/24   1/12 ]
       Peter [ 1/6    1/3   13/24 ]

Note that in this case, none of the entries is 0 because there is a chance that the ball will come back to the original thrower in 2 throws. So assuming the ball starts with Bill, the probability that he will have it back after 2 throws is 13/24.
For three throws, we cube the matrix:

               John  Bill  Peter
       John  [ .292  .354  .177 ]
    TO Bill  [ .236  .292  .531 ]
       Peter [ .472  .354  .292 ]

Then the probability the ball will go from Peter to Bill in 3 throws is 0.531.

Question 3: The rain transition matrix is

                Rain  No Rain
    TO Rain    [ 1/3    1/4 ]
       No Rain [ 2/3    3/4 ]

Each day is a "step". So if it doesn't rain on Wednesday, to find the probability that it will rain on Saturday (3 days later), we must cube the matrix:

                Rain  No Rain
    TO Rain    [ .273   .273 ]
       No Rain [ .727   .727 ]

So going from no rain on Wednesday to rain on Saturday has a probability of 0.273.

- Doctor Anthony, The Math Forum
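Doctor Anthony's answers are easy to verify numerically. A minimal pure-Python sketch for the coffee-brand question (columns are "from", rows are "to", matching the convention above):

```python
def matmul3(m, n):
    """Multiply two 3x3 matrices stored as lists of rows."""
    return [[sum(m[i][k] * n[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Coffee transition matrix: columns = from A, B, C; rows = to A, B, C.
M = [[0.7, 0.4, 0.2],
     [0.2, 0.3, 0.3],
     [0.1, 0.3, 0.5]]

M3 = matmul3(matmul3(M, M), M)   # three weeks = cube the matrix
M4 = matmul3(M3, M)              # four weeks = fourth power

print(round(M3[0][1], 3))   # P(brand A after 3 weeks | start at B) = 0.482
print(round(M4[2][1], 3))   # P(brand C after 4 weeks | start at B) = 0.256
```

Each column of every power still sums to 1, which is a handy check that the multiplication is set up correctly.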
This web page documents some interesting properties and solutions for peg solitaire on the square 9x9 board.

It is unfortunate that a standard chess board is just slightly too small to play 9x9 peg solitaire on. You can play the game on a Go board; however, the best board is really a computer. On a computer you can easily backtrack and record move sequences, and taking the complement of a board position is trivial once a suitable button has been programmed. Many of the solutions below were discovered by hand using a JavaScript program which I modified from a version by JC Meyrignac. Unfortunately this game is rather hacked up, and I don't think anyone else would find it useful if they can't program in JavaScript.

The Central Game

Below is a diagram of a 45 move solution to the central game which can be generalized to larger square and rectangular boards.

Shortest Solution

What is the least number of moves the central game can be solved in? This question is quite difficult to answer computationally. In 1962, Robin Merson found an elegant argument which gives a lower bound for the length of a solution on the 6x6 board. This argument can be generalized to any square (or even rectangular) null class board, but on an n x n board the bound is not very tight if n is odd. The complement problem from a corner must use at least (n/2+1)² moves, where when n is odd one must round n/2 down to the nearest integer. If the problem does not begin in a corner, the bound is one less. On the 6x6 board this gives a lower bound of 15 for all problems that do not begin at a corner. In fact one can come up with solutions in 15 moves, so one immediately has a proof that they are optimal. On the 9x9 board the lower bound indicates that any solution must have at least 24 moves. The best solution I have seen was constructed by Alain Maye, by hand, and has 34 moves.
It is likely it can be done in fewer than 34 moves, but I doubt the minimum length solution is under 28 moves.

On the standard 33-hole board, it has been shown that no solution to the central game can pass through a position with rotational symmetry. This is also true for Wiegleb's board, but not for the 9x9 board. Once one discovers this is possible, how many positions of symmetry could a solution to the central game pass through? The solution below answers this question. After 8 jumps (or 6 moves) the board position becomes square symmetric (shown in red). The next 60 jumps come in sets of 4 moves that are rotational copies of one another, so every 4 moves you pass through a position with rotational symmetry (shown in green). Then the final 11 jumps (or 6 moves) finish at the center. The final solution has 8+60+11 = 79 jumps (or 72 moves) and passes through 16 positions with rotational symmetry, 5 of them being square symmetric. It is not possible to go through more than 16 positions with rotational symmetry, because one cannot be reached in under 8 jumps from either end.
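Merson's lower bound as described above is simple to compute. A sketch for square boards (the function name is mine):

```python
def merson_bound(n, corner_start=True):
    """Lower bound on the number of moves for a complement problem on an
    n x n null-class board: (floor(n/2) + 1)^2 moves when the problem
    begins at a corner, and one less otherwise."""
    bound = (n // 2 + 1) ** 2
    return bound if corner_start else bound - 1

print(merson_bound(6, corner_start=False))  # 15 on the 6x6 board
print(merson_bound(9, corner_start=False))  # 24 on the 9x9 board
```

These reproduce the figures quoted above: 15 moves on the 6x6 board and 24 on the 9x9 board for non-corner problems.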
Chapter 2: Skeletons

We begin our study of character animation by examining the skeleton, the underlying foundation of the digital character, upon which body, motion, and personality are built. A character’s skeleton is a pose-able framework of bones connected by articulated joints, arranged in a tree data structure. The skeleton itself is generally not rendered, but instead can be used as an invisible armature to position and orient render-able geometry such as a character’s skin, as we will see in later chapters.

The joints allow relative movement within the skeleton, and they are represented mathematically by 4x4 linear transformation matrices. By combining the rotations, translations, scales, and shears possible with these matrices, a variety of joint types can be constructed, including hinges, ball-and-socket joints, sliding joints, and various other custom types. In practice, however, many character skeletons can be set up using only simple rotational joints, as they can adequately model the joints of most real animals.

Every joint has one or more degrees of freedom (DOFs), which define its possible range of motion. For example, an elbow joint has one rotational DOF as it can only rotate along a single axis, while a shoulder joint has three DOFs, as it can rotate along three perpendicular axes. Individual joints usually have between one and six DOFs, but all together, a detailed character may have more than a hundred DOFs in the entire skeleton. Specifying values for these DOFs poses the skeleton, and changing these values over time results in movement, and is the essence of the animation process.

Given a set of specified DOF values, a joint local matrix can be constructed for each joint. These matrices define the position and orientation of each joint relative to the joint above it in the tree hierarchy. The local matrices can then be used to compute the world space matrices for all of the joints using the process of forward kinematics.
These world space matrices are what ultimately place the virtual character into the world, and can be used for skinning, rendering, collision detection, or other purposes.

In many ways, a digital character’s skeleton is analogous to the skeleton of a real animal. Real world animals with true bones are called vertebrates, and this group includes humans, mammals, reptiles, fish, and birds. The use of a virtual skeleton to animate these creatures makes perfect sense, but digital bones don’t necessarily have to correspond to actual bones. In addition to animating rigid movement, they can be used to animate facial expressions, soft tissues such as muscles and fat, mechanical parts such as wheels, or even clothing. Skeletons can be used to animate humans, aliens, robots, plants, cartoon characters, insects, vehicles, furniture, and more.

[Image: critters with skeletons]

In this chapter, we will examine the internal workings of the virtual skeleton. [Section 2.1] discusses the details of forward kinematics and how it is applied to skeletons, starting with a brief review of some 3D geometry and linear algebra topics. [Section 2.2] presents a variety of specific joint types that can be used in a character, as well as the matrix construction needed for these joints. [Section 2.3] introduces the concept of a pose, and [section 2.4] presents some implementation details on skeletons and their implications on real time performance.

2.1 Forward Kinematics

The term kinematics refers to the mathematical description of motion without considering the underlying physical forces. Kinematics deals primarily with positions, velocities, accelerations, and their rotational counterparts: orientation, angular velocity, and angular acceleration. In this chapter, we are concerned only with computing static poses for skeletons, and so we will limit our analysis mainly to positions and orientations. The skeleton itself is usually treated as a purely kinematic structure.
Higher-level systems may animate the skeleton with physical forces if desired, but those dynamic systems are typically layered on top of an underlying kinematic framework. We will examine dynamics, or the study of physically based motion, later in [chapter 12], but for now, we will concentrate on the kinematics of the skeleton.

Two useful kinematic analysis tools are forward kinematics and inverse kinematics. Within the scope of character animation, forward kinematics refers to the process of computing world space joint matrices based on specified DOF values, whereas inverse kinematics refers to the opposite problem of computing a set of DOF values that position a joint at a desired world space goal. Both forward and inverse kinematics are used in other fields such as robotics and mechanical engineering, and there is extensive literature available on the subject. We study forward kinematics here and will examine inverse kinematics later in [chapter 10].

2.1.1 Basic Kinematics

This section presents a review of some basic linear algebra and is intended mainly as an introduction to the notation and standards used throughout this book. It is not intended as a complete introduction to the subject, however, as there are numerous good books on linear algebra and introductory computer graphics [MOLL99], [BUSS], [LINEAR], [ROGERS].

Coordinate Systems

Before delving deeper into the subject of character animation, we must first make a few basic definitions about coordinate systems. Throughout the book, we will use a three dimensional, right handed coordinate system by convention, meaning that the z-axis is the positive cross product of the x- and y-axes, with x pointing to the right, y pointing up, and z pointing to the viewer.

Figure [x]: Right-handed coordinate system

Because the positive z-axis points outward, the viewer therefore looks in the –z direction.
To be consistent with this coordinate system methodology, a character in a ‘default’ orientation would be aligned with the viewer, and would therefore look in the –z direction as well. Lights, cameras, vehicles, and other objects that get positioned with matrices will all be assumed to be facing down the –z axis in their default orientation.

[Image: camera, character, light, & vehicle facing in –z direction]

Historically, different software and hardware rendering systems have disagreed upon the choice of coordinate systems, and many different standards exist. The use of a right-handed system with the positive z-axis facing the viewer is probably the most widely accepted of these standards within the computer graphics industry, and so it will be used here. In any case, it is always possible to change from one representation to another with one additional transformation (see [appendix A] for more details).

A vector v in 3D space has three individual scalar components representing its coordinates along the x-, y-, and z-axes. Vectors typically represent either a position or a direction, but they can also be used for more abstract constructs. The magnitude of a vector is a scalar representing the Euclidean length and can be computed as:

    |v| = sqrt(vx^2 + vy^2 + vz^2)

If we are only interested in the direction that a vector is pointing and not its magnitude, it is often more computationally convenient if the length of the vector is exactly 1. We define a normalize operation which returns a unit length vector as follows:

    normalize(v) = v / |v|

Most of the vectors we will use in this book represent some 3-dimensional geometric property, but they are not strictly limited to being 3D.

Homogeneous Space

It is common practice in computer graphics to perform vector computations using 4D homogeneous space.
For details on homogeneous space, consult an introductory graphics text such as [MOLL99], [BUSS] or review [appendix A].

[more: homogeneous space]

Matrix Format

The 4x4 homogeneous matrix is a useful tool in computer graphics due to its ability to represent both the position and orientation of an object in space. Matrices can transform geometric data from one space to another and they are used extensively throughout character animation for a variety of purposes. To be consistent with most graphics texts, we choose to define the matrices with the translation along the bottom row, instead of along the right column as in many engineering texts. The right hand column is mainly used for viewing projections and is rarely needed for character animation. In almost every 4x4 matrix used in this book, the right hand column will contain three 0’s starting from the top and a 1 at the bottom. Matrices will generally take the following format:

        [ ax  ay  az  0 ]
    M = [ bx  by  bz  0 ]
        [ cx  cy  cz  0 ]
        [ dx  dy  dz  1 ]

where a, b, and c are the three basis vectors defining the orientation of the matrix and d is the position. Usually, the three basis vectors will be of unit length and will be perpendicular to each other, making the matrix orthonormal or rigid, but this is not a strict requirement and some matrices may break that convention.

Figure [x]: Basis vectors a, b, c, and position d of matrix M

[more: image: equal sized axes, illustrate ‘d’ better]

A vector is transformed by a matrix in the following manner:

    v' = vM = vx·a + vy·b + vz·c + d

where v' is the resulting transformed vector, computed as a combination of the matrix rows a, b, c, and d. If v is a vertex in an object’s local coordinate system and M is a matrix placing the object in world space, then v' will be the vertex’s location in world space. The inverse of this transformation is written as:

    v = v'M^-1

where M^-1 is the matrix inverse of M. If M is a matrix that transforms from local to world space, then M^-1 will transform from world space to local space.
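The magnitude, normalize, and transform operations described in this section can be sketched in code. Here matrices are stored as four row vectors a, b, c, d with the translation d in the bottom row, and the constant fourth column (0, 0, 0, 1) is left implied — a simplification of mine, not the book's representation:

```python
import math

def magnitude(v):
    """Euclidean length of a 3D vector."""
    return math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)

def normalize(v):
    """Return a unit-length vector pointing in the same direction as v."""
    m = magnitude(v)
    return (v[0] / m, v[1] / m, v[2] / m)

def transform(v, M):
    """v' = vx*a + vy*b + vz*c + d for M = [a, b, c, d] stored as rows,
    with the translation d in the bottom row (row-vector convention)."""
    a, b, c, d = M
    return tuple(v[0]*a[i] + v[1]*b[i] + v[2]*c[i] + d[i] for i in range(3))

# Identity orientation with a translation of (5, 0, 0):
M = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (5, 0, 0)]
print(transform((1, 2, 3), M))   # (6, 2, 3)
```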
[more: matrix concatenation]

For more information on matrix operations and linear algebra, see [appendix A] or [MOLL99][BUSS].

World and Local Space

In 3D graphics and animation, we define a fixed coordinate system called the world coordinate system or simply world space, in which all objects, characters, effects, cameras, lights and other entities are ultimately placed. The terms global coordinate system and global space are also commonly used and mean exactly the same thing, but for consistency, we will stick with the use of the word world rather than global. Individual objects are typically defined in their own local space and make use of 4x4 matrices to transform to world space.

In a typical interactive graphics application, many different objects will need to co-exist in world space. Some of these objects are simple rigid objects, like a chair, for example. To manipulate the position and orientation of a simple object like this, we could use a single 4x4 matrix to transform the chair’s vertices from its local space to world space. This matrix is called the chair’s world matrix, as it positions the chair into the world.

A more complex object, such as a character, will have many moving parts, and so will require many matrices. In order to render the character in the world and perform other operations such as collision detection, we need to know the world space matrices of all of the joints in the character’s skeleton. It is difficult and unintuitive to specify character joint matrices directly in world space, and so skeletons are built up of a hierarchy of local transformations, each defined relative to the one above it. The joint matrices are defined in this local space and are converted to world space by the process of forward kinematics, described below.

Cameras and View Space

To render a view of the 3D world, we place a virtual camera with a matrix called a camera matrix.
The space representing what the camera sees is called view space, and objects are transformed into this space by the view matrix, which is the inverse of the camera matrix.

2.1.2 Joint Hierarchy

The topology of a skeleton is an open directed graph, or tree (also called a hierarchy). One joint is selected as the root and the other joints are connected up in hierarchical fashion. To keep the definition of a skeleton simple, we will restrict them to being open trees without any closed loops. This restriction doesn’t really prevent kinematic loops in the final animated character, as we will learn about in [chapter 10].

The nodes in the tree represent the joints of the skeleton. They could just as easily represent the bones, and in fact, there is little difference between the concept of a bone and a joint, as the motion of a particular bone is the same as the motion of the joint controlling it. In this book, the two will be treated as the same thing, i.e., we may occasionally refer to a joint such as the shoulder in the exact same way we would refer to the bone directly manipulated by that joint (in this case, the upper arm or humerus bone). For consistency, we will usually describe things in terms of joints unless the situation specifically warrants the use of bones.

[Figure x] shows an example skeleton for a simple robot character. The hierarchical structure of the same skeleton is shown in figure [x], with the root located at the top of the figure.

Figure [x]: Simple character skeleton

Figure [x]: Hierarchical graph of skeleton joints

Root Node

The choice of which node to make the root is somewhat arbitrary, but it is usually convenient to select something near the center of the character. A common choice on animals is somewhere in the spine, so that both the pelvis and torso can be attached underneath in the tree. The root can be treated as a special joint that is capable of unrestricted rotational and translational movement.
In most characters, all other joints would have restrictions on their motion.

Node Relationships

A node directly above another in the tree is that node’s parent. All nodes will have exactly one parent except for the root node, which has none. A node directly below another in the tree is that node’s child, and a node may have zero or more children. Child nodes inherit transformations from their parent nodes, so that if an elbow is rotated, for example, all of the joints in the hand will follow correctly. Nodes at the same level under a common parent are called siblings.

Figure [x]: Node hierarchical relationships

A child of a child (etc.) is called a descendant, and a parent of a parent (etc.) is called an ancestor. Nodes with no children are called leaf nodes, and nodes with children are called interior nodes.

It is said that a joint down in the tree inherits its transformation from its ancestors, that is, its own transformation builds on the ones that came above it. This concept can also be applied to other properties, such as rendering materials, or other visual properties, but we will not consider any of these other types of inheritance here. The inheritance of the linear transformation information is handled through the process of forward kinematics and relies specifically on matrix concatenation, which is discussed in [section 2.1.3].

Linearization of the Hierarchy

An alternative view of the skeleton hierarchy is presented in [figure x]. It contains essentially the same information as the view in [figure x], but it is rearranged to list the joints in a linear fashion. Accessing joints as a linear array can be convenient in many situations, and if a design calls for it, it is easy for both tree and array representations to coexist.

1. Root
2. Torso
3. Neck
4. Head
5. ShoulderL
6. ElbowL
7. WristL
8. ShoulderR
9. ElbowR
10. WristR
11. Pelvis
12. HipL
13. KneeL
14. AnkleL
15. HipR
16. KneeR
17. AnkleR

Figure [x]: Linear representation of character hierarchy

Depth-First Tree Traversal

To compute world space joint matrices, we will need to perform a depth-first tree traversal of the skeleton. A depth-first traversal starts at the root node and traverses down through each of the children. When a child node is visited, each of its children is then traversed. Only when all of a node’s children have been visited does control return to the parent node. In this way, all nodes are visited once. When a node is visited, an arbitrary operation can be performed; in the case of a skeleton, it will be forward kinematics computations.

Figure [x]: Depth-first tree traversal order

The linearized hierarchy presented in [figure x] lists the nodes of the skeleton in the same order that they would be accessed in a depth-first traversal. One can see in this representation that before any particular node in the tree is reached, all of its ancestors will have been traversed already. This ensures that the necessary information about the node’s parent will have been computed already. It is also possible to traverse the hierarchy in a breadth-first traversal, but this is generally a bit worse on caching performance, and won’t be discussed in this book.

2.1.3 Skeleton Kinematics

Joint DOFs

A movable joint has one or more degrees of freedom, but typically they won’t have more than three. A free rigid body has 6 DOFs (3 to describe its position and 3 more to describe its rotation), but there isn’t really any reason why a joint couldn’t have 6 or even more DOFs. The root joint of a skeleton can be treated as a 6-DOF joint in most cases, unless the skeleton is somehow constrained to a fixed coordinate system. The term ‘degree of freedom’ is a general term that includes not only joint angles, but also joint translations, scales, or any other types of motion a joint may allow.
In the next few chapters, we will see how the concept of a DOF can be extended further to include any other property we may wish to animate. As DOFs can represent different types of quantities, it is important to keep track of the units used. For example, a rotational DOF could use degrees, radians, or any other unit, as long as it is used consistently. Throughout the book, we will assume that rotational DOFs use radians and translational DOFs use meters.

Joint Local Matrix

A joint must take the input DOF values and use them to generate a joint local matrix. This matrix is a 4x4 homogeneous transformation matrix that defines the joint's current position and orientation relative to its parent joint. Different types of joints use different methods for generating this matrix. We will examine several common joint types and their corresponding local matrices in [section 2.2] later in this chapter.

Joint Offset

Joints will typically have a fixed offset position in the parent node's space, which acts as a pivot point for the joint's movement. The pivot point of an elbow, for example, stays at a fixed location relative to the shoulder joint as the shoulder joint itself moves about. For flexibility, we treat this offset as a general 3D vector r and use it for all joint types. To handle this offset mathematically, we add the offset vector r to the bottom row when we construct the joint local matrix. A joint local matrix L that does nothing other than apply this constant offset would be written:

    L = | 1   0   0   0 |
        | 0   1   0   0 |
        | 0   0   1   0 |
        | rx  ry  rz  1 |

For a 1-DOF rotational joint that pivots about the x-axis by an angle θ, the matrix would be:

    L = | 1     0      0     0 |
        | 0   cos θ  sin θ   0 |
        | 0  -sin θ  cos θ   0 |
        | rx    ry     rz    1 |

[Figure x] illustrates a rotational joint with a fixed offset. Joint local matrices for other joint types are discussed in [section 2.2].

[Image: rotational joint with fixed offset]

Joint Orientation

Some 3D animation systems allow a full fixed transformation to be applied to the joint instead of just a positional offset.
The use of a full transformation means that we must apply a matrix multiplication to compute the complete joint local matrix, instead of simply adding a translation to the bottom row. The purpose of this full transformation is to allow joints to rotate or translate about arbitrary axes, but as we will see throughout [section 2.2], there are other straightforward ways to achieve this. Still, a full fixed orientation change can be supported for individual joints if desired. We will avoid this extra matrix, however, as we prefer other means for achieving the same results.

Joint Limits

Joint DOFs can have limits on their range of movement. For example, the human elbow can bend to about +150 degrees (about 2.1 radians) and hyperextend back as much as -10 degrees (about -0.17 radians). Limits should be settable on a DOF-by-DOF basis. In practice, it is common to have minimum and maximum limits for each DOF that can be enabled or disabled independently.

[Image: joint limits]

For most joints, the DOF values are completely independent and can easily be clamped to within legal limits on an individual basis. It may sometimes be desirable to describe joint limits with a more geometric or general-purpose method. This is especially true for quaternion rotational joints, where joint limits can't be implemented by simple clamping. We will examine geometric joint limits in more detail in [section 2.2.2].

Matrix Concatenation [more: clean up]

Concatenating the local matrices to make the world space matrices is straightforward and makes use of matrix algebra and the very useful properties of 4x4 homogeneous matrices. To compute all of the world space matrices for a skeleton, we begin at the root and perform a depth-first tree traversal.
For each joint visited in the traversal, we compute its world space matrix W_joint by multiplying its local matrix L_joint by its parent's world space matrix W_parent:

    W_joint = L_joint · W_parent

The root node has no parent, so W_parent is just the identity matrix, which makes the root's world space matrix equal to its joint local matrix. Many modern CPUs and graphics processors are equipped with vector floating point units designed specifically to handle 4x4 matrix concatenation and similar computations. Taking advantage of features like these should result in significant performance gains.

Skeleton Forward Kinematics Algorithm

The end result of the forward kinematics process for a skeleton is a set of world space matrices, one for each joint. If we assume, for now, that the character is posed by some higher-level system and its joint DOF values are all specified, then the two main computational steps needed per joint to compute the world space matrices are:

1. Generate the joint local matrix.
2. Concatenate the joint local matrix with the parent's world matrix to compute the world space matrix.

There are a variety of different local matrix constructions depending on the types of joints used, but the matrix concatenation phase, where the world space matrices are computed, is simple and uniform across joint types. [more: elaborate on algorithm]

2.2 Joint Types

In this section, we examine several different joint types and present formulas for constructing their joint local matrices.

2.2.1 Rotational Joints

In realistic characters, most or all joints will be rotational. Both 1-DOF and 3-DOF rotational joints are common, and 2-DOF joints are used occasionally as well.

1-DOF Rotation

Perhaps the most useful joint type in computer character animation is the 1-DOF rotational joint, sometimes called a hinge joint. Elbows and knees are good examples of hinge joints. Multiple 1-DOF hinge joints can be combined to construct 2- or 3-DOF joints if desired, but those joints can also be treated as unique types.
The hinge joint can be specified to rotate about any axis. Most often, animation systems allow users to create joints that rotate about the local x, y, or z axes, but it is also possible to define joints that rotate about an arbitrary axis. By definition, a positive rotation about an axis causes an object to rotate counterclockwise when viewed from the direction the axis is pointing. A general 1-DOF rotational joint is illustrated in [figure x].

[Figure x]: 1-DOF hinge joint

To formulate the complete joint local matrix for an x-axis rotational joint, we add the positional offset vector r to the bottom row of the matrix to get:

    L = | 1     0      0     0 |
        | 0   cos θ  sin θ   0 |
        | 0  -sin θ  cos θ   0 |
        | rx    ry     rz    1 |

Similarly, the joint local matrix for a hinge joint that rotates about the positive y-axis is:

    L = | cos θ  0  -sin θ   0 |
        |   0    1     0     0 |
        | sin θ  0   cos θ   0 |
        |  rx    ry    rz    1 |

and for rotation about the positive z-axis:

    L = |  cos θ  sin θ  0   0 |
        | -sin θ  cos θ  0   0 |
        |    0      0    1   0 |
        |   rx     ry    rz  1 |

It is often desirable to allow hinge joints to rotate about an arbitrary axis. Given an arbitrary unit vector a defining the desired axis of rotation, the joint local matrix can be derived from the axis-angle (Rodrigues) rotation form, with the offset r again placed in the bottom row, where c = cos θ and s = sin θ.

2-DOF Rotation

2-DOF rotational joints can be found in places such as the wrist, the clavicle-sternum joint, and the first joint of the thumb. A universal joint in an automobile drive shaft is another example, and indeed, 2-DOF rotational joints are sometimes referred to as universal joints. 2-DOF rotational joints can be constructed as a combination of two sequential rotations about different axes. Usually, two principal axes are chosen, such as xy, xz, or yz, but one could create a 2-DOF joint out of any two arbitrary axes if desired. Joint local matrix formulas for xy, xz, and yz joints are formed by multiplying the corresponding single-axis rotation matrices and adding the offset r to the bottom row, where cx = cos θx, sx = sin θx, etc. It should be noted that a 2-DOF joint could simply be constructed by connecting together two 1-DOF joints, or even by using a 3-DOF joint and setting one of the DOFs to zero.
Either of these options is of course possible; however, if 2-DOF joints are required, supporting them explicitly is likely to be slightly faster, at the expense of some minor additional code complexity.

3-DOF Rotation

3-DOF rotational joints are found in important joints in the body, such as the ball-and-socket joints of the hips and shoulders. As mentioned earlier, it is possible to construct a 3-DOF rotational joint out of 3 independent 1-DOF joints, but it is still worth considering a 3-DOF joint as a unique type.

[Image: 3-DOF joint]

Rotation Order

Matrix multiplication is not commutative; that is, AB is not generally equal to BA. This means that attention must be paid to the order in which rotations are performed in multiple-DOF rotational joints. Often, commercial animation packages allow the user to specify an arbitrary rotation order for each joint (such as xyz, xzy, yxz, and so on). Sometimes, this extra flexibility is useful in interactive applications as well. [more: rotation order, Euler angles, problems, multiple representations, gimbal lock…]

2.2.2 Quaternion Joints

Rotational joints can also be implemented with quaternions. A quaternion is a mathematical construct that can represent an arbitrary 3D orientation without some of the complications that Euler angles are prone to. Quaternions were first introduced by William Hamilton in 1843 and further developed by Arthur Cayley, Josiah Gibbs, and others. They have more recently become a popular method in computer graphics for handling orientations since their introduction to the graphics literature in [SHOE85], and they remain an active research topic in graphics, physics, engineering, and mathematics.

Quaternion Definition and Mathematics

A quaternion is a vector in 4D space that can be used to define a 3D rigid body orientation:

    q = (q0, q1, q2, q3)

Usually, quaternions are constrained to be of unit length, and we will apply this constraint to all quaternions used in this book.
A quaternion can be thought of as a rotation about an arbitrary axis. Any orientation can be represented by a single rotation about some unit-length axis a by some angle θ, and quaternions are related to this axis and angle by the following formula:

    q = ( cos(θ/2), ax·sin(θ/2), ay·sin(θ/2), az·sin(θ/2) )

The joint local matrix for a quaternion joint with q = (q0, q1, q2, q3) and offset r is:

    L = | 1-2(q2²+q3²)   2(q1q2+q0q3)   2(q1q3-q0q2)   0 |
        | 2(q1q2-q0q3)   1-2(q1²+q3²)   2(q2q3+q0q1)   0 |
        | 2(q1q3+q0q2)   2(q2q3-q0q1)   1-2(q1²+q2²)   0 |
        |      rx             ry             rz        1 |

A brief introduction to quaternions is provided in [appendix A], and quaternion interpolation is discussed in [chapter 6]. For more information about quaternion mathematics and its uses, see [KUIP99], [BUSS], and [SHOE85].

Joint Limits with Quaternions

Because the 4 variables of a quaternion don't correspond to intuitive geometric values, it is not practical to implement quaternion joint limits by simply clamping the 4 variables to some predefined range, as is done for other DOF types. This makes it necessary to take a more geometric approach to defining joint limits for quaternion joints, which can actually be more powerful and general-purpose than the simple DOF clamping approach. [more: cone joint limits]

Quaternions vs. Euler Angles

Although quaternions are certainly a powerful way to store and manipulate arbitrary orientations, they are not necessarily a total replacement for the more traditional Euler angle approach. Clearly, for 1-DOF rotational joints, a single-axis rotation matrix is faster and simpler than a quaternion, so a general-purpose skeleton system should be prepared to support a variety of joint types and configurations. If multiple joint types are allowed, it isn't very difficult for an animation system to support both Euler angle joints and quaternion joints, selectable on a joint-by-joint basis. It may even be desirable to allow rotational joints to be handled as quaternions in some situations and as Euler angles in others. For the purposes of this book, however, we will treat the quaternion joint as a unique joint type.
Quaternions tend to be particularly useful when there is a need to interpolate between arbitrary orientations without suffering from gimbal lock and the order-dependent problems found with Euler angles. The human shoulder is a good example of a joint that often needs to interpolate between widely varying orientations. However, there are occasions when the less sophisticated Euler interpolation scheme actually works better and may even look more natural. The human hip is a good example: even though the hip is a ball-and-socket joint like the shoulder, it has a more limited range of motion, making it less prone to Euler interpolation problems. The important point is that there isn't one method that works best in all situations. The following chart compares some of the relevant issues between the two:

    Quaternions                                   | Euler Angles
    ----------------------------------------------|----------------------------------------------
    Handle arbitrary interpolation well           | Don't handle arbitrary interpolation well
    One consistent representation                 | 12 (or even more) possible representations
    Joint limits require custom handling          | Joint limits are very simple
    4 interrelated variables, which may require   | 3 totally independent variables
    special handling by higher-level code         |
    Interpolation computations are slower         | Interpolation computation is fast
    Conversion to matrix format is very fast      | Conversion to matrix format requires 3 sin()
                                                  | and 3 cos() evaluations (or table lookups)
    May require occasional renormalization        | No normalization required

Both quaternions and Euler angles have their place in computer animation, and so it helps to study and understand the properties of both schemes.

2.2.3 Translational Joints

Translational joints are not as common in computer character animation as rotational joints, but they are still important to consider. Generally, real-life creatures don't have translational joints, but mechanical creatures such as robots or other animated mechanisms may contain them.
[Image: translational joint]

Like rotational joints, translational joints can be specified to translate along any axis. Translational joints with a single degree of freedom are called prismatic joints. A shock absorber in a car's suspension system is a good example of a prismatic joint. A general definition of a prismatic joint consists of a fixed offset vector r and a unit vector a representing the axis of translation, and the joint local matrix for a translation DOF value t would be constructed like this:

    L = |     1          0          0      0 |
        |     0          1          0      0 |
        |     0          0          1      0 |
        | rx + t·ax  ry + t·ay  rz + t·az  1 |

Translational joints require very little computation, and in cases where a is a principal axis, the math reduces even further. It is also possible to make 2-DOF and 3-DOF translational joints. For example, the joint local matrix for a 3-DOF translational joint that takes a translation vector t and has a fixed offset r would simply place r + t in the bottom row:

    L = |    1        0        0     0 |
        |    0        1        0     0 |
        |    0        0        1     0 |
        | rx + tx  ry + ty  rz + tz  1 |

2.2.4 Compound Joints

Sometimes it may be desirable to create joints that combine several different types of motion under the control of relatively few DOFs. We will define these as compound joints, which can include any linear transformations we choose to use for joints. We will briefly look at three examples: 6-DOF joints, screw joints, and curve joints, but one could create one's own compound joints by writing a function that takes a set of DOFs and generates a linear transformation using any rules desired.

6-DOF Joint

A free rigid body has 6 DOFs and can be constructed from the rotational and translational joint types already discussed. However, because 6-DOF joints are common, both for rigid objects and for the root node of articulated objects, it may be convenient to define a single joint type that incorporates all the necessary DOFs. To do this, one can use any of the 3-DOF rotational joint types defined above (including the quaternion joint) and combine it with the 3-DOF translational joint type. Treating a 6-DOF joint as a single joint reduces the matrix computations in the kinematics process.
An example of a 6-DOF joint local matrix that uses the xyz Euler rotation order would be the product of the three single-axis rotation matrices, with the 3-DOF translation placed in the bottom row.

Screw Joints

Another example of a compound joint is a screw joint, which combines a rotation and a translation. These two operations are not independent, however; they are controlled by a single degree of freedom, in a way similar to the motion of a screw. The translation d along the x-axis is related to the rotation angle θ by a simple scalar rate k (the distance translated per radian of rotation):

    d = kθ

The joint local matrix for a screw joint rotating and translating about the positive x-axis would be:

    L = |    1        0      0     0 |
        |    0     cos θ   sin θ   0 |
        |    0    -sin θ   cos θ   0 |
        | rx + kθ    ry      rz    1 |

It is left to the reader to construct other screw matrices.

[Image: screw joint]

Curve Constraint Joint

Another example of a compound joint type is a translation along a curve. It combines translation in two or three dimensions, but the translations are grouped under a single degree of freedom.

[Image: curve constraint]

A formula for generating the joint local matrix for a curve constraint joint could work like this:

    L = |   1      0      0    0 |
        |   0      1      0    0 |
        |   0      0      1    0 |
        | cx(u)  cy(u)  cz(u)  1 |

where c(u) is some function that returns the 3D position of the curve at parameter u. All of these examples of compound joints ultimately generate linear transformations, and so they are compatible with our definition of a joint. Like the other joint types, they take a small number of input DOF values and use them to generate a local joint matrix. A train car moving along a winding track can be thought of as another elaboration on the curve joint concept: even though the train car may translate and rotate in all three dimensions as it moves along the track, it is still constrained to a single degree of freedom and can be represented as a 1-DOF compound joint.

2.2.5 Non-Rigid Joints

Rotations and translations are examples of orthonormal, or rigid, transformations. A rigid transformation can move an object but does not distort its shape in any way. Although less commonly used, it is possible to create joints out of non-rigid linear transformations such as scales and shears.
Non-rigid transformations can come in handy when one is trying to construct and animate cartoon characters that may deform in ways a real character would not. Non-rigid matrices can be treated in exactly the same way as rigid matrices in many situations, but there are a few cases where additional considerations must be made.

Dealing with Normals [more: clean up]

When geometry is transformed by a non-rigid matrix, the normals can no longer be transformed by the upper 3x3 portion of the matrix, as they can with a rigid transformation. An illustration of this problem is shown in [figure x].

Figure [x]: Object subjected to a non-uniform scale, showing that the transformed normals are no longer perpendicular to the surface

To properly transform a normal n through a non-rigid 3x3 matrix W, we must use the inverse transpose of W:

    n' = n · (W^-1)^T

Explicitly computing (W^-1)^T per joint can be expensive due to the full matrix inversion required. (W^-1)^T can usually be computed much more cheaply if it is computed as part of the forward kinematics process. If one is willing to store a copy of (W^-1)^T per joint, then it can be computed as:

    (W^-1)^T = (L^-1)^T · (W_parent^-1)^T

This involves constructing an inverse joint local matrix L^-1 and performing one matrix multiplication. If these matrices are only going to be used for transforming normals, they only need to be 3x3. In most cases the construction of L^-1 is straightforward: rotational and translational joints simply negate the joint angles or translations used in the construction of L, and scale matrices replace each scale value s with 1/s. [more: details of L^-1 construction] [more: 3x3 vs. 4x4, fast inversion]

Adding these extra steps to the forward kinematics algorithm and storing an additional matrix per joint are two of the costs that must be paid when one wants to support non-rigid matrices properly.
Supporting non-rigid joint transformations adds costs and complexities to the system that will become further apparent when we examine skinning in [chapter 3] and inverse kinematics in [chapter 10].

Scale Joints

There are a variety of ways to create scale matrices. Scales may be along 1, 2, or 3 axes. The joint local matrix for a uniform scale that affects all axes equally by a factor s is:

    L = | s   0   0   0 |
        | 0   s   0   0 |
        | 0   0   s   0 |
        | rx  ry  rz  1 |

For scaling along only the x-axis:

    L = | s   0   0   0 |
        | 0   1   0   0 |
        | 0   0   1   0 |
        | rx  ry  rz  1 |

And for scaling independently along all three axes:

    L = | sx  0   0   0 |
        | 0   sy  0   0 |
        | 0   0   sz  0 |
        | rx  ry  rz  1 |

It is sometimes desirable to have scales that preserve the overall volume of an object. This can be a simple way to achieve a stretching effect and can be a useful joint type for cartoon-style characters. A volume-preserving joint that scales along the x-axis by a factor sx (and compensates along y and z) is:

    L = | sx    0       0     0 |
        | 0   1/√sx     0     0 |
        | 0     0     1/√sx   0 |
        | rx    ry      rz    1 |

Different variations of these scale matrices can be formulated for other axes. [more: scale joints]

[Image: scale]

Shear Joints

Shears are another type of non-rigid linear transformation that can be used for interesting cartoon-like character effects and simple shape distortions. A shear is a transformation that offsets each point along one axis in proportion to its coordinate along another axis. [more: shear]

[Image: shear]

Nonlinear Joints

Typically in computer character animation, joints are limited to linear transformations (such as rotations, translations, scales, and shears). This keeps things simple and fast and takes advantage of the built-in matrix support on modern CPUs. It is possible, however, to allow joints to perform nonlinear transformations, such as bends, twists, or any other local or global deformations. In this book, we choose to treat these nonlinear functions as skinning operations that do not affect the underlying skeleton, and we deal with them in [chapter 3].

2.3 Implementation Issues

There are two main computational parts to the skeletal forward kinematics system: local joint matrix construction and world matrix concatenation.
There are several subtle details in the implementation of a skeleton framework that will have an impact on the system's flexibility, performance, and memory usage. We will look at some of those details here. A very simple humanoid skeleton model might contain around 20 joints. A more complex model with articulated fingers and other details might have 50, 100, or even more joints. Applications with a wide variety of different characters, or a large number of similar characters, will definitely have to consider memory and performance issues relating to the number of active joints in the world.

Hardware Vector Units

Many modern CPUs and graphics processors have considerable hardware support for floating point vector and matrix routines. Skeleton kinematics algorithms can typically be implemented very efficiently on such systems in very few clock cycles, so making the most of the available vector processing should pay off when performance is critical.

3x4 Matrices

For software matrix math implementations, it may be perfectly acceptable to use 3x4 matrices instead of 4x4. We can simply assume that the right-hand column is always [0,0,0,1]^T and implement the matrix algebra routines accordingly. Generally, character animation systems can get away with using 3x4 matrices throughout, for everything except the final view projection. Using 3x4 matrices saves a significant amount of computation in matrix operations: multiplying two 4x4 matrices requires 64 multiplies and 48 adds, while multiplying two 3x4 matrices (treated as affine transformations) requires only 36 multiplies and 27 adds.

Local Matrix Storage

One issue to consider is how matrices are stored per joint. A simple implementation might just store one local matrix and one world matrix for every joint in a character. But is it really necessary to allocate permanent storage space for this data, or can the matrices be created only when needed?
Often, the local matrix is only needed for a short time and can be quite temporary. On modern vector CPUs, the local joint matrix might only need to exist in vector registers. Local matrix construction may involve different types of operations depending on the joint type (rotational, translational, and so on), but for the more common rotational joints, the construction involves a few sin() and cos() evaluations and possibly several multiplications and additions.

World Matrix Storage

Unlike the temporary local matrices, world space matrices are usually needed for longer. Typically, an application needs at least all of the world matrices for a single character to exist at one time, to be used for skinning and rendering. For very simple applications that don't use skinning, the need to retain world matrices can be eliminated by using a matrix stack. More commonly, however, the application might require the world space matrices for all active characters to exist at the same time, to be used for collision detection or other environment interactions. The software engineer will have to make the appropriate tradeoffs between memory usage, performance, flexibility, and code complexity.

Transforming to View Space

It should be noted that for some applications, additional performance may be gained by transforming the skeleton directly into view space, the space defined relative to the camera viewing the world. This can be effective in some situations, but not in others that require objects to be placed in a consistent world space for collision detection, AI, or other purposes. Also, in applications that involve multiple camera views of the same character, it is better to transform everything to a common world space before transforming to the individual camera spaces.

Recursion and Function Calls

The simplicity of the forward kinematics algorithm means that it can be coded in very few cycles per joint on modern graphics hardware.
In this situation, the overhead of an actual recursive function call per joint can start to add up, and it is worse if a virtual function call is issued per joint to support multiple joint types. If function call overhead becomes a measurable bottleneck in the forward kinematics implementation, it can be eliminated entirely in most cases. Because the algorithm does all of a joint's processing before moving on to its children, and because a depth-first linearization of the hierarchy guarantees that every parent is processed before its children, the recursion can be replaced by a simple loop over the joint array, with no local variable stack required. The entire forward kinematics for a skeleton can therefore be computed within a single small function. Support for a wide variety of joints and other options can complicate this, and in those situations one may have to trade some performance for flexibility. [more: array processing]

Separation of Constant and Variable Data

An important area where memory can be saved is the separation of constant and variable data. Constant data does not change, while variable data may actively change over time. In skeletal terms, constant data might include joint offsets, joint types, and joint limits, while variable data would include the DOF values and world matrices. While it is tempting to store all of this joint data in a single class, it may be advisable to separate the two so that character types can be instanced or shared among many active characters. Again, the software engineer must be aware of these issues in order to make informed decisions about the implementation.

Visualizing the Skeleton

The skeleton itself is usually not drawn in the final animation; it is an invisible structure that exists for the convenience of the animators. Still, no interactive character animation system would be complete without some method of visualizing the actual skeleton, for debugging purposes if nothing else.
Supporting the ability to turn off the character's skin and display the underlying skeleton can be very helpful to people using the system. The bones of the skeleton can be drawn as simple lines, boxes, cylinders, or any other geometric representation desired. It is also useful to draw the three vectors forming the basis of each joint matrix. Additional useful features include drawing data specific to each joint type, such as joint axes or joint limits.

Skeleton Data Files

[more: Skeleton files] [more: single joint type vs. derived joints]

A C++ pseudocode algorithm for computing the forward kinematics of a skeleton is presented below. It is a recursive function that first generates the local matrix for a joint and then concatenates it with the parent's matrix to compute the joint's world matrix. It then recursively calls the same function on each of its children, thus transforming the entire skeleton. The ComputeLocalMatrix() function could use any of the techniques presented in [section 2.2] to generate a local matrix.

Joint::ComputeWorldMatrix(Matrix44 parentMtx) {
    Matrix44 localMtx = ComputeLocalMatrix();
    WorldMtx = localMtx * parentMtx;
    for i = 1 to NumChildJoints {
        Child[i].ComputeWorldMatrix(WorldMtx);
    }
}

The process is started by calling ComputeWorldMatrix() on the root joint, passing in the identity matrix as the parent. [more: pseudocode]

2.4 Summary

In this chapter, we examined the mathematics of the underlying kinematic framework for the virtual character: the skeleton. Skeletons in character animation are typically built from a hierarchy of rigid bones connected by articulated joints. Each joint has one or more degrees of freedom (DOFs), which describe its articulation. These DOF values are set by higher-level systems that pose the skeleton, and animation is the process of changing the DOF values over time.
The skeleton system uses the process of forward kinematics to take these specified DOF values and compute the final world space matrices of the joints, which can then be used for skinning, collision detection, or other purposes. The forward kinematics computational process involves a depth-first traversal through the skeleton hierarchy. For each bone traversed, the two main computations that need to take place are construction of the joint local matrix and then computation of the joint world matrix by concatenating the local matrix with the parent’s world matrix. Many options exist for generating local matrices for different joint types. Some real time animation applications may do just fine limiting their characters to simple 1-DOF rotational joints and can implement the entire skeleton kinematics system in a few lines of code. Other systems may require more generality and may need to support a wider variety of joint types and other options. If the system needs to support non-rigid transformations such as scales and shears, additional complexities must be dealt with, both in the skeleton layer and in higher-level systems that use it such as inverse kinematics and skinning. As the skeleton is an essential foundation for the animated character, the remaining chapters in this book will build upon the basic framework that the skeleton provides. In the next chapter, we will see how to attach a deformable skin to the skeleton.
Some people get uncomfortable every time they see a letter of the alphabet mixed with numbers. They understand what to do with the numbers, but not those letters. The letters represent numbers that change in value or are unknown. They are called variables since the numbers those letters represent vary from time to time. A good example is pay. If a person is paid $20 per hour, to find his weekly pay, you would multiply $20 by the number of hours he works each week. The problem is that you may not know how many hours he will work each week. It may be 40 hours one week and 37 the next. However, if you represent the number of hours with an h, you can say that his weekly pay is $20 × h. The h is a variable that represents a real number that varies from week to week. The $20 represents the hourly rate of pay. To work with any variable, you must know exactly what it represents. There are many variables in this book. As you work with them, you will learn what they represent.
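The pay example above translates directly into code; here is a minimal sketch (the function name `weekly_pay` is mine; the $20 rate and the 40- and 37-hour weeks are the figures from the text):

```python
# Weekly pay as a function of the variable h (hours worked).
HOURLY_RATE = 20  # the hourly rate from the example

def weekly_pay(h):
    """Return the weekly pay, in dollars, for h hours of work."""
    return HOURLY_RATE * h

# h varies from week to week, so the pay does too.
print(weekly_pay(40))  # 800
print(weekly_pay(37))  # 740
```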
{"url":"http://mathforum.org/sarah/hamilton/ham.variables.html","timestamp":"2014-04-16T19:48:16Z","content_type":null,"content_length":"4523","record_id":"<urn:uuid:802e3326-b5ef-4e0d-b19e-420b2d137efe>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
This problem popped up in another thread and can be solved by standard methods. For the curve y = x^2 + 5x: a) find the gradient of the chord PQ, where P is the point (2, 14) and Q is the point (2+h, (2+h)^2 + 5(2+h)). Let's see what GeoGebra can do. 1) Draw the curve by entering f(x) = x^2 + 5x. 2) Enter the point (2,14). Call it P. 3) Create a slider called h. Range it from -5 to 5. 4) Create a new point (2+h, h^2 + 9h + 14). Call it Q. 5) Draw a line between P and Q. Get the slope m of that line using the slope tool. 6) Slide h back and forth and notice the value of m in the algebra pane. 7) Record those values like this: 8) Conjecture the obvious relationship of m = h + 9. Your GeoGebra worksheet should look something like this. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians, are contemptuous about proof.
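The GeoGebra steps above can be mirrored numerically. This sketch (function names are my own, not GeoGebra commands) computes the slope of the chord PQ for several values of h and checks the conjectured relationship m = h + 9:

```python
def f(x):
    # the curve from the problem
    return x**2 + 5*x

def chord_slope(h):
    """Slope of the chord PQ, with P = (2, 14) and Q = (2 + h, f(2 + h))."""
    return (f(2 + h) - f(2)) / h

# Sliding h back and forth, the slope m is always h + 9.
for h in [-5, -1, 0.5, 3, 5]:
    m = chord_slope(h)
    assert abs(m - (h + 9)) < 1e-9
```

The algebra behind the conjecture: (f(2+h) − 14)/h = (h² + 9h)/h = h + 9 for any h ≠ 0.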
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=17656","timestamp":"2014-04-17T03:57:48Z","content_type":null,"content_length":"10427","record_id":"<urn:uuid:564f29ee-8bd1-42d2-8d7d-3b18bfc03d3e>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
Calumet City Statistics Tutor ...I have a good knowledge of both the theoretical and the applied sides of the subject. I have taken several Probability courses as an undergrad and graduate student. I have learned the basics of Probability Theory, as well as the applications. 5 Subjects: including statistics, algebra 1, prealgebra, probability ...Over the last seven years I have tutored hundreds of high school and college students in chemistry. PRE-ALGEBRA TOPICS: Order of Operations Introduction to Algebra The Commutative, Associative, and Distributive Laws Fraction Notation Positive and Negative Real Numbers Addition of Real Numbers ... 22 Subjects: including statistics, English, chemistry, biology ...I have also taught PLTW in basic electronics. I have 20-plus years of experience in heavy industry (steel mills and the like) and I interpret/translate English/Greek and vice-versa. Because of my experience with the steel industry, I can bring in real-world problems showing where the math is used. 12 Subjects: including statistics, calculus, geometry, algebra 2 ...I have dozens of creative ways to help students get past their barriers in understanding it. I'm very capable in this subject - proof-based geometry requires students to think like mathematicians or lawyers. I already have a mathematician's intuitions, and I know so many ways to push students to greater understanding. 21 Subjects: including statistics, chemistry, calculus, geometry ...As a result, I have become proficient in differentiating instruction to meet the needs of every learner by using techniques such as: small-group reteaching, effective academic feedback, hands-on activities, active learning and engaging lessons. I use student data from activities and tests to plan curriculum. This method allows me to determine student mastery. 70 Subjects: including statistics, English, chemistry, reading
{"url":"http://www.purplemath.com/Calumet_City_Statistics_tutors.php","timestamp":"2014-04-20T13:22:13Z","content_type":null,"content_length":"24256","record_id":"<urn:uuid:4cad6218-010e-44a5-8a41-dc8da6c18edd>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
190 kph to mph You asked: 190 kph to mph (kilometres per hour to miles per hour). Answer: the speed 118.060526525093 miles per hour. Say hello to Evi. Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site. Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
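Behind the answer is a single division by 1.609344, the number of kilometres in one international mile; a quick sketch (the function name is mine):

```python
KM_PER_MILE = 1.609344  # kilometres in one international mile (exact by definition)

def kph_to_mph(kph):
    """Convert a speed in km/h to miles per hour."""
    return kph / KM_PER_MILE

print(kph_to_mph(190))  # ≈ 118.060526525093 mph, matching the answer above
```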
{"url":"http://www.evi.com/q/190_kph_to_mph","timestamp":"2014-04-17T18:41:41Z","content_type":null,"content_length":"51830","record_id":"<urn:uuid:b9d1e1d1-c912-4594-af35-16d4c5776a78>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
From Citizendium, the Citizens' Compendium. Revision as of 14:41, 29 November 2008. In physics, the polarizability of an electric charge-distribution ρ describes the ease with which ρ can be polarized under the influence of an external electric field E. To explain the concept of polarization of a charge distribution, it is noted that an electric field E is a vector, which by definition "pushes" a positive charge in the direction of the vector and "pulls" a negative electric charge in the opposite direction (against the direction of E). Because of this "push-pull" effect the field will distort the charge-distribution ρ, with a build-up of positive charge on that side of ρ to which E is pointing and a build-up of negative charge on the other side of ρ. One calls this distortion the polarization of the charge-distribution. Since it is implicitly assumed that ρ is stable, there are internal forces that keep the charges together. These internal forces resist the polarization and determine the magnitude of the polarizability. The concept of polarizability is very important in atomic and molecular physics. In atoms and molecules the electronic charge-distribution is stable, as follows from quantum mechanical laws, and an external electric field polarizes the electronic charge cloud.
The amount of shifting of charge can be quantitatively expressed in terms of an induced dipole moment. A dipole moment of a continuous charge-distribution $\rho\,$ is defined by $\mathbf{p} \equiv \iiint \; \mathbf{r}\, \rho(\mathbf{r}) \, \mathrm{d}x\mathrm{d}y\mathrm{d}z .$ If there is no external field we call the dipole permanent, written as p^perm. A permanent dipole moment may or may not be equal to zero. For highly symmetric charge-distributions (for instance those with an inversion center), the permanent moment is zero. Under the influence of an electric field the charge-distribution will distort and the dipole moment will change, $\mathbf{p}^{\mathrm{ind}} \equiv \mathbf{p}- \mathbf{p}^{\mathrm{perm}},$ where p^ind is the induced dipole moment, i.e., the change in dipole due to the polarization of the charge-distribution. Assuming a linear dependence on the field, we define the polarizability $\alpha\,$ by the following expression: $\mathbf{p}^{\mathrm{ind}} = \alpha \, \mathbf{E}.$ This relation can be generalized to higher powers in E (in the general case one uses a Taylor series); the polarizabilities arising as factors of E^2 and E^3 are called hyperpolarizabilities and hyper-hyperpolarizabilities, respectively. We assumed that p is parallel to E, i.e., that α is a single real number, a scalar. It can happen that the two vectors are non-parallel; in that case the defining relation takes the form $p_i^\mathrm{ind} = \sum_{j=1}^3 \alpha_{ij} \, E_j,$ where $\mathbf{p}^\mathrm{ind} = \begin{pmatrix}p_1^\mathrm{ind}\\p_2^\mathrm{ind}\\p_3^\mathrm{ind}\end{pmatrix} \quad\hbox{and}\quad \mathbf{E} = \begin{pmatrix}E_1\\E_2\\E_3\end{pmatrix}.$ By writing these two vectors in component form we implicitly assumed the presence of a Cartesian coordinate system.
The polarizability α is expressed with respect to the very same coordinate system by a matrix, $\boldsymbol{\alpha} = \begin{pmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} \\ \alpha_{21} & \alpha_{22} & \alpha_{23} \\ \alpha_{31} & \alpha_{32} & \alpha_{33} \\ \end{pmatrix}\quad\hbox{and}\quad \begin{pmatrix}p_1^\mathrm{ind}\\p_2^\mathrm{ind}\\p_3^\mathrm{ind}\end{pmatrix} = \begin{pmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} \\ \alpha_{21} & \alpha_{22} & \alpha_{23} \\ \alpha_{31} & \alpha_{32} & \alpha_{33} \\ \end{pmatrix} \begin{pmatrix}E_1\\E_2\\E_3\end{pmatrix}.$ We know that the choice of another Cartesian basis (coordinate system) changes the column vectors p^ind and E, while the physics of the situation is unchanged: neither the electric field nor the induced dipole changes, only their representation by column vectors changes. Similarly, upon choice of another basis the polarizability α is represented by another 3×3 matrix. This means that α is a second-rank (because there are two indices) Cartesian tensor, the polarizability tensor of the charge-distribution. From the defining equation it follows that p has the dimension charge times distance, which in SI units is C m (coulomb times meter). In Gaussian units this is statC cm (statcoulomb times centimeter). An electric field has dimension voltage divided by distance, so that in SI units E has dimension V/m and in Gaussian units statV/cm. Hence the dimension of α is SI: C m^2 V^−1; Gaussian: statC cm^2 statV^−1 = cm^3, where we used that in Gaussian units the dimension of statV is equal to statC/cm (because of Coulomb's law). In Gaussian units the polarizability has dimension volume, and accordingly polarizability is often considered as a measure for the size of the charge-distribution (usually an atom or a molecule).
The conversion between the two units is: $\alpha_{\mathrm{SI}} = \tfrac{10}{c^2}\;\alpha_{\mathrm{Gaussian}} = 4\pi \epsilon_0\; 10^{-6}\; \alpha_{\mathrm{Gaussian}},$ where c is the speed of light (≈ 3×10^8 m/s), 4πε_0 = 10^7/c^2 (see electric constant) and the suffix on the symbol α indicates the unit in which the polarizability is expressed. Sometimes one defines the polarizability in SI units by the equation $\mathbf{p} \equiv 4\pi \epsilon_0\; \alpha'_\mathrm{SI}\; \mathbf{E}.$ This definition has the advantage that α′_SI has dimension volume (m^3). Clearly $\alpha'_\mathrm{SI} = 10^{-6} \, \alpha_{\mathrm{Gaussian}},$ where the power of ten is due to converting from m^3 to cm^3. Sometimes one also encounters the definition $\mathbf{p} \equiv \epsilon_0\; \alpha''_\mathrm{SI}\; \mathbf{E},$ which gives a polarizability α″ with dimension volume and a factor 4π larger than α′. Energy The energy of a dipole in an infinitesimal field is given by $dU = - \mathbf{p}\cdot \mathrm{d}\mathbf{E} = -(\mathbf{p}^\mathrm{perm} + \mathbf{p}^\mathrm{ind})\cdot\mathrm{d}\mathbf{E},$ where the dot indicates a dot product between the vectors.
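The identity 4πε_0 = 10^7/c^2 quoted above is easy to sanity-check numerically; in this sketch the CODATA values of c and ε_0 are inputs I supply, not figures from the text:

```python
import math

c = 299_792_458            # speed of light, m/s (exact)
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m (CODATA 2018)

lhs = 4 * math.pi * eps0
rhs = 1e7 / c**2

# Agreement is limited only by the precision of the eps0 value itself.
assert abs(lhs - rhs) / rhs < 1e-8
print(lhs, rhs)
```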
Integration to finite E gives $U = - \int_0^{\mathbf{E}} \mathbf{p}\cdot \mathrm{d}\mathbf{E} = -\mathbf{p}^\mathrm{perm} \cdot\mathbf{E} - \frac{1}{2} \alpha \mathbf{E} \cdot\mathbf{E} \equiv U^\mathrm{perm} + U^\mathrm{ind}.$ For a non-isotropic polarizability the second term becomes, in three different but fully equivalent notations, $U^\mathrm{ind} \equiv -\frac{1}{2} \sum_{i,j=1}^3 E_i \alpha_{ij} E_j = -\frac{1}{2}\begin{pmatrix} E_1 & E_2 & E_3 \end{pmatrix} \begin{pmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} \\ \alpha_{21} & \alpha_{22} & \alpha_{23} \\ \alpha_{31} & \alpha_{32} & \alpha_{33} \\ \end{pmatrix} \begin{pmatrix} E_1 \\ E_2 \\ E_3 \end{pmatrix} = -\frac{1}{2}\;\mathbf{E}^\mathrm{T} \;\boldsymbol{\alpha}\; \mathbf{E}.$ Quantum mechanical expression Classically, electric charge distributions, such as atoms and molecules, were known to exist, but the classical Maxwell theory could not explain their stability. The empirically known polarizability was likewise unexplainable. This changed after the advent of quantum mechanics. By means of the quantum mechanical technique of perturbation theory one can derive an expression for the induction energy U^ind. One introduces a perturbation operator for a system of N particles: $V = - \mathbf{E}\cdot \left( \sum_{k=1}^N q_k \mathbf{r}_k \right) \equiv -\mathbf{E}\cdot\boldsymbol{\mu} = - \sum_{i=1}^3 E_i \mu_i ,$ where q_k is the charge of the kth particle and r_k its position vector (expressed with respect to some Cartesian coordinate system). Clearly, the dipole operator is defined by $\boldsymbol{\mu} \equiv \sum_{k=1}^N q_k \mathbf{r}_k .$ In perturbation theory one assumes that the unperturbed (without external field) Schrödinger equations are solved: $H^{(0)} \; \Phi_n = \mathcal{E}_n \Phi_n, \quad n=0,1,\ldots, \quad\hbox{and}\quad \mathcal{E}_0 < \mathcal{E}_1 < \mathcal{E}_2 < \ldots$ That is, we assume that all states $\Phi_n\,$ and corresponding energies $\mathcal{E}_n$ are known.
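The quadratic form U^ind = −(1/2) EᵀαE above is a short computation. This sketch uses a made-up symmetric polarizability tensor and field, purely for illustration, and checks that the summation form and the matrix form agree:

```python
# Made-up symmetric 3x3 polarizability tensor and field (illustrative only).
alpha = [[2.0, 0.1, 0.0],
         [0.1, 1.5, 0.2],
         [0.0, 0.2, 1.0]]
E = [0.5, -1.0, 2.0]

# Summation form: U_ind = -1/2 * sum_ij E_i alpha_ij E_j
U_sum = -0.5 * sum(E[i] * alpha[i][j] * E[j]
                   for i in range(3) for j in range(3))

# Matrix form: U_ind = -1/2 * E^T (alpha E)
alpha_E = [sum(alpha[i][j] * E[j] for j in range(3)) for i in range(3)]
U_mat = -0.5 * sum(E[i] * alpha_E[i] for i in range(3))

assert abs(U_sum - U_mat) < 1e-12
print(U_sum)  # -2.55 for these illustrative numbers; negative, as an induction energy should be
```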
Further it is assumed that the states constitute an orthonormal basis for the vector space they belong to. The second-order perturbed energy is $U^{(2)} = \sum_{n>0} \frac{ \langle \Phi_0 | V | \Phi_n\rangle \langle \Phi_n | V | \Phi_0\rangle}{\mathcal{E}_0 - \mathcal{E}_n} = \sum_{i,j=1}^3 E_i E_j \sum_{n>0} \frac{ \langle \Phi_0 | \mu_i | \Phi_n\rangle \langle \Phi_n | \mu_j | \Phi_0\rangle}{\mathcal{E}_0 - \mathcal{E}_n}.$ Comparing the second-order energy U^(2) with the induction energy U^ind gives a quantum mechanical expression for the polarizability tensor: $\alpha_{ij} = 2 \sum_{n>0} \frac{ \langle \Phi_0 | \mu_i | \Phi_n\rangle \langle \Phi_n | \mu_j | \Phi_0\rangle}{\mathcal{E}_n - \mathcal{E}_0} .$ Frequency-dependent polarizability When a charge-distribution is hit by a monochromatic electromagnetic wave with electric component E cos ωt, the polarizability becomes a function of the angular frequency: $\boldsymbol{\alpha}(\omega) \quad\hbox{with}\quad \omega = 2\pi\nu = kc,$ where ν is the frequency, k the modulus of the wave vector and c the speed of light. The interaction of the wave with the charge distribution is described by the quantum mechanical operator $V(t) = -\mathbf{E}\cdot\boldsymbol{\mu} \; \cos\omega t,$ where the dipole operator μ is defined above.
Time-dependent perturbation theory leads to the following expression: $\alpha_{ij}(\omega) = \sum_{n>0} \left[ \frac{ \langle \Phi_0 | \mu_i | \Phi_n\rangle \langle \Phi_n | \mu_j | \Phi_0\rangle}{\Delta\mathcal{E}_n - \hbar\omega} + \frac{ \langle \Phi_0 | \mu_i | \Phi_n\rangle \langle \Phi_n | \mu_j | \Phi_0\rangle}{\Delta\mathcal{E}_n + \hbar\omega} \right] = \sum_{n>0} \frac{2\,\Delta\mathcal{E}_n\, \langle \Phi_0 | \mu_i | \Phi_n\rangle \langle \Phi_n | \mu_j | \Phi_0\rangle}{\Delta\mathcal{E}_n^2 - (\hbar\omega)^2}, \quad\hbox{with}\quad \Delta\mathcal{E}_n \equiv \mathcal{E}_n - \mathcal{E}_0.$ The quantity |α(ω)|^2 is proportional to the cross section for elastic light scattering (Rayleigh scattering), and with a small modification it also gives the cross section for inelastic light scattering (Raman scattering). The index of refraction n of a charge-distribution is related by the Lorentz-Lorenz relation to its frequency-dependent polarizability α(ω), and hence it follows that n is a function of ω. This leads to the phenomenon of dispersion of light (occurrence of rainbows). The function α(iω) of imaginary frequency gives rise to one of the components of intermolecular forces, namely dispersion (London) forces.
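As a sketch of the sum-over-states formulas, consider a hypothetical two-level system (the energy gap and transition moment below are invented numbers, in units with ħ = 1); the frequency-dependent expression reduces to the static one at ω = 0:

```python
# Hypothetical two-level system: a single excited state above the ground state.
dE = 2.0     # excitation energy E_1 - E_0 (invented; hbar = 1)
mu01 = 0.5   # transition dipole matrix element <0|mu|1> (invented)

def alpha_static():
    # alpha = 2 |<0|mu|1>|^2 / (E_1 - E_0), the static sum-over-states formula
    return 2 * mu01**2 / dE

def alpha_dynamic(omega):
    # alpha(omega) = 2 dE |<0|mu|1>|^2 / (dE^2 - omega^2)
    return 2 * dE * mu01**2 / (dE**2 - omega**2)

assert abs(alpha_dynamic(0.0) - alpha_static()) < 1e-12
print(alpha_static())      # 0.25
print(alpha_dynamic(1.0))  # larger: enhanced below the resonance at omega = dE
```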
{"url":"http://en.citizendium.org/wiki?title=Polarizability&diff=100415989&oldid=prev","timestamp":"2014-04-24T00:09:52Z","content_type":null,"content_length":"46056","record_id":"<urn:uuid:4fd0bb09-5fb0-4a3d-b42f-f77594bdf151>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
Forest Knolls Algebra Tutor ...I hadn't taken a math class in over 10 years and he was able to refresh my memory of important concepts and equations. He even lent me a GRE prep book to use in conjunction with the one I had purchased. With his assistance I was able to develop a successful test day strategy and my performance ... 41 Subjects: including algebra 1, algebra 2, calculus, geometry ...I am a certified College Reading & Learning Association (CRLA) tutor with a breadth of experience with high school through college-aged students. I developed The Math Cheat Sheet for Apple devices to help students with common equations and formulas needed in algebra, geometry, trig, and calculus. I’ve also completed contract work as a solution author for math textbooks. 4 Subjects: including algebra 1, algebra 2, calculus, precalculus ...I hold an M.S. in math and a Ph.D. in aerospace engineering from Stanford University. I can help you in upper level high school and college level math as well as algebra, precalculus and SAT math prep. I have taught math at the high school and college level. 7 Subjects: including algebra 1, algebra 2, calculus, SAT math ...I am a credentialed teacher in the state of CA. I have been substituting for 20 years. I also teach homeschool and hospital students throughout the district and tutor privately. 6 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...I can use what I learned to help others, and I often learn more as I teach. Last summer I taught pre-algebra to middle school students, and this summer I taught general math to elementary school students. I like working with kids since they are energetic. 12 Subjects: including algebra 2, algebra 1, statistics, calculus
{"url":"http://www.purplemath.com/Forest_Knolls_Algebra_tutors.php","timestamp":"2014-04-18T06:19:55Z","content_type":null,"content_length":"23947","record_id":"<urn:uuid:c3be655f-f33b-4092-8009-4923eec0984c>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
Minimizing problem. December 12th 2012, 09:36 AM Minimizing problem. Well I'll write my problem down. In my discrete maths homework I had to find a minimal DNF through a Karnaugh map and got: -x1-x2 v -x3x4 v -x1x3-x4 v x1x2x4 ("-" in front of x means inversion). Then I needed to find the CNF via McCluskey's method, which gave: (-x2 v x3 v x4)&(x1 v -x3 v -x4)&(x2 v -x3 v -x4)&(-x1 v x4). The next task was to transform the CNF from McCluskey's method into a DNF. After I did that (lots of work opening the brackets) I got: -x1-x2-x3 v -x1x3-x4 v x1x2x4 v -x1-x2-x4 v -x3x4. Then I did the prime implicant chart and finished up with this: -x1-x2-x3 v -x1x3-x4 v x1x2x4 v -x3x4. The task was that the DNF I found from the Karnaugh map and the one I got from converting the CNF to a DNF must be logically equal, but however many times I tried, they weren't, and one implicant wasn't needed. They are almost the same; only the -x1-x2 from the Karnaugh map came out of the implicant chart as -x1-x2-x3, and I have no idea what to do. Done it through 3 times already and checked for errors. Any help? I made a truth table, and yes, 1 of 16 was different: f(0011), which is 3 in decimal and is in the "-" zone. Lots of thanks, if someone can help me :)!
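The mismatch the poster found can be checked mechanically by comparing the truth tables of the two DNFs; this sketch (variable and function names are mine) recovers exactly the single disagreement at f(0011):

```python
from itertools import product

# DNF read off the Karnaugh map.
def f_karnaugh(x1, x2, x3, x4):
    return ((not x1 and not x2) or (not x3 and x4)
            or (not x1 and x3 and not x4) or (x1 and x2 and x4))

# DNF obtained from the McCluskey CNF after the prime implicant chart.
def f_chart(x1, x2, x3, x4):
    return ((not x1 and not x2 and not x3) or (not x1 and x3 and not x4)
            or (x1 and x2 and x4) or (not x3 and x4))

diff = [bits for bits in product([0, 1], repeat=4)
        if bool(f_karnaugh(*bits)) != bool(f_chart(*bits))]
print(diff)  # [(0, 0, 1, 1)] -- the single disagreement, i.e. f(0011)
```

The culprit is the implicant -x1-x2 versus -x1-x2-x3: they differ only where x3 = 1, and of those assignments only (0,0,1,1) is not covered by another term.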
{"url":"http://mathhelpforum.com/discrete-math/209677-minimizing-problem-print.html","timestamp":"2014-04-18T20:59:22Z","content_type":null,"content_length":"4793","record_id":"<urn:uuid:32cc1fdd-c34e-4e87-a3ff-8c0711f1a67b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
Gainesville, Florida B.S., Mathematics, Stanford University Research interests At the moment, I am interested in both algebraic geometry and low dimensional topology/geometry. Aspects of algebraic number theory that interest me in particular are results that draw parallels between algebraic and analytic structures. In low dimensional topology and geometry, I am interested in topological computation, knot theory and geometric structures on low dimensional manifolds. Why did you choose Boston College? I chose Boston College to be in a tightly knit department and to be part of a close community of graduate students. I also chose Boston College for the location; the Boston area is perhaps the best place to pursue mathematical research and pedagogy.
{"url":"http://www.bc.edu/content/dam/files/schools/cas/slideshows/mathphdfall10/5.html","timestamp":"2014-04-19T12:33:54Z","content_type":null,"content_length":"2522","record_id":"<urn:uuid:8753968d-36aa-4a40-bd4a-4f230f31441c>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
De Forest Math Tutor Find a De Forest Math Tutor ...My goal is to make this an enjoyable experience for my students, working to help them reach their full potential. I have the ability to make what seems difficult and complex easily understood. My experience includes tutoring at the Omega School in Madison. 24 Subjects: including calculus, GRE, SAT math, algebra 1 ...I also will provide negotiated discounts for sessions/time spent over the phone/internet. This can be particularly productive for writing, math or science assignments to be reviewed ahead of time, or proofread, so that any face-to-face time can be spent on helping you understand how and why the ... 65 Subjects: including prealgebra, ACT Math, probability, differential equations ...Lessons will be varied to keep students interested, engaged, and growing week after week. This is coupled with student feedback and assessment in order to help guarantee a worthwhile academic experience. I look forward to hearing from you and working as a team to help you achieve success!I have... 52 Subjects: including algebra 1, American history, biology, vocabulary ...I really enjoy the process of learning and helping people work through difficult problems. I have considerable experience tutoring and teaching, from elementary students to professional workshops. My focuses are math and science but I also have experience with ACT/SAT test prep. 13 Subjects: including SAT math, algebra 1, algebra 2, ACT Math ...I believe this is when one-on-one tutoring can be most effective - when a syllabus can be ignored and the student gets to focus on what they most want or need to learn. I'm a graduate of The Ohio State University with a Bachelor's in Mechanical Engineering and a Master's in Industrial Engineerin... 
30 Subjects: including algebra 1, algebra 2, American history, vocabulary Nearby Cities With Math Tutor Arlington, WI Math Tutors Cross Plains, WI Math Tutors Dane Math Tutors Deerfield, WI Math Tutors Doylestown, WI Math Tutors Fall River, WI Math Tutors Lodi, WI Math Tutors Maple Bluff, WI Math Tutors Marshall, WI Math Tutors Merrimac, WI Math Tutors Poynette Math Tutors Rio, WI Math Tutors Springfield, WI Math Tutors Windsor, WI Math Tutors York, WI Math Tutors
{"url":"http://www.purplemath.com/De_Forest_Math_tutors.php","timestamp":"2014-04-19T09:58:48Z","content_type":null,"content_length":"23668","record_id":"<urn:uuid:bf542bf4-4158-42bd-a87a-67bb6564bbca>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
Villas Del Parque, PR Math Tutor Find a Villas Del Parque, PR Math Tutor ...I have been and still am the unofficial tutor for my own friends who are studying for various standardized tests, especially in the area of Math. I am most passionate about you as the student understanding the concepts and feeling competent in any area of difficulty or significant challenge. I love Math, History, and Writing and work diligently to simplify those concepts for students. 23 Subjects: including prealgebra, reading, writing, ASVAB ...I am an effective tutor because of my skill in assessing my student's needs, but also because of my ability to empathize with young learners. I believe there is always an unseen angle that each learner can use to make the subject material more accessible and interesting. When tutoring, I look f... 14 Subjects: including trigonometry, probability, algebra 1, algebra 2 ...So although it is not certified tutoring experience, my fellow classmates' consistent appreciation and gratitude for my help makes me confident in my ability to help those who are struggling. My tutoring style is committed, patient, and thorough. I believe that it is not always the student's fault for their lack of understanding and/or confusion. 10 Subjects: including trigonometry, linear algebra, probability, algebra 1 ...Study skills are critical to success in school and they are not adequately taught early in a student's school career. This leaves students to fend for themselves until later in their school life when the skill is expected, and that often puts the student in a difficult come-from-behind position.
These are skills that need to be taught, learned, and mastered early in a student’s school 20 Subjects: including logic, English, reading, writing ...I can tutor any subject with the given class book and notes, and I have extended experience in schooling and testing skills, I know what gets an A and what gets a C, I intend to begin with visible examples and show how math moves from step one to the final answer. Being bilingual I can reach stu... 8 Subjects: including algebra 2, geometry, precalculus, trigonometry Related Villas Del Parque, PR Tutors Villas Del Parque, PR Accounting Tutors Villas Del Parque, PR ACT Tutors Villas Del Parque, PR Algebra Tutors Villas Del Parque, PR Algebra 2 Tutors Villas Del Parque, PR Calculus Tutors Villas Del Parque, PR Geometry Tutors Villas Del Parque, PR Math Tutors Villas Del Parque, PR Prealgebra Tutors Villas Del Parque, PR Precalculus Tutors Villas Del Parque, PR SAT Tutors Villas Del Parque, PR SAT Math Tutors Villas Del Parque, PR Science Tutors Villas Del Parque, PR Statistics Tutors Villas Del Parque, PR Trigonometry Tutors Nearby Cities With Math Tutor 100 Palms, CA Math Tutors Balboa Island, CA Math Tutors Balboa, CA Math Tutors Bombay Beach, CA Math Tutors Desert Shores, CA Math Tutors Holcomb Village, CA Math Tutors North Shore, CA Math Tutors One Hundred Palms, CA Math Tutors Pinyon Pines, CA Math Tutors San Luis Rey Math Tutors South Laguna, CA Math Tutors Torres Martinez Indian Reser, CA Math Tutors Valerie, CA Math Tutors Vista Del Lago, PR Math Tutors Vista Santa Rosa, CA Math Tutors
{"url":"http://www.purplemath.com/Villas_Del_Parque_PR_Math_tutors.php","timestamp":"2014-04-16T16:18:21Z","content_type":null,"content_length":"24773","record_id":"<urn:uuid:c146e00f-c0da-42e0-b4ef-a72c4c8642c0>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help Is 2008! divisible by $9^{400}$? I figure this uses mods somewhere......can someone show me a nice place to start? right, I think I getcha! $\left[\frac{2008}{3} \right]+\left[ \frac{2008}{3^2} \right]+\left[ \frac{2008}{3^3}\right]+\left[ \frac{2008}{3^4} \right]+\left[ \frac{2008}{3^5} \right]+\left[ \frac {2008}{3^6} \right]+\left[ \frac{2008}{3^7} \right]+\left[ \frac{2008}{3^8} \right]+....$ $=669+223+74+24+8+2+0+....>800$ (Whose formula is this known as? Is it De pognac's?) So it is possible to have 800 3's in the prime decomposition of 2008! Thanks Moo P.S: How do you get those really cool spoiler windows to appear? Last edited by Showcase_22; June 1st 2009 at 12:31 PM. Yes Whew...I struggled with finding these threads : http://www.mathhelpforum.com/math-he...orization.html http://www.mathhelpforum.com/math-he...ation-2-a.html They may give you further insight on the formula (Whose formula is this known as? Is it De pognac's?) I don't know the name But I looked for de Pognac and didn't find anything significant P.S: How do you get those really cool spoiler windows to appear? With the [spoiler][/spoiler] tags okay, i'll read through those threads. Meanwhile, I found out whose formula this is: De Polignac's formula - Wikipedia, the free encyclopedia
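The floor-sum used above (De Polignac's, i.e. Legendre's, formula) is easy to run; this sketch computes the exponent of 3 in 2008! and confirms it exceeds the 800 needed for 9^400 = 3^800 to divide 2008!:

```python
def prime_exponent_in_factorial(n, p):
    """Exponent of the prime p in n! (De Polignac / Legendre formula)."""
    total, power = 0, p
    while power <= n:
        total += n // power  # floor(n / p^k)
        power *= p
    return total

e3 = prime_exponent_in_factorial(2008, 3)
print(e3)         # 669 + 223 + 74 + 24 + 8 + 2 = 1000
assert e3 >= 800  # so 3^800 = 9^400 divides 2008!
```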
{"url":"http://mathhelpforum.com/number-theory/91412-mods.html","timestamp":"2014-04-20T14:53:08Z","content_type":null,"content_length":"47995","record_id":"<urn:uuid:69cf3668-8c14-43c2-be64-c6dc17c1bf4e>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
Applications of string topology structure Chas and Sullivan constructed in 1999 a Batalin-Vilkovisky algebra structure on the shifted homology of the loop space of a manifold: $\mathbb{H}_*(LM) := H_{*+d}(LM;\mathbb{Q})$. This structure includes a product which combines the intersection product and Pontryagin product and a BV operator $\Delta: \mathbb{H}_*(LM) \to \mathbb{H}_{*+1}(LM)$. I was wondering about the applications of this structure. Has it ever been used to prove theorems in other parts of mathematics? A more concrete question is the following: Usually, considering a more complicated structure on topological invariants of a space allows you to prove certain non-existence results. For example, the cup product in cohomology allows you to distinguish between $S^2 \vee S^1 \vee S^1$ and $T^2$. Is there an example of this type for string topology? at.algebraic-topology string-topology gn.general-topology I fixed your latex. You meant to use the wedge product (latex command "\vee"), not the smash product (latex command "\wedge"). Also there was some sort of formatting bug which cut off the "T^2". I tried a few things and couldn't fix it directly, so I just added a line break by hand. I hope these fixes are okay. – Chris Schommer-Pries Apr 1 '10 at 12:43 6 Answers Hossein Abbaspour gave an interesting connection between 3-manifold topology and the string topology algebraic structure in arXiv:0310112. The map $M \to LM$ given by sending a point $x$ to the constant loop at $x$ allows one to split $\mathbb{H}_*(LM)$ as $H_*(M) \oplus A_M$. He showed essentially that the restriction of the string product to the $A_M$ summand is nontrivial if and only if $M$ is hyperbolic. There are some technical details in the statements in his paper, but it was written pre-Perelman and I believe the statements can be made a bit more elegant in light of the Geometrization Theorem.
Philosophically, Sullivan has said that his goal in inventing string topology was to try to find new invariants of smooth structures on manifolds. His original idea was that if you have to use the smooth structure to smoothly put chains into transversal positions to intersect them, then you might hope that the answer will depend on the smooth structure. Unfortunately, we now know that the string topology BV algebra depends only on the underlying homotopy type of the manifold (there are now quite a few different proofs of various parts of this statement).

The string topology BV algebra is only a piece of a potentially much richer algebraic structure. Roughly speaking, $\mathbb{H}_*(LM)$ is a homological conformal field theory. This was believed to be true for quite some time, but it took a while before it was finally produced by Veronique Godin in arXiv:0711.4859. She constructed an action of the PROP made from the homology of moduli spaces of Riemann surfaces with boundary. Restricting this action to pairs of pants recovers the original Chas-Sullivan structure. Unfortunately, for degree reasons, nearly all of the higher operations vanish. In particular, any operation given by a class in the Harer stable range of the homology of the moduli space must act by zero. Hirotaka Tamanoi has a paper that spells out the details, but it is nothing deep. Furthermore, it seems that the higher operations are homotopy invariant as well. For instance, Lurie gets this as a corollary of his work on the classification of topological field theories.

Last I heard, Sullivan, ever the optimist, believes that there is still hope for string topology to detect smooth structures. He says that one should be able to extend from the moduli spaces of Riemann surfaces to a certain piece of the boundary of the Deligne-Mumford compactification.
I've heard that the partial compactification here is meant to be the one where nodes are allowed to form, but only so long as the nodes collectively do not separate the incoming boundary components from the outgoing boundary. Sullivan now has some reasons to hope that operations coming from homology classes related to the boundary of these moduli spaces might see some information about the underlying smooth structure of the manifold.

My two cents' worth: one of the original motivations of Sullivan in creating string topology was the hope that the string topology operations would detect the smooth structure of the manifold. We now know this to be false (the homology-level operations are oriented homotopy invariant). The last time I spoke with Dennis about this, he still held out the hope that the chain-level operations would detect the smooth structure. But it seems to me that Lurie's work (if I properly understand it) implies that the chain-level operations are also homotopy invariant. – John Klein Jan 24 '11 at 3:51

Kallel and Salvatore use the string product to help compute the homology of a mapping space in this paper.

Let's consider $SU(3)$ and $S^3\times S^5$. These two $8$-manifolds are not homotopy equivalent. You can't see this by looking at their cohomology groups or cohomology rings, but you can see it using the action of the Steenrod algebra: $Sq^2\colon H^3(M;\mathbb{Z}/2\mathbb{Z})\to H^5(M;\mathbb{Z}/2\mathbb{Z})$ is zero for $M=S^3\times S^5$ and nonzero for $M=SU(3)$. We could also use string topology to distinguish between $SU(3)$ and $S^3\times S^5$. Here's how. First, look up the Batalin-Vilkovisky algebras: Tamanoi computed the result for $SU(n)$ and Menichi computed the results for spheres. It's enough to use $\mathbb{Z}/2\mathbb{Z}$ coefficients. Second, compare the two.
You can see that they are isomorphic (boo), but that no such isomorphism would preserve the "constant loop summand" $\mathbb{H}_\ast(M)$ that sits inside the BV algebra. So in particular the isomorphism cannot come from a homotopy equivalence between the two manifolds. Of course we already knew that these spaces were not homotopy equivalent, and it would have been much nicer if the BV algebras were not isomorphic at all. In general, computing the BV algebras is rather difficult, and it's probably not an efficient way to distinguish between manifolds.

The example you have mentioned works, in the general setting and the string topology setting, as there is a rational homotopy equivalence between $S^3\times S^5$ and $SU(3)$ but no homotopy equivalence. The reason that the fundamental class (the identity of the loop product) is not preserved is basically that the best possible map $f:S^3\times S^5\to SU(3)$ maps $[S^3]$ to a generator of $H_3(SU(3))$ but $[S^5]$ gets mapped to twice a generator of $H_5(SU(3))$. – Somnath Basu Mar 5 '12 at 3:04

As a shameless plug, I may say that in my thesis we do show that string topology, interpreted in a broader context, is NOT a homotopy invariant. What we do is the following: instead of looking at loops in $M$, we think of them as arcs in $M\times M$ with boundary in the diagonal $M$ that sits inside $M\times M$. Now we look at the space $\mathcal{S}(M)$ of such arcs which, when they intersect the diagonal at intermediate stages, do so transversely. One can then define a suitable coalgebra structure which is NOT a homotopy invariant. In particular, this structure distinguishes the Lens spaces $L(7,1)$ and $L(7,2)$, which are homotopy equivalent but NOT homeomorphic.

Of course, this new structure is not related to the loop product or the BV operator as per the question asked. Moreover, this structure is defined on a much smaller space than $LM$.
However, if you take the point of view that string topology is broadly the study of loops in a manifold, then this is a new and interesting algebraic structure.

Even though one may argue that it is not stepping too much outside the area, in a sense, you might want to look at the paper of Xiaojun Chen and Wee Liang Gan, http://arxiv.org/abs/0804.4748.

Neat paper! I don't know why I hadn't looked at it before now. – GS May 12 '10 at 21:21

The string topology of a manifold is isomorphic to the Hamiltonian Floer homology of the cotangent bundle of the manifold.
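The Steenrod-square argument in the $SU(3)$ versus $S^3\times S^5$ answer above rests on two standard computations, recorded here for convenience:

```latex
H^*(SU(3);\mathbb{Z}/2) \;\cong\; \Lambda(x_3, x_5), \qquad Sq^2 x_3 = x_5,
\\[4pt]
H^*(S^3 \times S^5;\mathbb{Z}/2) \;\cong\; \Lambda(y_3, y_5), \qquad Sq^2 y_3 = 0.
```

The second vanishing holds because $y_3$ is pulled back from $S^3$, where $Sq^2$ of a degree-3 class lands in $H^5(S^3)=0$, and this is carried to the product by naturality.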
The first snow of the season is always a treat. When the flakes start flying, I want to go out and play. I even want to go out and shovel. Here in the Boston area our first storm left a particularly magical landscape. Glistening flakes fell steadily all night, with little wind, so that every fencepost was topped by a fluffy white pompom in the morning. The decorations were remarkably regular and symmetrical. In the photo below I have carved a cross-section out of the snow lying atop the back-porch railing, which is 3½ inches wide. The profile looks like a fairly good approximation to a semicircle. (Below is a red semicircle fitted to the image by eye.)

Later, I began wondering why the snow would assume just this shape. Is it an equilibrium form that would be maintained indefinitely if the snowfall continued? Is there some deep and universal reason for the semicircular form, or is it just an accident, a matter of contingency, something peculiar to this snowfall but nothing to generalize about. Semicircular distributions are not all that common in nature. They turn up in the distribution of eigenvalues of certain random matrices, but that seems a pretty far-fetched explanation for the lumps of snow on my back porch. I would expect the profile to vary somewhat with the properties of the snow—moisture content, size and form of the flakes, etc.—but I'm not at all sure just how it would vary.

In the past 20 years there's been lots written about the self-organized shape of sand piles, which are essentially conical in profile. Clearly, snow is not sand. It's fluffier and stickier. A flake doesn't necessarily have to be supported from below. It could adhere to a surface, perhaps with probability given by some function of the cosine of the slope…. That's as far as I'd gotten when the next storm hit.

Perhaps I should note that my boyish joy at the first snowfall tends to dissipate as the winter wears on.

3 Responses to Snowballs

1. It looks more like a normal distribution to me. (with the tail cut off)

2. The top of the snow formation in the first picture goes beyond the semicircle (the shape is more pointy than round). So aside from the truncated normal distribution, another candidate would be (the top half of) a sine distribution.

3. When I got home, I had triangular prisms of snow on my steps, and then I promptly stomped them.

This entry was posted in mathematics, physics.
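The random-matrix aside in the post can be checked numerically. A minimal sketch (the ensemble and scaling here are my choices, not the post's): eigenvalues of a large random symmetric Gaussian matrix, divided by the square root of its size, fill out Wigner's semicircle on [-2, 2].

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
a = rng.standard_normal((n, n))
h = (a + a.T) / np.sqrt(2)                # random real symmetric (GOE-like) matrix
eig = np.linalg.eigvalsh(h) / np.sqrt(n)  # scaled so the spectrum fills [-2, 2]

# Compare a histogram of the eigenvalues with the semicircle density
# rho(x) = sqrt(4 - x^2) / (2*pi) on [-2, 2].
hist, edges = np.histogram(eig, bins=40, range=(-2.2, 2.2), density=True)
mid = (edges[:-1] + edges[1:]) / 2
rho = np.sqrt(np.clip(4 - mid**2, 0, None)) / (2 * np.pi)
print(f"largest |eigenvalue|: {np.abs(eig).max():.2f}")   # close to 2
print(f"max deviation from semicircle: {np.abs(hist - rho).max():.2f}")
```

The deviation shrinks as n grows; the snow-pile question, of course, remains open.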
Re: st: xt: unit-specific trends

From: László Sándor <sandorl@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: xt: unit-specific trends
Date: Tue, 24 Apr 2012 10:45:41 -0400

Getting back to this: I must thank Bill for his explanation, clear as always. Yet I want to point out what I learnt from this: all of us (a)do-file authors should be careful with by-loops. When we use this device to loop over a few values, there is no problem. Yet if we use it for some panel-like setting, it can be "treacherous." If there is no way out of this but Mata, at least we should be aware that commands like -egen- should be high on our priority list to rewrite in Mata. In my experience, people use -egen- to generate (many, many) variables in a panel, or "worse," leave-out means and the like. There the loops are definitely on the order of N, which might be a high price in large datasets.

On Fri, Apr 20, 2012 at 12:30 PM, William Gould, StataCorp LP <wgould@stata.com> wrote:
> Laszlo sandorl@gmail.com wrote,
>
> > I am just a bit surprised that the "if" checks slow down operations
> > this much. Esp. by-loops. [...]
> > But exactly these are the sorts of trade-offs that you are experts in.
>
> I would like to show Laszlo and the many others who I suspect would
> express the same sentiment that they should not be surprised.
>
> Let's imagine that we want to perform operations on 20 observations
> of a 200,000-observation dataset, the 20 observations selected by
> -if-.
>
> Let's analyze execution time.
> As a first approximation, let's assume the time necessary to perform
> a linear operation on a set of observations is
>
>         T = t_f + t_o*N
>
> By a linear operation, I mean an operation whose execution time is
> linear in the number of observations. -generate- and -replace- are
> examples of linear operations. -sort- is an example of a non-linear
> operation.
>
> In the above formula, t_f is the time to parse the user's input and
> set up the problem, which is to say, t_f is small. t_o is the time to
> perform the operation on a single observation, which is to say, t_o is
> small, too. Obviously different operations require different amounts
> of time, but this is an approximation, so let's just assume t_o is the
> same across operations. We'll speculate later about the effects of
> the assumption on our results.
>
> We are going to compare the total time it takes to operate on 20
> observations in a 20-observation dataset,
>
>         T_0 = t_f + 20*t_o
>
> and the time it takes to operate on 20 observations in a
> 200,000-observation dataset, such as a -generate- statement with an
> additional -if-. The total time for that would be
>
>         T_1 = t_f + 20*t_o + 200,000*t_o
>
> For small datasets, it is approximately the case that t_f = t_o*N --
> the time to parse and set up the problem is about equal to performing
> the work of the problem itself. In that case, the equations can be
> rewritten as
>
>         T_0 = (20+1)*t_o
>         T_1 = (20+1)*t_o + 200,000*t_o
>
> The ratio of T_1 to T_0 is then
>
>         T_1     (20+1)*t_o + 200,000*t_o
>        ----- = --------------------------
>         T_0           (20+1)*t_o
>
>             = 1 + 200,000/(20+1)
>
>             = (approximately) 9,525
>
> Many of you -- perhaps Laszlo among them -- think that we "experts" at
> StataCorp can achieve results "mere" users cannot. Sometimes,
> however, being an expert is about knowing when to give up. At
> StataCorp, we make calculations like the above and then check run
> times, and that's one way that we determine which problems deserve
> more work.
> In the above calculation, we assumed all operations take roughly the
> same time. In particular, in
>
>         . generate x = <exp1> if <exp2>
>
> we assumed that <exp1> takes the same amount of time as <exp2>.
> Clearly an <exp2> such as -if `touse'- is a light-weight. The ratio
> above might be better written by distinguishing between the execution
> times for <exp1> and <exp2>:
>
>         T_1     (20+1)*t_exp1 + 200,000*t_exp2
>        ----- = --------------------------------
>         T_0             (20+1)*t_exp1
>
>             = 1 + 200,000*(t_exp2)/(21*t_exp1)
>
> Actually, the ratio t_exp2/t_exp1 is probably much closer to 1
> than you expect, at least in interpretive languages like ado.
> Nonetheless, if it pleases you, substitute 1/2 for the ratio and get
> approximately T_1/T_0 = 4763.
>
> By the way, t_exp1 might be approximately equal to t_exp2 in
> interpretive languages, but in compiled languages like Mata,
> they can be whoppingly different. Had we been analyzing
> run times in compiled languages and you were bothered by the
> assumption that t_exp1 == t_exp2, you would have been right.
>
> Laszlo also wrote,
>
> > I would have guessed that the extra cost of not allowing re-sorting
> > would have justified a dramatic speedup of the -by- which is pretty
> > commonly used.
>
> The choice we made in this particular issue is something about which
> reasonable people can disagree. Let me outline our thinking in general.
>
> When we make such decisions, our view of ado-files is that
> ease-of-programming and likelihood-of-correctness trumps performance
> in most cases. I am not saying that ado-files perform poorly or that
> it is pure luck that they don't. We work to make them perform well,
> but when there is a tradeoff between speed of execution and ease of
> programming (which includes likelihood of correctness), we usually make
> the decision in favor of ease of programming.
>
> Simultaneously, we provide a second programming language, Mata,
> in which the trade-off is reversed.
> That does not mean Mata is better than ado. We at StataCorp write
> lots of ado code. We choose the language according to the problem. In
> some problems, there is little speed difference between Mata and ado
> because of the nature of the problem, so we choose ado. In other
> problems, there is a difference, but the speed really doesn't matter.
> We choose ado. In still other problems, there is a difference in speed,
> that does matter, and we choose Mata. There's one more case in which
> we choose Mata, which is when the problem is complex and the
> organizational aspects of Mata, such as structures and classes, make it
> easy for us to write readable code, meaning the code will require
> less debugging, and meaning the code will be more modifiable in the
> future.
>
> -- Bill
> wgould@stata.com
>
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
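Bill's back-of-the-envelope ratio is easy to reproduce; a sketch of the same arithmetic (the formula and the numbers come from his message, expressed in units of t_o, not from any Stata internals):

```python
def time_ratio(n_selected, n_total):
    """T_1/T_0 from the message: a linear command touching n_selected
    observations inside an n_total-observation dataset, versus the same
    command on a dataset holding only those n_selected observations.
    Uses the approximation t_f ~ t_o, so both times are in units of t_o."""
    t0 = n_selected + 1            # T_0 = (20+1)*t_o
    t1 = n_selected + 1 + n_total  # the -if- test still scans every row
    return t1 / t0

print(round(time_ratio(20, 200_000)))  # 9525, matching the message
print(round(1 + 200_000 * 0.5 / 21))   # 4763, the t_exp2/t_exp1 = 1/2 variant
```

The point of the sketch is that the overhead is driven entirely by the n_total term: the -if- qualifier must be evaluated on every observation, however few it selects.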
Looking for someone to check my answer: Find an equation of the tangent line through the given point. \(x^2 y^3 + 15y = 34x\), \((3, 2)\)

\[\frac{ 34-2x }{ 2y+15 }\]

thats the answer i got.

well that's what i had for y'

You first need to use implicit differentiation to find an expression for \(\displaystyle\frac{dy}{dx}\). Then substitute x=3 and y=2 into that expression to get the slope of the tangent line at that point. So then you will know the slope of the tangent line and you also know it passes through the point (3,2) - use this information to calculate the equation of the tangent line.

was my y' equation correct?

it doesn't look correct to me - can you please list your steps so that I can help spot where you may have made a mistake?

\[x ^{2}y ^{2}+15y=34x\] \[2x*2yy'+15y'=34\] \[y'(2y+15)=34-2x\] \[y'=\frac{ 34-2x }{ 2y+15 }\]

I thought you had \(y^3\) in the equation listed in your question?

lol so it is.

\[y'=\frac{ 34-2x }{ 3y ^{2}+15 }\]

did i have it right the second time (right above what you just wrote)

sorry I meant:\[\frac{d}{dx}(x^2y^3)=(x^2)\frac{d}{dx}(y^3)+y^3\frac{d}{dx}(x^2)\]

you haven't used the chain rule correctly

I mean "product rule"

\[x ^{2}*3y ^{2}y'+y ^{3}2x+15y'=34\]

that is correct :)

okay so now i isolate y'

\[y'=\frac{ 34-2xy ^{3} }{ x ^{2}3y ^{2}+15 }\]

yup - now follow the other steps that I had listed above.

NOTE: we don't usually write an expression as \(x^23y^2\) - it is better to write it as \(3x^2y^2\)

the general rule of thumb is to write in this order: 1. Constants first 2. Then letters in alphabetical order

okay so now i sub in my points right?

so i had: -14/123

perfect! just a couple of more steps to go now :)

is the equation: \[y-2=-14/123x+42/123\]

yes - that looks correct. I wouldn't have separated the two constants here (the -2 and the 42/123)

you may also want to multiply both sides by 123 to remove the fractions from the final equation.

so how would you make it look?

ok, you got to this equation:\[y-2=-14/123x+42/123\] first add 2 to both sides to get:\[y=-14x/123 + 288/123\] then multiply both sides by 123 - what will you get then?

correct - but again, remember to write constants first - so 123y instead of y123

that's not really how you write the equation of a line though.

so I would write the final equation as:\[123y=288-14x\]

it is still an equation of a line. you can write it in "standard form" as follows:\[14x+123y=288\]

maybe you are only used to seeing it in the form: \(y=mx+c\)

One last advice before I leave - you wrote one of the terms in your original equation as: -14/123x this can sometimes be confused for: \[-\frac{14}{123x}\] so it is usually better to write it as:
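The algebra worked through in the thread can be double-checked symbolically; a quick sketch using sympy (not part of the original exchange):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y**3 + 15*y - 34*x     # the curve, written as f(x, y) = 0

# Implicit differentiation: dy/dx = -f_x / f_y
dydx = -sp.diff(f, x) / sp.diff(f, y)
slope = dydx.subs({x: 3, y: 2})
print(slope)                       # -14/123, the slope found in the thread

# Tangent line through (3, 2): y = slope*(x - 3) + 2.
# Plugging it into 14x + 123y should give the constant 288:
lhs = sp.expand(14*x + 123*(slope*(x - 3) + 2))
print(lhs)                         # 288, confirming 14x + 123y = 288
```

This confirms both the slope -14/123 and the standard form 14x + 123y = 288 reached at the end of the thread.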
Summary: ON THE EXISTENCE OF E0-SEMIGROUPS

Abstract. Product systems are the classifying structures for semigroups of endomorphisms of B(H), in that two E0-semigroups are cocycle conjugate iff their product systems are isomorphic. Thus it is important to know that every abstract product system is associated with an E0-semigroup. This was first proved more than fifteen years ago by rather indirect methods. Recently, Skeide has given a more direct proof. In this note we give yet another proof by a very simple construction.

1. Introduction, formulation of results

Product systems are the structures that classify E0-semigroups up to cocycle conjugacy, in that two E0-semigroups are cocycle conjugate iff their concrete product systems are isomorphic [Arv89]. Thus it is important to know that every abstract product system is associated with an E0-semigroup. There were two proofs of that fact [Arv90], [Lie03] (also see [Arv03]), both of which involved substantial analysis. In a recent paper, Michael Skeide [Ske06] gave a more direct proof. In this note we present a new and simpler method for constructing an E0-semigroup from a product system. Our terminology follows the monograph [Arv03]. Let E = {E(t) : t > 0} be a product system and choose a unit vector e ∈ E(1). e will be fixed
Waiting For December 13th

It is by now public that Rolf Heuer, the Director General of CERN, in announcing for December 13th two back-to-back talks by the CMS and ATLAS experiments on their Higgs search results with 2011 data, warned that the results might not be conclusive yet. Besides, nobody could really expect them to be, since the sensitivity expected by both ATLAS and CMS in the still-not-excluded region of the Higgs mass, with 5/fb of data per experiment and 7 TeV running conditions, ranges from 2 to 4 standard deviations in the rosiest circumstances.

Despite that, blogs around have raised expectations on the possibility that the Higgs boson might be spotted at less-than-observation-level significance in the whereabouts of 125 GeV. I say this without fear of being crucified by my LHC colleagues, since the discussion has been raging on many high-traffic sites for a while.

One way to look at it, while we wait for December 13th, is to go back to the results produced by the good old Tevatron experiments this summer. This is entirely independent information, so it gives us some extra discrimination power, so to speak, on where the Higgs might be hiding. So let us take another look at this plot:

In it, you see how a particular statistic (the log-likelihood ratio, LLR) extracted from the combined CDF+DZERO data (black curve) distributes along the Higgs mass hypotheses. This statistic discriminates the no-Higgs from the yes-Higgs hypothesis as powerfully as any other.
The green and yellow band ("brazil band") instead shows the 1-sigma and 2-sigma possible ranges of the value the statistic would take for no Higgs anywhere. Of course the graph should be read by first fixing a mass hypothesis, and then reading off the values along the corresponding vertical line. So we see that the Tevatron data does show to prefer the signal hypothesis to the no-signal hypothesis for masses between 125 and 130 GeV. Of course the discriminating power of the statistic is poor, the signal red dashed line lying just one-point-something sigma away from the dashed black one. There's more to say: the 125 to 130 GeV mass region is the one where the Tevatron's sensitivity is the worst possible, as indicated by the getting together of the two dashed curves... Still, the 1-sigmaish result is an indication, if you believe the Tevatron results to that level of detail. That is all for today - just a teaser, I guess. But expectations for December 13th grow, despite the caution warnings. We will see what happens... Stay tuned! Let me tell you why it is not possible for Higgs boson to be there, because there cannot be any fields in a realistic understanding of the natural world. Fields were devised in the times of Maxwell to comprehend pre quantum phenomena. Every event has to have a particle/wave explanation, no field would fill in the details where a postulation is weak. Anadish Kumar Pal (not verified) | 12/07/11 | 09:33 AM Vladimir Kalitvia... | 12/07/11 | 10:55 AM Hi Tommaso, very trivial (for you) question: could you explain the main reason why the sensitivity (also for ATLAS/CMS) is so bad in the low-mass region? I mean below ~130 GeV or so. Which kind of background is present there? Or thre are other experimental reasons? Anonymous (not verified) | 12/07/11 | 16:59 PM Hi Anon, very trivial answer. It is not that at low mass bad things happen. Only, at higher mass the WW and ZZ decay channels open up, making the Higgs search quite a lot easier. 
The cross section goes down as the mass increases, but it is not a decisive factor for the integrated luminosities we have. Tommaso Dorigo | 12/07/11 | 17:32 PM I think Mr. Heuer has been hired by a conglomerate of travel agencies. Flights into Europe around the season have been "slow sellers" lately. I mean, appart from panicking bankers. Anonymous (not verified) | 12/07/11 | 19:07 PM There is no need to travel - the seminar will be webcast: Wagon Lits (not verified) | 12/08/11 | 03:24 AM December 13 is a special day in Sweden, because everybody is celebrating Lucia, a napolitanian saint. If you think that is weird, consider that americans celebrate the greek saint Nicholas at Thomas Larsson (not verified) | 12/08/11 | 04:46 AM From Syracuse in Sicily, not from Naple. Anonymous (not verified) | 12/08/11 | 12:56 PM Very much looking forward to this! Are they preparing extra servers at CERN for that webcast? I suspect there will be lots of folks wanting to watch it live, and that would be a pity if it had technical glitches due to overcrowding... Anonymous (not verified) | 12/08/11 | 08:18 AM The slides are normally made public at the same time or a bit earlier, so this could cut down on the traffic... If even because great expectations will be deflated by looking at them ;) tulpoeid (not verified) | 12/08/11 | 09:06 AM Much like radio broadcast for a webcast the amount of traffic for the sending station does not depend on the number of listeners. Wagon Lits (not verified) | 12/08/11 | 10:20 AM Some previous webcasts from CERN have struggled under heavy load, especially the first collisions event. Perhaps they have upgraded since then but demand is going to be very high and it would help if the stream was rebroadcast via other services such as ustream or livestream. PhilG (not verified) | 12/08/11 | 13:10 PM is it normal that they make such a lot of fuzz about two talks (even if they are announcing some new results)? Or should this fact itself tell us something? 
:-) Cheers, Sven Sven (not verified) | 12/08/11 | 13:48 PM Tommaso Dorigo | 12/08/11 | 17:10 PM Let's say there is a signal around 125 GeV. What kind of work is then required to show that the signal has a particular spin? The Higgs must be spin 0, but if this was some really odd process that was not in the selection code, and it had a spin different from 0, then one see a real signal that was not the Higgs boson. The analysis of this huge amount of data must presume our understanding of what can happen is almost perfectly complete. Doug Sweetser | 12/08/11 | 14:04 PM "What kind of work is then required to show that the signal has a particular spin? " A simple way is the measurement of the angular distribution of the Higgs particles. Higgs boson is a scalar, and so we have to expect a spherical angular distribution. Nick (not verified) | 12/08/11 | 14:57 PM If it decays into two photons you are almost done. There is no angular momentum of the two particles in the centre of mass frame so the spin of the original particle is the sum of the spins on the particles it decayed into. A photon has spin +1 or -1 so you already know the source was spin 0 or 2. If they can get the polarization of the two photons they would know which. PhilG (not verified) | 12/08/11 | 18:07 PM Thank you Nick and PhilG for your very helpful explanations on this score. ohwilleke (not verified) | 12/08/11 | 18:44 PM Good answer. Pi0 (0-) decays into two gamma, and in very rare cases 3 gamma. In my answer, I used partial wave decomposition ;) Nick (not verified) | 12/09/11 | 11:58 AM Tommaso Dorigo | 12/10/11 | 04:48 AM Hi Tommaso, Yes, there are just limits for three and four gamma channels. Thanks. Nick (not verified) | 12/10/11 | 10:21 AM Very nice post! Anonymous (not verified) | 12/08/11 | 15:24 PM Hi Tommaso, I think it should be terrible to know the truth and to be impeded to cry it out. But it is just a matter of days... 
Marco Frasca (not verified) | 12/08/11 | 18:57 PM Let's put sigma into some perspective. I would like to comment on the probability of a scientific discovery on Tuesday. Scientists measure their experiments by sigma. There is no scientific discovery at a sigma of 2.5 or 3.0, as will probably be the case on Tuesday. Maybe even lower. Or maybe higher, but it's thereabouts. If Newton were sitting under his apple tree and made 100 observations, and in one instance the apple didn't hit him in the head, it is a sigma 2.5. A sigma 5.0 is EVERY time that the apple falls down, and that includes doing it a million times and more. THAT is a scientific discovery. At sigma 3 the apple falls wrong one out of every 370 times you do the experiment. That is not a scientific discovery either. My prediction is that the Higgs particle will never reach close to a sigma 5. Behind the prediction there is a theory, if you are interested. Google crestroyer theory and find it or visit directly at Otto Krog (not verified) | 12/09/11 | 17:18 PM If GUT has a higher-than-expected cutoff, then the LHC will struggle to see only the "tail" of the new physics. Nick (not verified) | 12/09/11 | 18:53 PM Hi Tommaso, Nice article - but the full black curve seems to be under the dashed black curve all the way from 110-160 GeV. Maybe the dip at 130 is just a fluctuation on a background signal that has not been estimated correctly? Paul Wells Anonymous (not verified) | 12/10/11 | 19:19 PM Hi Paul, I don't think that's evidence of systematic underestimation of backgrounds. The correlation length of the curve, given the searches combined in the result shown and their relative strength, is large for masses above 140 GeV, where the H->WW searches totally dominate the result. And also for lower masses it is sizable. In other words, if you have an excess of data over backgrounds at 150 GeV, it is going to affect a wide region, maybe 30 GeV wide.
Tommaso Dorigo | 12/11/11 | 15:44 PM Tommaso, do the preferred masses at ATLAS and CMS differ by more than 2%? RRyals (not verified) | 12/12/11 | 19:08 PM Tommaso Dorigo | 12/12/11 | 19:57 PM It's already December 13th in Australia and has been for 13 hours now; I guess that we just have to wait even longer! My latest forum article 'Australian Researchers Discover Potential Blue Green Algae Cause & Treatment of Motor Neuron Disease (MND)&(ALS), Parkinson's and Alzheimer's' can be found at http:// Helen Barratt | 12/12/11 | 21:04 PM 55 minutes to go Wagon Lits (not verified) | 12/13/11 | 07:05 AM
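A footnote on the "sigma" exchange above: the standard two-sided conversion between a significance of n sigma and a Gaussian tail probability is p = erfc(n/√2), which is where the "1 in 370" figure for 3 sigma comes from. A quick stdlib-Python check (the function name is mine):

```python
import math

def sigma_to_p(n_sigma):
    """Two-sided Gaussian tail probability for an n-sigma deviation."""
    return math.erfc(n_sigma / math.sqrt(2))

for n in (2.5, 3.0, 5.0):
    p = sigma_to_p(n)
    print(f"{n} sigma -> p = {p:.3g} (about 1 in {1 / p:,.0f})")
```

At 3 sigma this gives roughly 1 chance in 370 of a fluctuation at least that large; at 5 sigma, about 1 in 1.7 million.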
{"url":"http://www.science20.com/quantum_diaries_survivor/waiting_december_13th-85298","timestamp":"2014-04-18T13:06:54Z","content_type":null,"content_length":"68056","record_id":"<urn:uuid:445b7c26-5b88-4ee6-afc0-335a3780656c>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
A Completeness Theorem for Protocols with Honest Majority, - Proceedings of the 22nd Annual ACM Symposium on the Theory of Computing, ACM, 1990 "... Mihir Bellare, Silvio Micali, Rafail Ostrovsky, MIT Laboratory for Computer Science, 545 Technology Square, Cambridge, MA 02139. Abstract: Statistical zero-knowledge is a very strong privacy constraint which is not dependent on computational limitations. In this paper we show that given a comp ..." Cited by 40 (17 self) Add to MetaCart Mihir Bellare, Silvio Micali, Rafail Ostrovsky, MIT Laboratory for Computer Science, 545 Technology Square, Cambridge, MA 02139. Abstract: Statistical zero-knowledge is a very strong privacy constraint which is not dependent on computational limitations. In this paper we show that given a complexity assumption a much weaker condition suffices to attain statistical zero-knowledge. As a result we are able to simplify statistical zero-knowledge and to better characterize, on many counts, the class of languages that possess statistical zero-knowledge proofs. 1 Introduction An interactive proof involves two parties, a prover and a verifier, who talk back and forth. The prover, who is computationally unbounded, tries to convince the probabilistic polynomial time verifier that a given theorem is true. A zero-knowledge proof is an interactive proof with an additional privacy constraint: the verifier does not learn why the theorem is true [11]. That is, whatever the polynomial-time verif... - IN PROC. 2ND ISRAEL SYMP. ON THEORY OF COMPUTING AND SYSTEMS (ISTCS93), IEEE COMPUTER, 1993 "... It was known that if one-way functions exist, then there are zero-knowledge proofs for every language in PSPACE. We prove that unless very weak one-way functions exist, Zero-Knowledge proofs can be given only for languages in BPP. For average-case definitions of BPP we prove an analogous result und ..."
Cited by 37 (10 self) Add to MetaCart It was known that if one-way functions exist, then there are zero-knowledge proofs for every language in PSPACE. We prove that unless very weak one-way functions exist, Zero-Knowledge proofs can be given only for languages in BPP. For average-case definitions of BPP we prove an analogous result under the assumption that uniform one-way functions do not exist. Thus, very loosely speaking, zero-knowledge is either useless (exists only for "easy" languages), or universal (exists for every provable language). - In 30th Annual Symposium on Foundations of Computer Science, 1989 "... Joe Kilian, Silvio Micali, Rafail Ostrovsky. Abstract: We consider several resources relating to zero-knowledge protocols: the number of envelopes used in the protocol, the number of oblivious transfer protocols executed during the protocol, and the total amount of communication required by ..." Cited by 27 (3 self) Add to MetaCart Joe Kilian, Silvio Micali, Rafail Ostrovsky. Abstract: We consider several resources relating to zero-knowledge protocols: the number of envelopes used in the protocol, the number of oblivious transfer protocols executed during the protocol, and the total amount of communication required by the protocol. We show that after a pre-processing stage consisting of O(k) executions of Oblivious Transfer, any polynomial number of NP-theorems of any poly-size can be proved non-interactively and in zero-knowledge, based on the existence of any one-way function, so that the probability of accepting a false theorem is less than 1/2^k. 1 Minimizing Envelopes 1.1 Envelopes as a resource. [GMR] puts forward the somewhat paradoxical notion of a zero-knowledge proof, and exemplifies it for a few special classes of assertions. The introduction of ideal commitment mechanisms, known as envelopes, allows us to achieve greater generality.
Proofs of any NP statements can be accomplished in - IN PROCEEDINGS OF THE 6TH ANNUAL STRUCTURE IN COMPLEXITY THEORY CONFERENCE, 1991 "... In this paper, we study connections among one-way functions, hard-on-the-average problems, and statistical zero-knowledge proofs. In particular, we show how these three notions are related and how the third notion can be better characterized, assuming the first one. ..." Cited by 27 (7 self) Add to MetaCart In this paper, we study connections among one-way functions, hard-on-the-average problems, and statistical zero-knowledge proofs. In particular, we show how these three notions are related and how the third notion can be better characterized, assuming the first one.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2691965","timestamp":"2014-04-17T06:53:00Z","content_type":null,"content_length":"21365","record_id":"<urn:uuid:1d4e8723-6eca-40e0-8b03-b7a691405423>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
Bedford Park Algebra 2 Tutor Find a Bedford Park Algebra 2 Tutor ...I won the Botany award for my genetic research on plants as an undergraduate, and I have done extensive research in Computational Biology for my Ph.D. dissertation. I was a teaching assistant for both undergraduate and graduate students for a variety of Biology classes. I am fluent in a range of Science and History disciplines. 41 Subjects: including algebra 2, chemistry, English, writing ...I have also tutored Geometry and Calculus students. I have a degree in Mathematics from Augustana College. I am currently pursuing my Teaching Certification from North Central College. 7 Subjects: including algebra 2, geometry, algebra 1, trigonometry ...I have a strong educational background in each aspect of the TEAS examination, including grades of A at the college level in Chemistry, Biology, Physics, Anatomy & Physiology and a wide variety of Mathematics. I also have considerable experience tutoring for other similar nursing examinations, including the NCLEX and HESI. I'm confident that I can help anyone seeking to excel on the TEAS. 49 Subjects: including algebra 2, English, reading, writing ...My best students are those that desire to learn, and I seek to cultivate that attitude of growth and learning through a zest and enthusiasm for learning. Eventually, I began teaching ACT Reading/English, began to teach math and reading to all ages, and eventually became a sought-after subject tutor. Later, I would become Exam Prep Coordinator and Managing Director of the Learning Center. 26 Subjects: including algebra 2, chemistry, English, reading As a double degree graduate of the University of Rochester and the Eastman School of Music with decades of teaching experience ranging from secondary through the collegiate levels in math and music, tutoring need not be a measure of last resort to merely achieve a passing grade but, even if it is, I... 37 Subjects: including algebra 2, English, geometry, biology
{"url":"http://www.purplemath.com/bedford_park_algebra_2_tutors.php","timestamp":"2014-04-21T02:38:52Z","content_type":null,"content_length":"24289","record_id":"<urn:uuid:d3e9f588-5172-4d35-8a5c-5438928b81c3>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
Hierarchical Repair and Quantum State Reduction

Consider a system consisting of two components with constant failure rates λ1 and λ2, and suppose the first component is repaired at the constant rate μ1, and if both components are failed, the system is repaired (or replaced) immediately. In addition, the second component is periodically checked and (if necessary) repaired every T hours. The simple Markov model for this system is shown below, excluding any representation of the periodic inspection/repairs. The fully-failed state is omitted, because that state contains no probability (since it is repaired immediately). Letting State 0 denote the full-up state, State 1 the state with the first component failed, and State 2 the state with the second component failed, the time-dependent state equations for this system are

dP0/dt = -(λ1 + λ2) P0 + (μ1 + λ2) P1 + λ1 P2
dP1/dt = λ1 P0 - (μ1 + λ2) P1
dP2/dt = λ2 P0 - λ1 P2

It's convenient to express this set of equations in matrix notation as

dP/dt = M P

Given any initial conditions P(t) at time t, the state probabilities at some later time t+T are given by

P(t+T) = e^(MT) P(t)

Now, at the initial time t = 0 we have P(0) = [1 0 0]^T, and at the end of the first T-hour periodic inspection interval this formula gives the new state probabilities P(T). At this point we wish to inspect and repair any systems found in State 2 back to State 0. This can be accomplished by multiplying P(T) by the re-distribution matrix A defined as

A = | 1 0 1 |
    | 0 1 0 |
    | 0 0 0 |

After the first interval T and carrying out the first inspection/repair, the state probabilities are P(T) = (A e^(MT)) P(0), and after the second inspection/repair the state probabilities are P(2T) = (A e^(MT))^2 P(0), and so on. Therefore, after the nth inspection/repair, the state probabilities are

P(nT) = (A e^(MT))^n P(0)

The average probabilities for the interval beginning at time nT are given by

P_avg = (1/T) M^(-1) (e^(MT) - I) P(nT)

where I is the identity matrix. Notice that the numerator on the right hand side is divisible by MT, so it isn't necessary to invert the M matrix.
The system failure rate is λ2 P1 + λ1 P2, so if we define the row vector L = [ 0 λ2 λ1 ], the average system failure rate during the interval beginning at t = nT can be expressed as the dot product of L with the average state vector for that interval. This example involved only one periodic inspection/repair interval, but the solution can be generalized to any number of distinct repair intervals. For example, suppose we have a system for which some of the components are inspected and (if failed) repaired every t hours, whereas the remainder of the components are inspected/repaired only once every T = nt hours. In other words, the longer interval is n times the smaller interval. For this system we have two "re-distribution matrices", which we may call A and B. The first represents the repair of the t-hour components, and the second represents the repair of all components, because everything is checked and repaired at the longer interval T. It can be shown (see below) that the average system failure rate is

W_avg = (1/(nt)) L M^(-1) (e^(Mt) - I) [ I + (A e^(Mt)) + (A e^(Mt))^2 + ... + (A e^(Mt))^(n-1) ] P(0)

We note that the B re-distribution matrix doesn't appear in this expression, because B represents complete restoration of the system to the full-up state, so, beginning from the full-up state, we need only evaluate the system reliability over n of the t periods using the A matrix. At this point the B re-distribution would restore everything to the full-up state, so all subsequent iterations would be identical to the first. To illustrate, consider again the sample system discussed above, but suppose the first component did not contain continuous health monitoring (so μ1 = 0), and instead it was maintained by a periodic inspection/repair every t = 200 hours. As before, the second component is inspected and repaired periodically every T = 1000 hours.
Thus we have n = 5, and the coefficient and re-distribution matrices are

M = | -(λ1+λ2)   λ2    λ1 |     A = | 1 1 0 |     B = | 1 1 1 |
    |   λ1      -λ2    0  |         | 0 0 0 |         | 0 0 0 |
    |   λ2       0    -λ1 |         | 0 0 1 |         | 0 0 0 |

Since the B matrix represents complete restoration of the system, we can use the preceding equation to evaluate the average system failure rate of one complete T-hour interval (which consists of five t-hour intervals). There is an interesting analogy between these reliability calculations (with periodic repairs) and quantum mechanics. In both cases the system's state vector evolves according to linear equations, but this smooth evolution is interrupted by discrete inspections (and repairs) that have the effect of resolving some components of the state vector. To explain this analogy in detail, we will describe in more generality the method for evaluating the reliability of a system subjected to multiple periodic inspection/repair intervals. Consider the Markov model for a hypothetical system illustrated in the figure below. Each node represents one of the observable states of the system, and in general we represent the state of the system at any time t by the state vector P(t), which we define as a column vector consisting of the probabilities of the system being in each of the states. Also, for each pair of states we have two rates, representing the rates at which the system would "decay" from one state to the other. (In general, the rates in the two directions may be different.) The time-dependent system equations are of the form

dP/dt = M P

where M is a constant matrix consisting of all the transition rates. Given the state of the system at any time t0, the state at any other time t is given by

P(t) = e^(M(t - t0)) P(t0)     (1)

assuming the system is not disturbed from the outside during that interval of time. However, it is typical for systems to be periodically inspected (and repaired if necessary) at specified intervals of time.
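The single-interval calculation worked out in the first example above, P(T) = A e^(MT) P(0), can be sketched numerically. The code below is only an illustration: M encodes the transition rates described in the text, the numerical rates are arbitrary values of my choosing, and the matrix exponential is a plain Taylor series with scaling and squaring rather than anything optimized.

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def expm(M, t, terms=30, squarings=12):
    """e^{Mt} via a Taylor series with scaling and squaring."""
    n = len(M)
    s = t / 2.0 ** squarings
    E = [[float(i == j) for j in range(n)] for i in range(n)]   # identity
    term = [row[:] for row in E]
    Ms = [[m * s for m in row] for row in M]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, Ms)]
        E = [[E[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    for _ in range(squarings):
        E = mat_mul(E, E)
    return E

lam1, lam2, mu1 = 1e-4, 5e-4, 1e-2   # assumed example failure/repair rates
T = 1000.0                            # inspection interval, hours

# States: 0 = full up, 1 = component 1 failed, 2 = component 2 failed.
# Full failure is repaired immediately, i.e. returns to state 0, so each
# column of M sums to zero and total probability is conserved.
M = [[-(lam1 + lam2), mu1 + lam2, lam1],
     [lam1, -(mu1 + lam2), 0.0],
     [lam2, 0.0, -lam1]]

# Periodic repair: probability found in state 2 is moved back to state 0.
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0]]

P0 = [1.0, 0.0, 0.0]
PT = mat_vec(A, mat_vec(expm(M, T), P0))
print(PT)   # state probabilities just after the first inspection/repair
```

After the repair step the state-2 entry is exactly zero and the probabilities still sum to one, which is a useful sanity check on the redistribution step.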
In fact, there may be a hierarchy of maintenance actions, such as cursory inspections every t hours, and more complete inspections every nt hours, and totally complete inspections every mnt hours, as indicated schematically below. The letters A, B, and C denote the three levels of inspections. Since the combination of the A, B, and C checks is complete, the system is placed in an identical “full-up” state once every mnt hours. In the diagram above we have taken n = 5 and m = 3, meaning that a B check is performed once every five A checks, and a C check is performed once every three B checks. Each of these checks is analogous to a measurement or observation in quantum mechanics, and they have the effect of reducing the state vector, placing the system (or a subset of the system components) into a definite state, just as in quantum mechanics the state vector of a system becomes one of the eigenvectors after a measurement has been performed. Suppose an A check consists of inspecting the system to determine if it is in one of the states 5, 8, or 9, and if it is found to be in one of those states, repairs are made so the system is placed in state 1, 4, or 7 respectively. (In other words, if the system is found in state 5, it is moved to state 1, and so on.) This operation can be represented by the matrix Multiplying the state vector by this matrix leaves most of the state probabilities unchanged, but any probability of states 5, 8, and 9 is moved to the states 1, 4, and 7 respectively. Thus we know the probabilities of the states 5, 8, and 9 following this operation are precisely zero. Similarly the operations for the B and C checks can be represented by matrices. It’s worth noting that, although the coefficient matrix of the continuous time dependent solution is invertible (so they can be exercised forward or backward in time), these inspection matrices are generally not invertible. Suppose state 10 is the “complete failure” state. 
The rate of entering that state at any time is given by the probabilities of states 5, 8, and 9, each multiplied by their respective rates for transitioning to state 10. Thus the system failure rate at any time t can be expressed as the dot product L∙P(t), where L is the row vector whose only non-zero entries are the transition rates from states 5, 8, and 9 into state 10. Over any given interval of time, the mean system failure rate can therefore be determined by integrating the instantaneous rate as follows

W = (1/t) L M^(-1) (e^(Mt) - I) P(start of interval)

where I is the identity matrix. If we let P_j denote the initial state vector for the jth time interval (between consecutive A checks), and note that each of these intervals is of length t, we can say the average system failure rate for the jth interval is

W_j = (1/t) L M^(-1) (e^(Mt) - I) P_j     (2)

Also, given the probability at the start of one interval, the probability at the end of that interval (prior to any inspections and repairs) is given by equation (1) as

P = e^(Mt) P_j

We're now in a position to write down the average system failure rate for the entire sequence of hierarchical inspections and repairs. According to equation (2), it is just proportional to the average of the initial state vectors for the 15 intervals between C checks. These initial state vectors are given by multiplying the initial ("full up") state vector P0 by the cumulative transition matrices as listed below. Letting I denote the identity matrix, we can factor the sum of these 15 initial state vectors as a product of geometric series. Summing each of the geometric series, multiplying by the rate factor from equation (2), and dividing by mn = 15 (to give the average), we get the overall average failure rate. By the way, there is no ambiguity in the order of the divisions when expressing the geometric series in closed form, because all the implicit multiplications are commutative.
If there was another level in the inspection hierarchy, such as a D check occurring once every k checks at level C (and assuming D is complete and C is not), then the sequence of parenthetical terms in the above expression would be multiplied on the right by a corresponding factor. Similarly, any number of hierarchies can be modeled by applying suitable factors in this way. The analogy with quantum mechanics is striking, even to the point of hierarchical observations, and the issues this raises with regard to the measurement problem. At what point can we say a measurement has taken place? If it has taken place at a low level, but not yet at a high level, can we (at the higher level) be sure the state vector has actually been reduced by the lower-level observation? It's also interesting that the inspection operators are not invertible, so they represent irreversible processes, just as a measurement in quantum mechanics is irreversible. Of course, when dealing with reliability calculations we are restricted to real-valued probabilities rather than complex amplitudes, so there is no interference or quantum entanglement, but aside from this difference, the computational structure of Markov models in reliability is remarkably similar to the structure of quantum mechanics, especially in Heisenberg's matrix formulation.
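As a small illustration of the redistribution operators used throughout this article, the sketch below builds the A-check matrix from the ten-state example (states 5, 8, 9 repaired to states 1, 4, 7; indices are 0-based here, and the helper names are mine) and checks two properties noted above: each column sums to 1, so probability is conserved, and the operator is idempotent, hence singular and irreversible.

```python
def repair_matrix(n, repairs):
    """Column-stochastic redistribution matrix: the probability of state j
    is moved to state repairs[j], or left in place if j is not repaired."""
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        A[repairs.get(j, j)][j] = 1.0
    return A

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A check from the text: states 5, 8, 9 -> states 1, 4, 7
# (0-based: 4 -> 0, 7 -> 3, 8 -> 6).
A = repair_matrix(10, {4: 0, 7: 3, 8: 6})

# Probability is conserved: every column sums to 1.
assert all(abs(sum(A[i][j] for i in range(10)) - 1.0) < 1e-12
           for j in range(10))
# Repairing twice is the same as repairing once, so A is idempotent;
# an idempotent matrix other than the identity is singular (irreversible).
assert mat_mul(A, A) == A
```

The same constructor works for the B and C checks by changing the `repairs` map, and a complete check is just the map sending every state to the full-up state.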
{"url":"http://mathpages.com/home/kmath594/kmath594.htm","timestamp":"2014-04-16T21:59:45Z","content_type":null,"content_length":"30089","record_id":"<urn:uuid:ba81e388-ed06-467b-8ced-ec63f4087559>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
Hermite polynomials – generation

Required math: calculus

Required physics: Schrödinger equation

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 2.16.

In the solution of the Schrödinger equation for the harmonic oscillator, we found that the wave function can be expressed as a power series:

$\displaystyle \psi(y)=e^{-y^{2}/2}\sum_{j=0}^{\infty}a_{j}y^{j}$

where ${y}$ was introduced as a shorthand variable:

$\displaystyle y\equiv\sqrt{\frac{m\omega}{\hbar}}x$

and the coefficients ${a_{j}}$ satisfy the recursion relation

$\displaystyle a_{j+2}=\frac{2j+1-\epsilon}{(j+1)(j+2)}a_{j} \ \ \ \ \ (1)$

where ${\epsilon}$ is another shorthand variable for the energy:

$\displaystyle \epsilon\equiv\frac{2E}{\hbar\omega}$

By requiring the recursion relation to terminate at various values of ${j}$ we can generate the polynomials for the various energy states, which turn out to be the Hermite polynomials. For example, if we take the highest value of ${j}$ to be 5, then we must have

$\displaystyle a_{7}=\frac{11-\epsilon}{42}a_{5}=0$

so that ${\epsilon=11}$ and ${E=\frac{11}{2}\hbar\omega}$.

To get the coefficients, we can start with ${a_{0}=0}$ (since all even terms must be zero if we want ${a_{5}\neq0}$) and ${a_{1}=1}$. Then we get

$\displaystyle a_{j+2}=\frac{2j-10}{\left(j+1\right)\left(j+2\right)}a_{j}$

so ${a_{3}=-\frac{4}{3}}$ and ${a_{5}=\left(-\frac{1}{5}\right)\left(-\frac{4}{3}\right)=\frac{4}{15}}$. The Hermite polynomial is

$\displaystyle H_{5}(x)=A_{5}\left[\frac{4}{15}x^{5}-\frac{4}{3}x^{3}+x\right]$

where ${A_{5}}$ is a constant that is set by convention to make the coefficients satisfy some specified rule.
If we require the coefficient of the highest power ${x^{n}}$ to be ${2^{n}}$, we can multiply this polynomial by 120 to get

$\displaystyle H_{5}(x)=32x^{5}-160x^{3}+120x$

For ${H_{6}}$, all the odd terms are zero, and we require ${a_{8}=0}$, so we get

$\displaystyle \epsilon=13,\qquad E=\frac{13}{2}\hbar\omega,\qquad a_{j+2}=\frac{2j-12}{\left(j+1\right)\left(j+2\right)}a_{j}$

Taking ${a_{0}=1}$ and ${a_{1}=0}$, we get ${a_{2}=-6}$, ${a_{4}=4}$, ${a_{6}=-\frac{8}{15}}$. Requiring the coefficient of ${x^{6}}$ to be ${2^{6}=64}$, we get

$\displaystyle H_{6}(x)=64x^{6}-480x^{4}+720x^{2}-120$

By growescience, on Saturday, 21 July 2012 at 15:26, under Physics, Quantum mechanics. Tags: harmonic oscillator, series solution.
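The recursion relation above translates directly into a short routine. The following sketch (the function name is mine; exact rational arithmetic via the standard library) generates the coefficients of H_n with the leading-coefficient-2^n convention used above:

```python
from fractions import Fraction

def hermite(n):
    """Coefficients [a_0, ..., a_n] of H_n(x), generated by the series
    recursion a_{j+2} = (2j - 2n) / ((j+1)(j+2)) * a_j (i.e. epsilon = 2n+1),
    then scaled so the leading coefficient is 2**n."""
    a = [Fraction(0)] * (n + 1)
    a[n % 2] = Fraction(1)            # seed the even or odd series
    for j in range(n % 2, n - 1, 2):
        a[j + 2] = Fraction(2 * j - 2 * n, (j + 1) * (j + 2)) * a[j]
    scale = Fraction(2 ** n) / a[n]   # enforce leading coefficient 2**n
    return [int(c * scale) for c in a]   # all entries come out integral

print(hermite(5))   # [0, 120, 0, -160, 0, 32]
print(hermite(6))   # [-120, 0, 720, 0, -480, 0, 64]
```

The two printed lists are exactly the coefficient sets of the H_5 and H_6 derived above (constant term first).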
{"url":"http://physicspages.com/2012/07/21/hermite-polynomials-generation/","timestamp":"2014-04-18T18:14:43Z","content_type":null,"content_length":"87062","record_id":"<urn:uuid:a458f031-2a4b-46d8-89b5-93e22441384e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
Velocity, Speed, and Time question Ok, so what i think you're saying is... V1+4.5= 1/T1-10 v1= (1/t1-10)-4.5 so.. (1/t1-10-4.5) = 1/t? then solve that for T? The boldface part is not right. What you have in the first line is [itex]V_{1} = \frac{1}{T_{1}-10} - 4.5[/itex] and the second line doesn't agree with this. It should read [itex]\frac{1}{T_{1}} = \frac{1}{T_{1}-10} - 4.5[/itex]
{"url":"http://www.physicsforums.com/showthread.php?p=4241911","timestamp":"2014-04-17T09:48:37Z","content_type":null,"content_length":"46399","record_id":"<urn:uuid:5a21fbee-bb5b-42f9-8f0c-dae7f895681d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
6. Matrices and Linear Equations by M. Bourne We wish to solve the system of simultaneous linear equations using matrices: a[1]x + b[1]y = c[1] a[2]x + b[2]y = c[2] If we let `A=((a_1,b_1),(a_2,b_2))`, `\ X=((x),(y))\ ` and `\ C=((c_1),(c_2))` then `AX=C`. (We first saw this in Multiplication of Matrices). If we now multiply each side of AX = C on the left by A^-1, we have: A^-1AX = A^-1C. However, we know that A^-1A = I, the Identity matrix. So we obtain IX = A^-1C. But IX = X, so the solution to the system of equations is given by: X = A^-1C See the box at the top of Inverse of a Matrix for more explanation about why this works. Note: We cannot reverse the order of multiplication and use CA^-1 because matrix multiplication is not commutative. Example - solving a system using the Inverse Matrix Solve the system using matrices. −x + 5y = 4 2x + 5y = −2 Always check your solutions! Solving 3×3 Systems of Equations We can extend the above method to systems of any size. We cannot use the same method for finding inverses of matrices bigger than 2×2. We will use a Computer Algebra System to find inverses larger than 2×2. Example - 3×3 System of Equations Solve the system using matrix methods. `{: (x+2y-z=6),(3x+5y-z=2),(-2x-y-2z=4) :}` Did I mention? It's a good idea to always check your solutions. Example - Electronics application of 3×3 System of Equations Find the electric currents shown by solving the matrix equation (obtained using Kirchhoff's Law) arising from this circuit: Exercise 1 The following equations are found in a particular electrical circuit. Find the currents using matrix methods. `{: (I_A+I_B+I_C=0),(2I_A-5I_B=6),(5I_B-I_C=-3) :}` Exercise 2 Recall this problem from before? If we know the simultaneous equations involved, we will be able to solve the system using inverse matrices on a computer. 
The circuit equations, using Kirchhoff's Law: -26 = 72I[1] - 17I[3] - 35I[4] 34 = 122I[2] - 35I[3] - 87I[7] -4 = 233I[7] - 87I[2] - 34I[3] - 72I[6] -13 = 149I[3] - 17I[1] - 35I[2] - 28I[5] - 35I[6] - 34I[7] -27 = 105I[5] - 28I[3] - 43I[4] - 34I[6] 24 = 141I[6] - 35I[3] - 34I[5] - 72I[7] 5 = 105I[4] - 35I[1] - 43I[5] What are the individual currents, I[1] to I[7]? Exercise 3 We want 10 L of gasoline containing 2% additive. We have drums of the following: Gasoline without additive Gasoline with 5% additive Gasoline with 6% additive We need to use 4 times as much pure gasoline as 5% additive gasoline. How much of each is needed? Always check your solutions! Exercise 4 This statics problem was presented earlier in Section 3: Matrices. From the diagram, we obtain the following equations (these equations come from statics theory): Vertical forces: F[1] sin 69.3° − F[2] sin 71.1° − F[3] sin 56.6° + 926 = 0 Horizontal forces: F[1] cos 69.3° − F[2] cos 71.1° + F[3] cos 56.6° = 0 7.80 F[1] sin 69.3° − 1.50 F[2] sin 71.1° − 5.20 F[3] sin 56.6° = 0 Using matrices, find the forces F[1], F[2] and F[3].
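The X = A^-1C method for the 3×3 example above (x + 2y − z = 6, 3x + 5y − z = 2, −2x − y − 2z = 4) can be sketched without a CAS. The Gauss-Jordan inversion below is a minimal illustration of my own, not the site's solver:

```python
def inverse(A):
    """Gauss-Jordan inverse of a small nonsingular square matrix,
    with partial pivoting for numerical stability."""
    n = len(A)
    # Augment A with the identity matrix.
    aug = [list(map(float, row)) + [float(i == j) for j in range(n)]
           for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]   # right half is A^{-1}

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 2, -1],
     [3, 5, -1],
     [-2, -1, -2]]
C = [6, 2, 4]
X = mat_vec(inverse(A), C)           # X = A^{-1} C
print([round(x, 6) for x in X])      # [22.0, -16.0, -16.0]
```

Substituting (x, y, z) = (22, −16, −16) back into all three equations confirms the solution, which is always worth doing.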
{"url":"http://www.intmath.com/matrices-determinants/6-matrices-linear-equations.php","timestamp":"2014-04-21T12:08:24Z","content_type":null,"content_length":"28991","record_id":"<urn:uuid:d2bd7201-8785-4ae6-93bb-129fa0234555>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with Angular Momentum March 15th 2012, 02:40 PM Help with Angular Momentum I don't really know if this problem fits in this section, but it's the best section I could find for it. Given the state vectors x=-6574330.3, y=-418132.2, z=1392264.8, V[x]=-889, V[y]=7490.39 and V[z]=-1941.32 of a spacecraft that is 22942.2kg, can you calculate the angular momentum of the spacecraft? I have already attempted this problem and got an angular momentum magnitude of 1.20315389e+15 m^2kg/s but I don't know if it is right. By the way, the position vectors of the spacecraft are in meters and the velocity vectors are in m/s. Also, it's the angular momentum that is required to finish calculating the eccentricity and not the angular momentum vector components, right? March 15th 2012, 04:16 PM Re: Help with Angular Momentum I get $|\vec{L}| = 1.68 \times 10^{15}$
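A quick arithmetic cross-check, assuming the intended formula is the orbital angular momentum L = m(r × v) with the state vector quoted above (position in meters, velocity in m/s):

```python
import math

m = 22942.2                                   # kg
r = (-6574330.3, -418132.2, 1392264.8)        # m
v = (-889.0, 7490.39, -1941.32)               # m/s

# L = m (r x v), component by component
L = (m * (r[1] * v[2] - r[2] * v[1]),
     m * (r[2] * v[0] - r[0] * v[2]),
     m * (r[0] * v[1] - r[1] * v[0]))
L_mag = math.sqrt(sum(c * c for c in L))
print(f"|L| = {L_mag:.6e} kg m^2/s")   # close to 1.203e15 with these inputs
```

With these numbers the magnitude lands near the 1.203 × 10^15 figure in the original post, so the two answers in the thread differ by more than rounding.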
{"url":"http://mathhelpforum.com/advanced-applied-math/196017-help-angular-momentum-print.html","timestamp":"2014-04-20T15:07:24Z","content_type":null,"content_length":"5384","record_id":"<urn:uuid:b777be98-07df-4c15-b60e-9539a1f78c4d>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
Shortest non-trivial cycles in directed surface graphs, 2012 "... Let G be a directed graph with n vertices and non-negative weights in its directed edges, embedded on a surface of genus g, and let f be an arbitrary face of G. We describe an algorithm to preprocess the graph in O(gn log n) time, so that the shortest-path distance from any vertex on the boundary of ..." Cited by 7 (5 self) Add to MetaCart Let G be a directed graph with n vertices and non-negative weights in its directed edges, embedded on a surface of genus g, and let f be an arbitrary face of G. We describe an algorithm to preprocess the graph in O(gn log n) time, so that the shortest-path distance from any vertex on the boundary of f to any other vertex in G can be retrieved in O(log n) time. Our result directly generalizes the O(n log n)-time algorithm of Klein [Multiple-source shortest paths in planar graphs. In Proc. 16th Ann. ACM-SIAM Symp. Discrete Algorithms, 2005] for multiple-source shortest paths in planar graphs. Intuitively, our preprocessing algorithm maintains a shortest-path tree as its source point moves continuously around the boundary of f. As an application of our algorithm, we describe algorithms to compute a shortest non-contractible or non-separating cycle in embedded, undirected graphs in O(g^2 n log n) time. "... We give a deterministic algorithm to find the minimum cut in a surface-embedded graph in near-linear time. Given an undirected graph embedded on an orientable surface of genus g, our algorithm computes the minimum cut in g^O(g) n log log n time, matching the running time of the fastest algorithm kno ..." Cited by 2 (2 self) Add to MetaCart We give a deterministic algorithm to find the minimum cut in a surface-embedded graph in near-linear time.
Given an undirected graph embedded on an orientable surface of genus g, our algorithm computes the minimum cut in g^O(g) n log log n time, matching the running time of the fastest algorithm known for planar graphs, due to Łącki and Sankowski, for any constant g. Indeed, our algorithm calls Łącki and Sankowski's recent O(n log log n) time planar algorithm as a subroutine. Previously, the best time bounds known for this problem followed from two algorithms for general sparse graphs: a randomized algorithm of Karger that runs in O(n log³ n) time and succeeds with high probability, and a deterministic algorithm of Nagamochi and Ibaraki that runs in O(n² log n) time. We can also achieve a deterministic g^O(g) n² log log n time bound by repeatedly applying the best known algorithm for minimum (s, t)-cuts in surface graphs. The bulk of our work focuses on the case where the dual of the minimum cut splits the underlying surface into multiple components with positive genus. 1 - CoRR "... Let G be a directed graph embedded on a surface of genus g with b boundary cycles. We describe an algorithm to compute the shortest non-contractible cycle in G in O((g³ + gb) n log n) time. Our algorithm improves the previous best known time bound of (g + b)^O(g+b) n log n for all positive g and b. ..." Cited by 2 (0 self) Add to MetaCart Let G be a directed graph embedded on a surface of genus g with b boundary cycles. We describe an algorithm to compute the shortest non-contractible cycle in G in O((g³ + gb) n log n) time. Our algorithm improves the previous best known time bound of (g + b)^O(g+b) n log n for all positive g and b. We also describe an algorithm to compute the shortest non-null-homologous cycle in G in O((g² + gb) n log n) time, generalizing a known algorithm to compute the shortest non-separating cycle. "... Let G be a graph embedded on a surface of genus g with b boundary cycles.
We describe algorithms to compute multiple types of non-trivial cycles in G, using different techniques depending on whether or not G is an undirected graph. If G is undirected, then we give an algorithm to compute a shortest ..." Add to MetaCart Let G be a graph embedded on a surface of genus g with b boundary cycles. We describe algorithms to compute multiple types of non-trivial cycles in G, using different techniques depending on whether or not G is an undirected graph. If G is undirected, then we give an algorithm to compute a shortest non-separating cycle in G in 2^O(g) n log log n time. Similar algorithms are given to compute a shortest non-contractible or non-null-homologous cycle in 2^O(g+b) n log log n time. Our algorithms for undirected G combine an algorithm of Kutz with known techniques for efficiently enumerating homotopy classes of curves that may be shortest non-trivial cycles. Our main technical contributions in this work arise from assuming G is a directed graph with possibly asymmetric edge weights. For this case, we give an algorithm to compute a shortest non-contractible cycle in G in O((g³ + gb) n log n) time. In order to achieve this time bound, we use a restriction of the infinite cyclic cover that may be useful in other contexts. We also describe an algorithm to compute a shortest non-null-homologous cycle in G in O((g² + gb) n log n) time, extending a known algorithm of Erickson to compute a shortest non-separating cycle. In both the undirected and directed cases, our algorithms improve the best time bounds known for many values of g and b.
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.

Topic: Re: Novice problem in Mathematica syntax: Function of a function
Replies: 0    Posted: Nov 23, 2012 3:34 AM

What I want to do is to plot Reflectivity. Reflectivity is an expression (or a function) depending on nfilm, which in turn depends on epsmodel (itself a function of 3 parameters and a variable). Therefore:

epsmodel = f(a,b,c,d,x) (* result is a complex number *)
nfilm = Sqrt[epsmodel]
reflectivity = ((nfilm - 1)/(nfilm + 1))^2

I just want to plot reflectivity and see the variation in it when (a,b,c,x) varies (using the Manipulate[] tool). Please let me know how to do this in terms of these functions. The exact code for which I got the "variables are protected" error is shown below:

epsmodel[w_, omega0_, gama_, epsinfi_, omegaP_] :=
  epsinfi + (2 Pi*10^12 omegaP)^2/((2 Pi*10^12 omega0)^2 - (2 Pi*10^12*w)^2 + I*(2 Pi*10^12)^2*gama*w);

nfilm[w1_, omega01_, gama1_, epsinfi1_, omegaP1_] :=
  Sqrt[epsmodel[w1, omega01, gama1, omegaP1, epsinfi1]];

ReflectivityGaSb[w2_, omega02_, gama2_, epsinfi2_, omegaP2_] :=
  Abs[((nfilm[w2, omega02, gama2, omegaP2, epsinfi2] - 1)/(nfilm[w2, omega02, gama2, omegaP2, epsinfi2] + 1))^2];

Manipulate[
  Plot[ReflectivityGaSb[w20, omega020, gama20, omegaP20, epsinfi20], {w20, 0, 300}, PlotRange -> {0, 1}],
  {epsinfi20, 2, 25}, {omega020, 0, 300}, {gama20, 0, 300}, {omegaP20, 0, 300}]

Thanking you,
Prasad P Iyer
PS. Feel free to reply to ppiyer at outlook.com

Hi, Prasad,

Your code worked on my machine (Mathematica 8, Windows XP) without problems. The only thing that might be questionable, just check: the last two parameters of epsmodel in the definition of the function nfilm seem to be written in reversed order.
I think you did not intend that. Have fun, Alexei

Alexei BOULBITCH, Dr., habil.
IEE S.A., ZAE Weiergewan, 11, rue Edmond Reuter, L-5326 Contern, LUXEMBOURG
Office phone: +352-2454-2566    Office fax: +352-2454-3566    mobile phone: +49 151 52 40 66 44
e-mail: alexei.boulbitch@iee.lu
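For readers without Mathematica, the model in the post is straightforward to sanity-check numerically. Here is a small Python sketch of the same Lorentz-oscillator dielectric function and normal-incidence reflectivity (the parameter values in the comment are purely illustrative and are not taken from the original post):

```python
import cmath

def eps_model(w, omega0, gamma, eps_inf, omega_p):
    """Lorentz-oscillator dielectric function; all frequencies in THz."""
    s = 2 * cmath.pi * 1e12  # THz -> rad/s, mirroring the 2 Pi*10^12 factors above
    return eps_inf + (s * omega_p) ** 2 / (
        (s * omega0) ** 2 - (s * w) ** 2 + 1j * s ** 2 * gamma * w)

def reflectivity(w, omega0, gamma, eps_inf, omega_p):
    """Normal-incidence reflectivity R = |(n - 1)/(n + 1)|^2 with n = sqrt(eps)."""
    n = cmath.sqrt(eps_model(w, omega0, gamma, eps_inf, omega_p))
    return abs((n - 1) / (n + 1)) ** 2

# e.g. reflectivity(100, 230, 5, 14, 200) for some illustrative parameters
```

Looping over w stands in for Plot, and varying the four model parameters stands in for Manipulate; any physically sensible parameters give a reflectivity between 0 and 1.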
physics essay

What's wrong with the Detroit Lions? It is obvious to me that the laws of physics can be applied to the game of football. However, the Detroit Lions may be unaware of this fact. With their current 0-12 record, they are on a losing streak that could be placed in the record books. No team has ever lost every single football game of its whole season. What an embarrassing way to be placed in the record books. Maybe if they looked at some of the laws of physics they could win a game.

The passing game for the Lions is one area that can be examined. Quarterback Charlie Batch has one of the lowest ratings in the league. He completes a little under 60% of his passes. His average passing yardage is approximately ten yards. Using the physics of projectile motion, we can help Charlie complete more passes. Let's look at what we know:

1. Charlie is about 2 meters tall.
2. His average pass is ten yards, or about 9 meters.
3. He's throwing with a parabolic trajectory.
4. We will use the equations:
   Vx = Vi cos(angle)   (horizontal velocity)
   Vy = Vi sin(angle)   (vertical velocity)
   x = Vx * t           (horizontal distance)
   g = -9.8 m/s^2       (gravity, always)

From the equations we can say that the initial velocity could be Charlie's problem. Say Charlie always passes at an angle of 20 degrees so that it isn't easy for the other team to intercept the pass. This kind of pass usually takes under a second to get to the receiver. It means that Charlie's average pass of 9 meters needs an initial velocity of about 11 m/s to reach the receiver. If any of these things don't work out, or say Charlie changes his passing angle, the pass will probably be incomplete.
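The passing figures can be cross-checked with the level-ground range formula R = Vi^2 sin(2θ)/g, a simplification that ignores release and catch heights. A quick Python check using the essay's 9 m range and 20 degree angle:

```python
import math

G = 9.8  # m/s^2, magnitude of gravitational acceleration

def launch_speed(range_m, angle_deg):
    """Initial speed needed to cover range_m on level ground at angle_deg."""
    return math.sqrt(range_m * G / math.sin(math.radians(2 * angle_deg)))

def flight_time(speed, angle_deg):
    """Time aloft when launching and landing at the same height."""
    return 2 * speed * math.sin(math.radians(angle_deg)) / G

v = launch_speed(9, 20)  # about 11.7 m/s, close to the essay's 11 m/s figure
t = flight_time(v, 20)   # about 0.8 s of hang time for that pass
```

The 11 m/s figure holds up well; the hang time that falls out of the same numbers is well under a second.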
Kicking the football is another aspect of the game that could improve for the Lions. Two weeks ago, Jason Hanson missed 3 field goals. Kicking field goals can be examined from a physics perspective. The football follows the same parabolic trajectory as a pass. Using the same equations, we can determine the initial velocity needed to make his average 23 meter kick. He usually kicks at an average 40 degree angle, and it takes about 3 seconds for the kick to go through the uprights. This kick needs an 11.5 m/s initial velocity in order to go through. Not to mention, the kick also has to be straight as an arrow. For all the things that have to go right, maybe field goals should be worth more than just 3 points.

Collisions happen all the time in the game of football. Whoever "wins" the collision, or keeps going forward, depends on who has more momentum. Momentum can be the difference between running right through the opposing defender, or having the defender hit you back and down to the ground. Collisions are one of the most exciting parts of football. Your momentum (p) is equal to your mass (m) multiplied by your velocity (v). So, the question is: would you rather have a heavy guy who's really slow, a medium-weight guy who has medium speed, or a lightweight, super-fast guy? The fact is that it wouldn't really matter. They could all have about the same momentum, because the products of big and small numbers can come out equal. The running back of the Detroit Lions weighs about 170 kg and is traveling at 3 m/s. How fast does the huge 250 kg defensive player have to be traveling?

p of Detroit running back = 170 kg * 3 m/s = 510 kg*m/s
p of the defender has to equal 510 kg*m/s
510 kg*m/s = 250 kg * X m/s
X = 510 kg*m/s / 250 kg = about 2 m/s

So, you see that no matter how different the sizes, two objects can always have the same momentum. It is always possible for a little running back to have enough momentum to break the tackle of the big, strong linebacker.
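The momentum comparison is just p = m*v, so it is easy to verify; the masses below are the essay's figures (with the defender's "250 Newton" read as 250 kg, which is what the arithmetic assumes):

```python
def momentum(mass_kg, speed_ms):
    """Linear momentum p = m * v, in kg*m/s."""
    return mass_kg * speed_ms

p_back = momentum(170, 3)  # the running back carries 510 kg*m/s
v_needed = p_back / 250    # speed the 250 kg defender needs to match it
```

v_needed comes out to 2.04 m/s, the essay's "about 2 m/s".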
The Lions are in big trouble if they can't get it together soon. The season is coming close to an end and they are still winless. If only they could realize how many aspects of the game can be understood with a knowledge of physics. Perhaps the coach should be studying a physics book instead of the video tapes of the last game. Using projectile motion and momentum could lead to the first Detroit Lions win in 12 games. They should take a small portion of the money that they make and invest it in a physics course or two!
Mathematica 7 compares to other languages Jon Harrop jon at ffconsultancy.com Sun Dec 7 18:39:30 CET 2008 Xah Lee wrote: > I didn't realize until after a hour, that if Jon simply give numerical > arguments to Main and Create, the result timing by a factor of 0.3 of > original. What a incredible sloppiness! and he intended this to show > Mathematica speed with this code? > The Main[] function calls Create. The create has 3 parameters: level, > c, and r. The level is a integer for the recursive level of > raytracing . The c is a vector for sphere center i presume. The r is > radius of the sphere. His input has c and r as integers, and this in > Mathematica means computation with exact arithmetics (and automatic > kicks into infinite precision if necessary). Changing c and r to float > immediately reduced the timing to 0.3 of original. That is only true if you solve a completely different and vastly simpler problem, which I see you have (see below). > The RaySphere function contain codes that does symbolic computation by > calling Im, which is the imaginary part of a complex number!! and if > so, it returns the symbol Infinity! The possible result of Infinity is > significant because it is used in Intersect to do a numerical > comparison in a If statement. So, here in these deep loops, > Mathematica's symbolic computation is used for numerical purposes! Infinity is a floating point number. > So, first optimization at the superficial code form level is to get > rid of this symbolic computation. That does not speed up the original computation. > Instead of checking whethere his “disc = Sqrt[b^2 - v.v + r^2]” has > imaginary part, one simply check whether the argument to sqrt is > negative. That does not speed up the original computation. > after getting rid of the symbolic computation, i made the RaySphere > function to be a Compiled function. 
That should improve performance, but the Mathematica code remains well over five orders of magnitude slower than OCaml, Haskell, Scheme, C, C++, Fortran, Java and even Lisp!

> Besides the above basic things, there are several aspects that his
> code can improve in speed. For example, he used pattern matching to do
> core loops.
> e.g. Intersect[o_, d_][{lambda_, n_}, Bound[c_, r_, s_]]
> any Mathematica expert knows that this is something you don't want to
> do if it is used in a core loop. Instead of pattern matching, one can
> change the form to Function and it'll speed up.

Your code does not implement this change.

> Also, he used “Block”, which is designed for local variables and the
> scope is dynamic scope. However the local vars used in this are local
> constants. A proper code would use “With” instead. (in lisp, this is
> various let, let*. Lispers here can imagine how lousy the code is
> now.)

Earlier, you said that "Module" should be used. Now you say "With". Which is it and why? Your code does not implement this change either.

> Here's a improved code. The timing of this code is about 0.2 of the
> original.
> ...
> Timing[Export["image.pgm", Graphics@Raster@Main[2, 100, 4.]]]

You have only observed a speedup because you have drastically simplified the scene being rendered. Specifically, the scene I gave contained over 80,000 spheres but you are benchmarking with only 5 spheres and half of the image is blank! Using nine levels of spheres as I requested originally, your version is not measurably faster at all. Perhaps you should give a refund?

Dr Jon D Harrop, Flying Frog Consultancy Ltd.
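The cost of exact arithmetic that the quoted text describes is easy to reproduce outside Mathematica. As an illustration only (the workload below is an arbitrary 1/k^2 sum, not the ray tracer from the thread), Python's exact Fraction type shows the same effect against hardware floats:

```python
from fractions import Fraction

def inverse_square_sum(make_number, n):
    """Sum 1/k^2 for k = 1..n using the given numeric constructor."""
    one = make_number(1)
    return sum(one / (make_number(k) * make_number(k)) for k in range(1, n + 1))

exact = inverse_square_sum(Fraction, 500)  # exact rational arithmetic
approx = inverse_square_sum(float, 500)    # hardware floating point

# The two agree to roughly 13 decimal places, but the exact result is one
# enormous reduced fraction: every Fraction operation pays for gcd's on
# ever-growing integers, which is where the slowdown lives.
```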
Alphabet City, New York, NY New York, NY 10014 Professional Math, SAT, and GRE Tutor ...As a student, I've scored extremely well on my SAT (Math - 800) and middle and high school subjects (A student). I truly believe I have the qualifications to help you or your child with math and other subjects mentioned below: Algebra 1 , Algebra 2, Geometry, Prealgebra,... Offering 10+ subjects including algebra 1
Some people prefer interval training because
A. they can exercise at a steady, but slow, pace for longer periods of time.
B. the intensity of the exercise doesn't increase, only the time does.
C. they've found that this type of training is best for beginners.
D. it's performed with speed and may not cause boredom.

The answer is letter D: "it's performed with speed and may not cause boredom."
Topic: How to save matrices in a file, effective way
Replies: 4    Last Post: Nov 28, 2012 10:15 AM

Posted: Nov 24, 2012 6:57 PM by Edwardo (Posts: 7, Registered: 11/24/12)

Hi, I am doing a program that multiplies a lot of matrices. All the matrices are stored in a cell array, where:

cell(index,1) is the matrix on the left
cell(index,2) is the matrix on the right
cell(index,3) is the result

I want to save a lot of matrices in a file in this format (example):

cell(index,1) cell(index,2) = cell(index,3)
cell(index,1) cell(index,2) = cell(index,3)
cell(index,1) cell(index,2) = cell(index,3)

and so on. I know that the function dlmwrite exists, but I don't know exactly how to use it to get this format.

Date     Subject                                            Author
11/24/12 How to save matrices in a file, effective way      Edwardo
11/25/12 Re: How to save matrices in a file, effective way  Bruno Luong
11/27/12 Re: How to save matrices in a file, effective way  Steven Lord
11/27/12 Re: How to save matrices in a file, effective way  Edwardo
11/28/12 Re: How to save matrices in a file, effective way  Steven Lord
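The requested layout (each row of the left matrix, the matching row of the right matrix, then "=" and the row of the result) is mostly a formatting loop. The thread itself is about MATLAB, where fprintf or dlmwrite would do this; purely for illustration, here is the same layout produced in Python:

```python
def write_products(path, triples):
    """Write each (A, B, C) triple as side-by-side rows: 'A-row  B-row = C-row'."""
    with open(path, "w") as f:
        for a, b, c in triples:
            for row_a, row_b, row_c in zip(a, b, c):
                f.write("  ".join(map(str, row_a)) + "   "
                        + "  ".join(map(str, row_b)) + "   =   "
                        + "  ".join(map(str, row_c)) + "\n")
            f.write("\n")  # blank line between products
```

Called as write_products("out.txt", [(A, B, C), ...]) with nested lists (or anything iterable by rows), it emits one "A B = C" block per product, matching the format sketched in the question.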
Electron. J. Diff. Eqns., Vol. 2000(2000), No. 46, pp. 1-30. Semilinear parabolic problems on manifolds and applications to the non-compact Yamabe problem Qi S. Zhang Abstract: We show that the well-known non-compact Yamabe equation (of prescribing constant positive scalar curvature) on a manifold with non-negative Ricci curvature and positive scalar curvature behaving like Submitted March 8, 1999. Published June 15, 2000. Math Subject Classifications: 35K55, 58J35. Key Words: semilinear parabolic equations, critical exponents, noncompact Yamabe problem. Show me the PDF file (251K), TEX file, and other files for this article. Qi S. Zhang Department of Mathematics, University of Memphis Memphis, TN 38152, USA e-mail: qizhang@memphis.edu Return to the EJDE web page
What is the difference? Crosshair V Formula » Crosshair V Formula-Z

GrandAndreasK said: What is the difference between the Crosshair V Formula and Crosshair V Formula-Z, and where can I buy the Formula-Z? Oh, and will there be a Crosshair V Extreme, or has it been replaced by the Formula-Z? And should I buy an AMD FX-8150 now, or wait for the FX-8350 if it will be released this year?

For AMD's existing FX 'Bulldozer' and future 'Piledriver' architecture, the feature revisions are:

- Improved memory performance through higher attainable frequencies, even when using 4 DIMMs. The term 2400MHz has been bandied about…
- The improved Extreme Engine "Digi+II", which allows for improved control over the CPU and memory for precision overclocking.
- Improved high-definition sound.
- Improved Ethernet setup features with gamers in mind.
- Windows 8 ready, with a Directkey button/DRCT header, TPM header and FastBoot switch.
- A new and improved UEFI BIOS containing 64MB of ROM; it is Windows Fastboot enabled, which, if I've read correctly, can launch you into the OS in around 2 seconds on the latest SSDs.
- OEM activation within BIOS.

I have no idea if there will be a Crosshair V Extreme, and neither does anyone outside of ASUS, as far as I know... Your decision regarding purchasing either the FX-8150 or 8350 is purely that, a personal choice. In my opinion the 8150 is plenty good enough; plus, when the 8350 is released, you can upgrade if you choose. The cost issue is another matter, because when the 8350 is released the price on the 8150 will surely drop; although, at the $160 - $200 price range, it's pretty affordable now. So...
Long story short, it's a decision you'll have to make on your own, based upon what you consider important to you alone. Good luck!

August 11, 2012 11:15:34 PM

GrandAndreasK said: Thanks for the details, because all I can find on Newegg is the regular Formula?

When ASUS releases it to the public, I suspect you'll find a large number of online retailers that will be happy to part you from your hard-earned money. Until that time, you'll just have to wait like the rest of us...
Define the region \(\Sigma\subset\mathbb{R^2}\) as a system of inequalities for \((x, y)\). \(\Sigma\) is the intersection between the two regions defined by the ellipses (1) and (2) - see below for equations. Assume that the ellipses intersect.

\[\frac{y^2}{y_1^2}+\frac{x^2}{x_1^2}=1\]
\[\frac{y^2}{y_2^2}+\frac{(x-x_0)^2}{x_2^2}=1\]

As stated in the assumptions, \(x_0>x_1-x_2\), and the resulting inequalities may be expressed in terms of all subscripted constants.

It's also okay to express the inequalities for x in terms of y, seeing as this is a type II region.

@lgbasallote @Hero @jim_thompson5910 @dumbcow Anybody mind helping please? No one's answered for 2 hours.

@hero is here to save the day

count me out

x_0 is a point, x_1 - x_2 is a distance

@experimentX x_0, x_1, x_2, y_1, y_2 are all scalars

okay, I guess I'm going to Math.SE again!

I would take each ellipse equation and solve for y^2, then set them equal in order to get it in terms of x, rearrange terms, and finally solve for x_0 (it gets messy with so many constants). Then substitute that expression into the inequality x_0 > x_1 - x_2, and rearrange terms again to solve for x using the quadratic formula.

Yes, that would get us the two points of intersection, I agree. I can do that. But that still doesn't define the area.

What do you mean? You have to find the area of the region.

I thought you had to define the region with inequalities... a < x < b and c < y < d, is that right?

Yes. But say I have two ellipses: [sketch] I admit that we can find a, b, and c. But how does that help us find the region in the middle?

Why not use piecewise functions?

[sketch] The region in the middle is defined as e < x < f, b < y < c.

That's a rectangle.

Hmm... looks like I am no help :|

It's okay. As soon as I put the question up on Math.SE I'll put up a link here if you are interested.

Oh, I have been approaching it wrong. If you define each ellipse as a function of y, say f(y) and g(y), then using your picture: [sketch] f(y) < x < g(y), c < y < b.

I already thought about it that way. Unfortunately, it wouldn't work: [sketch] Unless you want to break that up into a type II region and two type I's, that's not gonna help.

My suggestion would be to try polar, although that has not given me any success so far.

We can assume the region is symmetric about the x-axis, correct?

We know that for a fact, even.

Hmm, ok, I'm done :) Last thought then: you have to break it up into different cases of possible regions.

Yes, that's what I said you could do. But we'll see what SE says. I'll post the link soon.

Good job @dumbcow

You sir, are the best
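The elimination sketched in the thread (solve ellipse (1) for y^2, substitute into (2), and solve the resulting quadratic in x) can be carried out mechanically. A Python sketch with the ellipses parameterized exactly as in the question:

```python
import math

def ellipse_intersections(x1, y1, x2, y2, x0):
    """x-coordinates where y^2/y1^2 + x^2/x1^2 = 1 can meet
    y^2/y2^2 + (x - x0)^2/x2^2 = 1.

    Substituting y^2 = y1^2 (1 - x^2/x1^2) from the first ellipse into the
    second leaves a single quadratic A*x^2 + B*x + C = 0 in x."""
    A = 1 / x2 ** 2 - y1 ** 2 / (y2 ** 2 * x1 ** 2)
    B = -2 * x0 / x2 ** 2
    C = y1 ** 2 / y2 ** 2 + x0 ** 2 / x2 ** 2 - 1
    if abs(A) < 1e-12:  # same aspect ratio: the quadratic collapses to linear
        return [-C / B]
    disc = B * B - 4 * A * C
    if disc < 0:
        return []       # the eliminated equation has no real solutions
    r = math.sqrt(disc)
    return sorted([(-B - r) / (2 * A), (-B + r) / (2 * A)])
```

A root is a genuine crossing only when the corresponding y^2 = y1^2 (1 - x^2/x1^2) is non-negative; with the crossings in hand, the region can then be written piecewise, which is exactly the "different cases" conclusion the thread reaches.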
Large Standard Deviations

Date: 12/24/97 at 02:58:36
From: Aaron Peet
Subject: Large Standard Deviations

Is there a way to calculate the percent values derived from the z-table without using the table, i.e. a formula? If not, can you tell me the percent area under 0 and 10 standard deviations? I'm guessing it's pretty close to 100. If there is a graphical formula for the curve I could use calculus to find the area, but I can't figure out a formula. I am specifically trying to disprove (in terms of probability) that a person could have an IQ of 300. This is about 13 1/3 standard deviations, which leaves very little area beyond it 'for someone to exist there'. Thank you.

Date: 01/08/98 at 09:28:43
From: Doctor Bill
Subject: Re: Large Standard Deviations

The "z-score" for a point in a set of data is the number of standard deviations away from the mean the point is. To find the z-score of any point you must subtract the mean from that point and then divide by the standard deviation of the data. So, if x is a point in a set of data that has a mean of X and a standard deviation of S, then the z-score of x is:

z = (x-X)/S

In general there is something called the "Empirical Rule" which says that about 68% of all the data points will be within 1 standard deviation of the mean, about 95% will be within 2 standard deviations of the mean, and about 99.7% of the data points will be within 3 standard deviations of the mean. So you can see, it is very unlikely that any data point lies to the right of 3 standard deviations, let alone 13.

To answer your question, the function for the normal curve is f(x) = 1/(sqrt(2*pi))*e^(-.5*x^2), where x is the z-score for any data point. This function will tell you the probability density (how high above the x-axis the curve is) at a given z-score. As you can see, if you plug in x = 13.33 there is essentially no area between the curve and the x-axis. Therefore, it is HIGHLY unlikely that any person has an IQ of 300.
Remember, this does NOT prove that it is impossible, but statistically we can say that if there is anyone with an IQ of 300, he or she is VERY VERY special! To find the probability that anyone has an IQ of 300 or more, you would do the calculation 1 - F(z), where z is the z-score of 300 and F(z) is the integral from -infinity to z of the function f(x) given above.

-Doctor Bill, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
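The quantity 1 - F(z) described above can be computed without a z-table via the complementary error function, since for the standard normal F(z) = erfc(-z/sqrt(2))/2. A Python sketch for the IQ question (assuming the usual IQ convention of mean 100 and standard deviation 15, which is what makes 300 come out to about 13 1/3 standard deviations):

```python
import math

def z_score(x, mean, sd):
    """Number of standard deviations x lies from the mean."""
    return (x - mean) / sd

def upper_tail(z):
    """P(Z > z) for the standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

z = z_score(300, 100, 15)  # about 13.33
p = upper_tail(z)          # on the order of 1e-40: essentially zero
```

As a sanity check, 1 - 2*upper_tail(1) recovers the Empirical Rule's 68%.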
Parameter Ray Atlas This page illuminates various interesting points of reference on the M-set, giving the corresponding ray arc angle. This is a visual map; the corresponding tables of angles and formulas are on the Tables page. Visually, the stuff here is pretty boring. Don't expect any hot graphics. The interesting part on this page is the relationship between arc-angles and features. The Parameter Rays page describes how the rays are computed and numbered, while a mathematical intro is given on the Douady-Hubbard Potential page. Bud Vases One of the main results of Douady-Hubbard theory is that two rays always pinch off the base of buds. Here are some buds coming off the main cardioid. The 1/15 and 2/15 rays (in red) surrounding the m(1/4) bud. The 1/31 and 2/31 rays (in red) surrounding the m(1/5) bud. The 17/127 and 18/127 rays (in red) surrounding the m(2/7) bud. Note that the 2/7'ths bud is in the middle (via Farey addition) between 1/3 and 1/4. The 273/2047 and 274/2047 rays (in red) surrounding the m(3/11) bud. Note that the 3/11 bud is in the middle (via Farey addition) between 2/7 and 1/4. Here's a picture of the 4/15 bud, with the rays [4369/32767, 4370/32767] pointing at its base highlighted in red. It's quite small. Recall 4/15'ths position on the Farey tree: [1/3 2/7 3/11 4/15 1/4] i.e. to the right(4/15) to the right(3/11) of the one in the middle(2/7) between 1/3 and 1/4. Tenna Tips In general, rays that lie at multiples of inverse powers of two point at tips of antenna. Rays at inverse powers of two touch the tips of the prominent antennas of the m(1/3) bud. The 1/4 ray (in red) landing at the tip of the antenna of the m(1/3) bud. The 1/8 ray (in red) landing at the tip of the antenna of the m(1/4) bud. The 1/16 ray (in red) landing at the tip of the antenna of the m(1/5) bud. A closeup of the 1/4 ray touching the tip of the antenna of the m(1/3) bud. The antennas mounted on top of a bud indicate its cycle count.
A three-pronged antenna is mounted on the n=3 bud, a four-pronged antenna is mounted on the n=4 bud, and so on. The antennas can be split into their component rays, as shown below. For bud n, the splitting rays are located at (2^n+2^m-1) / 2^n(2^n-1) for m=1,2,...,n. A theoretical treatment is given by Devaney, Moreno-Rocha, Geometry of the Antennas in the Mandelbrot Set (or mirror here) The bifurcation of the antenna at [9, 11, 15] / 56 on the m(1/3) bud. The trifurcation of the antenna at [17, 19, 23, 31] / 240 on the m(1/4) bud. Note the artifacts; these are discussed on the ray page. How can we be sure of the above fractions? Here, we split the 17/240 and 31/240 rays down the middle. We can see clearly how the red edge points at the quadrifurcation. The quadri-furcation of the antenna at [33, 35, 39, 47, 63] / 992 on the m(1/5) bud. Note the artifacts; these are discussed on the ray page. The mono-furcation of the antenna at the end of the m(1/2) bud isn't at all obvious, because, well, there's no obvious visual signpost. However, the formula tells us it's at [5,7]/12. We've colored here so that the sharp red edge points exactly at it. The buds that sit on top of buds: buddies, or friends. These are always split off by the two rays (2^n+2)/((2^n-1)(2^n+1)) and (2^(n+1)+1)/((2^n-1)(2^n+1)). The periods of the bulbs are 2n, and, of course, the first ray angle serves as the generator of the period-doubling cycle. (And, quite prettily, the generator doubles n times before wrapping around, doubles n-1 more times to wrap, then 1 to get back to the start. So it's indeed 2n.) The primary bud due west of the m(1/2) bud is separated by [2,3]/5 and thus has a period q=4 with cycle (2,4,3,1).
The primary buddy of m(1/3) is split off by [10,17]/63, and has a period q=6 with cycle (10,20,40,17,34,5) The primary buddy of m(1/4) is split off by [18,33]/255, and has a period q=8 with cycle (18,36,72,144,33,66,132,9) The primary buddy of m(1/5) is split off by [34,65]/1023, and has a period q=10 with cycle (34,68,136,272,544,65,130,260,520,17) Double Troubling Due west of the main cardioid are a set of smaller and smaller bulbs. If we count these, assigning n=1 to the period-2 bulb off the main cardioid, then we have that the period of each bulb is 2^n. The smaller of the two rays that pinch these off are given by s^-[n] = a[n] / (2^(2^(n-1)) + 1) where a[n] = a[n-1] * (2^(2^(n-2)) - 1) + 1 and a[1]=1. The other ray is of course s^+[n] = 1-s^-[n]. In the limit, this appears to converge to 0.412454033640107 which is the Thue Morse codeword. It's not obvious to me why it appears here, but I haven't thought about it either. (See, for example, Thue Morse L-systems for its relevance to fractals. See also A Fresh Look at Number for the occurrence of complementary (gray-code) Thue-Morse in the symbolic dynamics of the logistic equation.) The codeword is the Farey number of 0.418979789366342 but this number seems to be unknown. The primary bud due west of the m(1/2) bud is separated by [2,3]/5 and thus has a period q=4 with cycle (2,4,3,1). The bud due west of the bud due west is rayed by [7,10]/17 and thus has a period q=8 with cycle (7,14,11,5,10,3,6,12). The bud due west ... is rayed by [106,151]/257 and thus has a period q=16 with cycle (106,212,167,77,154,51,102,204,151,45,90,180,103,206,155,53) The bud due west ... is rayed by [27031,38506]/65537 Mini Me The largest mini-M-set on the real axis is at c=(re,im)=(-1.75,0). The main M-set can be mapped into it, but not onto it, using a simple binary-codeword expansion algorithm, as defined on the Tables page. For any feature on the main bulb, the algorithm provides the ray angle for the corresponding feature on the mini-bulb.
The two rays entering its tail are 3/7 and 4/7. It has period q=3. The two rays splitting off the primary bud are 4/9 and 5/9. The bud has period q=6, that is, twice the period of the mini-cardioid. In general, the rays that go into the tails of mini-me's seem to be of the form p/(2^n-1) for some integers p, n. The following pictures show some of these. Unfortunately, they are quite messy, because the ray algorithm breaks down in this area. As mentioned elsewhere, there doesn't seem to be any way of fixing this algorithm, and I don't know of others. This one shows the rays 3/31 and 4/31. The 3/31 ray is the red-blue discontinuity coming in from the right side, and heading straight into the tail of the bud. The 4/31 ray comes in from the left, gets interfered with by the busted algorithm but if you look at it just right, you'll see that it goes into the tail as well. This mini-me is the largest one on the longest antenna spoke of the 1/4 bud. The [5,6]/31 rays from a distance. The same view, using ordinary coloration. So which mini-me was that? Why, the largest one on the smaller antenna of the 1/3 bud. There seems to be a pattern, but its hard to describe & intuit. For example, the [7,8]/31 rays enter the tail of the largest mini-me on the longest antenna of the largest mini-me of the 1/3 bud. The [9,10]/31 rays pinch off the 2/5 bulb, while the [11,12]/31 rays go to the tail of the mini-off-the-mini-off-the 1/2 bud. Copyright (c) 2000 Linas Vepstas. All Rights Reserved. Linas Vepstas December 2000 Return to Linas' Art Gallery
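The numeric claims above are easy to check mechanically (an illustrative sketch, not part of the original page): a ray angle p/q advances by doubling the numerator mod q, so the period of a bud is the length of that cycle, and the antenna-splitting formula can be evaluated directly.

```python
def doubling_cycle(p, q):
    """Orbit of the angle p/q under doubling mod 1, listed as numerators mod q."""
    cycle, n = [], p
    while True:
        cycle.append(n)
        n = (2 * n) % q
        if n == p:
            return cycle

def splitting_rays(n):
    """Antenna-splitting numerators (2^n + 2^m - 1), m = 1..n, over 2^n*(2^n - 1)."""
    return [2**n + 2**m - 1 for m in range(1, n + 1)], 2**n * (2**n - 1)

# The primary buddy of m(1/3) is split off by the 10/63 ray: period 6.
assert doubling_cycle(10, 63) == [10, 20, 40, 17, 34, 5]
# The antenna of the m(1/3) bud splits at [9, 11, 15] / 56.
assert splitting_rays(3) == ([9, 11, 15], 56)
```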
Chapter 29. The Solution of the Problem of Gravitation on the Basis of the General Principle of Relativity.
Einstein, Albert. 1920. Relativity: The Special and General Theory

IF the reader has followed all our previous considerations, he will have no further difficulty in understanding the methods leading to the solution of the problem of gravitation.

We start off from a consideration of a Galileian domain, i.e. a domain in which there is no gravitational field relative to the Galileian reference-body K. The behaviour of measuring-rods and clocks with reference to K is known from the special theory of relativity, likewise the behaviour of isolated material points; the latter move uniformly and in straight lines.

Now let us refer this domain to a random Gauss co-ordinate system or to a mollusk as reference-body K'. Then with respect to K' there is a gravitational field G (of a particular kind). We learn the behaviour of measuring-rods and clocks and also of freely-moving material points with reference to K' simply by mathematical transformation. We interpret this behaviour as the behaviour of measuring-rods, clocks and material points under the influence of the gravitational field G. Hereupon we introduce a hypothesis: that the influence of the gravitational field on measuring-rods, clocks and freely-moving material points continues to take place according to the same laws, even in the case when the prevailing gravitational field is not derivable from the Galileian special case, simply by means of a transformation of co-ordinates.

The next step is to investigate the space-time behaviour of the gravitational field G, which was derived from the Galileian special case simply by transformation of the co-ordinates. This behaviour is formulated in a law, which is always valid, no matter how the reference-body (mollusk) used in the description may be chosen.
This law is not yet the general law of the gravitational field, since the gravitational field under consideration is of a special kind. In order to find out the general law-of-field of gravitation we still require to obtain a generalisation of the law as found above. This can be obtained without caprice, however, by taking into consideration the following demands:

a. The required generalisation must likewise satisfy the general postulate of relativity.
b. If there is any matter in the domain under consideration, only its inertial mass, and thus according to Section XV only its energy is of importance for its effect in exciting a field.
c. Gravitational field and matter together must satisfy the law of the conservation of energy (and of impulse).

Finally, the general principle of relativity permits us to determine the influence of the gravitational field on the course of all those processes which take place according to known laws when a gravitational field is absent, i.e. which have already been fitted into the frame of the special theory of relativity. In this connection we proceed in principle according to the method which has already been explained for measuring-rods, clocks and freely-moving material points.

The theory of gravitation derived in this way from the general postulate of relativity excels not only in its beauty; nor in removing the defect attaching to classical mechanics which was brought to light in Section XXI; nor in interpreting the empirical law of the equality of inertial and gravitational mass; but it has also already explained a result of observation in astronomy, against which classical mechanics is powerless.

If we confine the application of the theory to the case where the gravitational fields can be regarded as being weak, and in which all masses move with respect to the co-ordinate system with velocities which are small compared with the velocity of light, we then obtain as a first approximation the Newtonian theory.
Thus the latter theory is obtained here without any particular assumption, whereas Newton had to introduce the hypothesis that the force of attraction between mutually attracting material points is inversely proportional to the square of the distance between them. If we increase the accuracy of the calculation, deviations from the theory of Newton make their appearance, practically all of which must nevertheless escape the test of observation owing to their smallness. We must draw attention here to one of these deviations.

According to Newton's theory, a planet moves round the sun in an ellipse, which would permanently maintain its position with respect to the fixed stars, if we could disregard the motion of the fixed stars themselves and the action of the other planets under consideration. Thus, if we correct the observed motion of the planets for these two influences, and if Newton's theory be strictly correct, we ought to obtain for the orbit of the planet an ellipse, which is fixed with reference to the fixed stars. This deduction, which can be tested with great accuracy, has been confirmed for all the planets save one, with the precision that is capable of being obtained by the delicacy of observation attainable at the present time. The sole exception is Mercury, the planet which lies nearest the sun. Since the time of Leverrier, it has been known that the ellipse corresponding to the orbit of Mercury, after it has been corrected for the influences mentioned above, is not stationary with respect to the fixed stars, but that it rotates exceedingly slowly in the plane of the orbit and in the sense of the orbital motion. The value obtained for this rotary movement of the orbital ellipse was 43 seconds of arc per century, an amount ensured to be correct to within a few seconds of arc. This effect can be explained by means of classical mechanics only on the assumption of hypotheses which have little probability, and which were devised solely for this purpose.
On the basis of the general theory of relativity, it is found that the ellipse of every planet round the sun must necessarily rotate in the manner indicated above; that for all the planets, with the exception of Mercury, this rotation is too small to be detected with the delicacy of observation possible at the present time; but that in the case of Mercury it must amount to 43 seconds of arc per century, a result which is strictly in agreement with observation.

Apart from this one, it has hitherto been possible to make only two deductions from the theory which admit of being tested by observation, to wit, the curvature of light rays by the gravitational field of the sun, and a displacement of the spectral lines of light reaching us from large stars, as compared with the corresponding lines for light produced in an analogous manner terrestrially (i.e. by the same kind of molecule). I do not doubt that these deductions from the theory will be confirmed also.
st: Re: How to solve a problem with globals

From: "Martin Weiss" <martin.weiss1@gmx.de>
To: <statalist@hsphsun2.harvard.edu>
Subject: st: Re: How to solve a problem with globals
Date: Tue, 24 Feb 2009 22:02:08 +0100

Very normal: Stata does not know the -global- "$f_pA_`", so it evaluates to nothing. All that is left is `i', which evaluates to 1.

----- Original Message -----
From: "Tiago V. Pereira" <tiago.pereira@incor.usp.br>
To: <statalist@hsphsun2.harvard.edu>
Sent: Tuesday, February 24, 2009 7:50 PM
Subject: st: How to solve a problem with globals

Dear statalisters,

Firstly, I would like to thank several statalisters for the splendid help in my previous questions. Now, I am having the following problem:

. global f_pA_1 = 0.5
. local i = 1
. dis $f_pA_`i'
. dis $f_pA_1

Is that normal?

All the best,

I usually need to compute a value X that satisfies a specific condition. A typical example is shown below. But this is only a single example, and ordinarily I have similar, but more complex aims. Hence, I would like to know your expert opinion on how one can make these kinds of codes faster. Perhaps only Mata helps? In this example, I want to compute the value of tau^2 that forces the lower limit of my confidence interval to be 0.
scalar lower_limit = 999999999
local tau2 = 0
while lower_limit > 0 {
    local tau2 = `tau2' + 0.0001
    cap drop T_i V_i W_i WT
    gen T_i = ln(_ES)
    gen V_i = _selogES^2
    gen W_i = 1/(V_i + `tau2')
    gen WT = W_i*T_i
    qui summ WT
    scalar sum_WT = r(sum)
    qui summ W_i
    scalar sum_W_i = r(sum)
    scalar summary_random = sum_WT/sum_W_i
    scalar lower_limit = summary_random - (1.96/sqrt(sum_W_i))
}

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
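For reference, the usual fix for the display problem at the top of the thread (not something posted in this exchange, just the standard Stata idiom) is to wrap the global's name in braces, so the local `i' is expanded first and the fully formed name f_pA_1 is then looked up:

```stata
global f_pA_1 = 0.5
local i = 1
display ${f_pA_`i'}
```

With the braces, Stata expands the inner local before resolving the global, so the last line displays the value stored in f_pA_1.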
Chapter 5 - Derivatives

Upon completion of this chapter, you should be able to do the following:
1. Compute the derivative of a constant.
2. Compute the derivative of a variable raised to a power.
3. Compute the derivative of the sum and product of two or more functions and the quotient of two functions.
4. Compute the derivative of a function raised to a power, in radical form, and by using the chain rule.
5. Compute the derivative of an inverse function, an implicit function, a trigonometric function, and a natural logarithmic function.
6. Compute the derivative of a constant raised to a variable power.

In the previous chapter on limits, we used the delta process to find the limit of a function as Δx approached zero. We called the result of this tedious and, in some cases, lengthy process the derivative. In this chapter we will examine some rules used to find the derivative of a function without using the delta process.

To find how y changes as x changes, we take the limit of Δy/Δx as Δx approaches zero, which is called the derivative of y with respect to x; we use the symbol dy/dx to indicate the derivative and write

    dy/dx = lim (Δx→0) Δy/Δx

In this section we will learn a number of rules that will enable us to easily obtain the derivative of many algebraic functions. In the derivation of these rules, which will be called theorems, we will assume that dy/dx exists and is finite.

The method we will use to find the derivative of a constant is similar to the delta process used in the previous chapter but includes an analytical proof. A diagram is used to give a geometrical meaning of the function.

Theorem 1. The derivative of a constant is zero. Expressed as a formula, this may be written as

    dy/dx = 0, where y = c

PROOF: In figure 5-1, the graph of

    y = c

Figure 5-1.-Graph of y = c, where c is a constant.

where c is a constant, the value of y is the same for all values of x, and any change in x (that is, Δx) does not affect y; then Δy = 0. Another way of stating this is that when x is equal to x1 and when x is equal to x1 + Δx, y has the same value.
Therefore, Δy = 0, so that

    dy/dx = lim (Δx→0) Δy/Δx = 0

The equation y = c represents a straight line parallel to the X axis. The slope of this line will be zero for all values of x. Therefore, the derivative is zero for all values of x.

EXAMPLE. Find the derivative of the function
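A quick numerical illustration of Theorem 1 (added here as a sketch; the difference quotient mirrors the delta process of the previous chapter): for a constant function the quotient Δy/Δx is zero for every Δx, so its limit is zero as well.

```python
def difference_quotient(f, x, dx=1e-6):
    """The quotient (f(x + dx) - f(x)) / dx from the delta process."""
    return (f(x + dx) - f(x)) / dx

c = 7.0
constant = lambda x: c
# y = c: the quotient is exactly zero at every point, for every dx
assert difference_quotient(constant, 3.0) == 0.0
assert difference_quotient(constant, -10.0, dx=0.5) == 0.0
```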
Transformation to Standard Problems

A common approach for the numerical solution of the large sparse generalized eigenvalue problem (8.1) is to transform the problem first to an equivalent standard eigenvalue problem and then to apply an appropriate iterative method as described in Chapter 7. In this section, we will discuss three approaches for the transformation to a standard eigenproblem. The first approach (invert

Susan Blackford 2000-11-20
Math Forum Discussions

Topic: pseudo random image with values -1 and 1, zero mean and Gaussian distribution
Replies: 13   Last Post: May 7, 2012 7:00 AM

sudesh
Re: pseudo random image with values -1 and 1, zero mean and Gaussian distribution
Posted: May 7, 2012 7:00 AM

"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <jo87f3$jm7$1@newscl01ah.mathworks.com>...
> "sudesh" wrote in message <jo879g$irp$1@newscl01ah.mathworks.com>...
> > I can e-mail you the screenshot of the part where its written/the whole paper.
> No. I don't reply by email.
> Bruno

Thanks again for your valuable help! It's good to learn some new things today.
A fast and high-order accurate surface perturbation method for nanoplasmonic simulations: basic concepts, analytic continuation and applications Published in JOSA A, Vol. 30 Issue 11, pp.2175-2187 (2013) by Fernando Reitich, Timothy W. Johnson, Sang-Hyun Oh, and Gary Meyer Source article Abstract | Full Text: XHTML | Full Text: PDF Spotlight summary: Suppose we want to predict the number of trees in a certain forest year after year. As a first approximation, we might assume that it’s a forest like any other forest, with no defining characteristics. But maybe this forest has a pond in it, which we expect to affect tree growth in its local environment. We could simply estimate how such a pond affects tree growth locally, and then add that to our estimate for the rest of the forest. The pond, in this case, causes a perturbation on our calculation. We could keep adding perturbations, like boulders, or a gorge, or a large field, until we had an answer that was as precise as we needed it to be. It would look like "total trees = trees for a typical forest + trees added by pond - trees subtracted by presence of field + …" Sometimes, a change is too much for this so-called perturbation theory to handle. Instead of introducing a pond, suppose we introduce a huge lake. We expect such change to heavily affect tree growth on a forest-wide scale. To solve the problem now, we need to introduce new elements into the calculation, and our understanding of how a featureless forest works becomes less important. In this paper, Reitich et al. attack a problem in the field of plasmonics, for which neither the standard "forest" model nor the standard perturbative model work. They look at metallic films covered in periodic ridges, called gratings. When such films are illuminated at certain angles and wavelengths of light, it is known that some of the light drives plasmons, or electronic waves, in the metal. 
It is important to know exactly which angles and wavelengths will couple to plasmonic modes, and how efficiently, so that devices can be constructed. But while it is known how to solve the problem for small perturbations on a smooth metal film, the problem becomes much harder if the ridges start to get deep, or have irregular shapes. This paper solves the problem by using a subtle insight: If we make a tiny change to the metal film, it should cause a tiny change in the optical characteristics of the film. It turns out that this is equivalent to saying that if we graph the efficiency of light-plasmon coupling vs., say, the depth of the ridges, we will get a function that is smooth no matter how many derivatives we take of it. Such a function is called analytic, and it has a special property that if we know any piece of it, we know the whole function. Reitich et al. can then calculate the plasmonic characteristics of a film with small perturbations, and then use that knowledge to extend the answer out to what it would be for deep ridges, where perturbation theory doesn’t work. They call the method "high order perturbation of surfaces" (HOPS), and show that it correctly predicts all of the standard optical properties of metallic gratings. They also claim that it’s much faster than using Maxwell’s equations to calculate an exact solution, which would be like cataloging every tree, stone, and drop of water in the forest to make predictions about tree growth. This work extends the domain of viable plasmonic devices a bit farther from flat surfaces, and lets us better tailor the device to the application. --Brad Deutsch Technical Division: Light–Matter Interactions ToC Category: Optics at Surfaces OCIS Codes: (240.6680) Optics at surfaces : Surface plasmons (050.1755) Diffraction and gratings : Computational electromagnetic methods Posted on October 29, 2013
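The analytic-continuation idea behind the paper can be illustrated with a toy example (purely illustrative; none of the names below come from the paper, and this is a generic demonstration of how a rational Padé approximant can extend a Taylor series beyond its radius of convergence, not the HOPS algorithm itself). A truncated Taylor series for 1/(1 - x) is useless at x = 3, but a [1/1] Padé approximant built from the same first coefficients recovers the function there:

```python
# Taylor coefficients of f(x) = 1/(1 - x) about x = 0 are all 1.
coeffs = [1.0] * 8

def taylor(x):
    """Truncated Taylor series: diverges badly outside |x| < 1."""
    return sum(c * x**k for k, c in enumerate(coeffs))

def pade_1_1(x):
    """[1/1] Pade approximant (a0 + a1*x)/(1 + b1*x) matched to c0, c1, c2."""
    c0, c1, c2 = coeffs[:3]
    b1 = -c2 / c1
    a1 = c1 + c0 * b1
    return (c0 + a1 * x) / (1 + b1 * x)

x = 3.0                                   # far outside the radius of convergence
print(taylor(x), pade_1_1(x), 1 / (1 - x))
```

Here the Padé form happens to reproduce 1/(1 - x) exactly; in general a rational approximant only extends the series' usable range, which is the spirit of going from shallow perturbations to deep gratings.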
Summary: ON THE CARTAN MAP FOR CROSSED PRODUCTS AND

Abstract. We study certain aspects of the algebraic K-theory of Hopf-Galois extensions. We show that the Cartan map from K-theory to G-theory of such an extension is a rational isomorphism, provided the ring of coinvariants is regular, the Hopf algebra is finite dimensional and its Cartan map is injective in degree zero. This covers the case of a crossed product of a regular ring with a finite group and has an application to the study of Iwasawa modules.

1. Introduction
1.1. The Cartan map. Recall that a ring is said to be right regular if it is right Noetherian and every finitely generated right module has finite projective dimension. So any Noetherian ring of finite global dimension is necessarily regular.
One consequence of Quillen's celebrated Resolution Theorem is that the K-theory and the G-theory of a right regular ring B coincide [5, Corollary 2 to Theorem 3]. More precisely, the Cartan map K_i(B) → G_i(B) is an isomorphism for all i ≥ 0. Now if G is a finite group and A = B # G is a crossed product then A need not be regular, so the Resolution Theorem does not apply. This is evident even in the simplest case when B = k is a field of characteristic p > 0, p divides the order of G and A = kG is the group algebra of G --- in fact, in this case the Cartan map
Memorial, Houston, TX Humble, TX 77396 Professor, researcher, businessman wants to help you excel ...Algebra is not necessarily easy, but it is completely logical. There is nothing you learn early that will be contradicted by later lessons. My approach in working with you on algebra 1 and algebra 2 is first to assess your familiarity and comfort with basic concepts,... Offering 10+ subjects including algebra 1
approximation problem

January 17th 2009, 03:32 AM  #1

A book I'm using makes heavy use of the approximation (X-Y)/Y = ln X - ln Y. (ln = natural log.) Can anybody tell me (a) what the intuition of this is, and (b) what the exact RHS should be? Many thanks in anticipation ...

Are you sure you are not mis-reading something? That's not close at all. For example, if X= 53 and Y= 25, then (X-Y)/Y= 1.12 while ln(X)- ln(Y)= 0.751. Taking X and Y very large, say X= 1000000 and Y= 800000, makes it a bit better: (X-Y)/Y= 0.25 and ln(X)- ln(Y)= 0.22. Is this supposed to be for X and Y very large? Or perhaps your book is requiring that X and Y be very close- that is, that X-Y is small compared to Y?

Sorry, I wasn't very clear. Yes, it's an approximation when X and Y are close. I tried it numerically and when the LHS is 5%, the right is 4.88% - so, as you say, not very close, but rule-of-thumb OK for small differences. In one case (I understand) it is exact, dX/X = d ln X. Since the dx is supposed to be very small indeed relative to X, this should work exactly. Anyway, this approximation is very widely used in economics. But I'm interested in the intuition behind it and what the exact equality would be. I'm guessing it will just be the RHS plus some cross-multiplication term, something small times something small, which can therefore be ignored. Apologies again for the lack of clarity.

If t is small (less than 1) then there is a power series expansion for ln(1+t), namely $\ln(1+t) = t - \tfrac12t^2+\ldots$.
If x–y is small compared to y, then we can put $t = \tfrac{x-y}y$, so that $1+t = \tfrac xy$. That gives $\ln x - \ln y = \ln\bigl(\tfrac xy\bigr) = \tfrac{x-y}y - \tfrac12\bigl(\tfrac{x-y}y\bigr)^2 + \ldots$. So the error in using the approximation $\tfrac{x-y}y \approx \ln x - \ln y$ is at most $\tfrac12\bigl(\tfrac{x-y}y\bigr)^2$.

Re: approximation problem

This is extremely helpful - many thanks!
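A quick numerical check (illustrative, using the 5% case mentioned earlier in the thread) confirms both the approximation and the error bound above:

```python
import math

x, y = 1.05, 1.00
lhs = (x - y) / y                  # 0.05, the book's approximation
rhs = math.log(x) - math.log(y)    # about 0.04879
bound = 0.5 * ((x - y) / y) ** 2   # 0.00125, the error bound from the series
assert abs(lhs - rhs) <= bound
print(lhs, rhs, abs(lhs - rhs), bound)
```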
Redwood Estates Prealgebra Tutor Find a Redwood Estates Prealgebra Tutor I tutored all lower division math classes at the Math Learning Center at Cabrillo Community College for 2 years. I assisted in the selection and training of tutors. I have taught algebra, trigonometry, precalculus, geometry, linear algebra, and business math at various community colleges and a state university for 4 years. 11 Subjects: including prealgebra, calculus, statistics, geometry ...I presently teach physical science as a substitute teacher. I work mainly with fifth to eighth graders. My explanations are clear, concise, and thorough. 37 Subjects: including prealgebra, reading, English, physics ...I hold a Bachelor's degree in Biochemistry from U.C. Berkeley, and a PhD in Immunology from Stanford. I have ten years of practical, hands-on computer programming experience through my work as a scientist. 17 Subjects: including prealgebra, chemistry, statistics, geometry ...I am currently a full-time instructional assistant in the Math Learning Center at Cabrillo College, where I have been tutoring off and on for the past 10 years. I taught algebra 2 at Georgiana Bruce Kirby Preparatory School in Santa Cruz during the 2010-2011 school year. I have extensive experi... 30 Subjects: including prealgebra, Spanish, calculus, physics ...It's not social cue dependent or choice confusing. I don't have special ed. or autism certification, but I do have teaching experience specifically with Asperger's-designated students in my middle school math classes. My multiple subject credential earned Professional Clear status prior to 1983. 16 Subjects: including prealgebra, reading, writing, geometry
{"url":"http://www.purplemath.com/Redwood_Estates_prealgebra_tutors.php","timestamp":"2014-04-18T19:06:42Z","content_type":null,"content_length":"24091","record_id":"<urn:uuid:ff0d4d48-2e86-4bd9-bf3b-269d3e613b7d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
The Purplemath Forums

y = ±2x+7
y = rad(5-x) (all under the rad)
y = 1/(x+62) - 2
y = -3^x + 41 (no parentheses around the -3 purposely)

how many are functions? this is what i think: no, yes, yes, yes. what do you think and why?

Re: Help with functions

Hell0 wrote:
    y = ±2x+7
    y = rad(5-x) (all under the rad)
    y = 1/(x+62) - 2
    y = -3^x + 41 (no parentheses around the -3 purposely)
    how many are functions? this is what i think: no, yes, yes, yes. what do you think and why?

I agree. Why? Because the last three match the definition of "function" but the first one doesn't.
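To make the "why" concrete (a worked aside; the x-value below is chosen purely for illustration), the first rule assigns two outputs to a single input:

```latex
% At x = 1, the rule y = \pm 2x + 7 produces two values:
y = +2(1) + 7 = 9
\qquad\text{and}\qquad
y = -2(1) + 7 = 5,
% so one input has two outputs, and the relation is not a function.
% The other three rules return exactly one y for each admissible x
% (x \le 5 for y = \sqrt{5 - x};\; x \ne -62 for y = 1/(x+62) - 2).
```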
{"url":"http://www.purplemath.com/learning/viewtopic.php?f=9&t=3325","timestamp":"2014-04-16T04:19:19Z","content_type":null,"content_length":"18117","record_id":"<urn:uuid:f0ad48bd-805c-4971-9584-86fc7f2b695a>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
Undergraduate Thesis (Skripsi), Department of Elementary School and Preschool Education - Faculty of Education, State University of Malang (UM) Yiyin Anggarini Anggarini, Yiyin. 2010. Improving The Mathematics Achievement of The Sixth Grade Student in Doing Arithmetical Operation Using Cooperative Model of Jigsaw Type at SDN 1 Sedayugunung Tulungagung. Thesis, Department of Elementary and Preschool Education, Faculty of Education, State University of Malang. Advisors: (I) Drs. Goenawan Roebyanto, S.Pd, M.Pd, (II) Dra. Wasih DS, M.Pd. Keywords: learning, math, cooperative model of jigsaw type Mathematics should be taught to all students, beginning in elementary school, to equip them with the ability to think logically, systematically, critically, and creatively, and with the ability to cooperate. Mathematics lessons aim for students to be able to understand mathematical concepts, solve problems, communicate ideas, and appreciate the use of mathematics in life. Observations of mathematics instruction in the sixth grade of SDN 1 Sedayugunung Tulungagung revealed several problems: the average student score on mixed arithmetic operations was 56.67, the sense of community among students was very weak, and student learning activities did not vary. The purpose of this study was to determine the improvement in sixth-grade students' learning of mixed arithmetic operations through the jigsaw type of cooperative learning at SDN 1 Sedayugunung. The design of this study used Classroom Action Research (CAR). The subjects were the researcher and the 12 students of class VI at SDN 1 Sedayugunung Tulungagung. Data were collected using observation sheets, questionnaires, and tests. Data analysis was performed after the action in each cycle.
Before the action, mathematics lessons on mixed arithmetic operations in the sixth grade of SDN 1 Sedayugunung used conventional, monotonous methods, so student activity and learning outcomes frequently failed to meet the standards set. The results showed that sixth-grade students' achievement in mixed arithmetic operations improved through the jigsaw type of cooperative learning. This improvement is evident from student test scores: the proportion of students reaching the KKM (minimum mastery criterion) was 33% before the action, 67% in cycle I, and 100% in cycle II. Student learning activity was rated fairly good in cycle I and good in cycle II, as observed against the steps of the jigsaw model of cooperative learning, namely: (a) formation of groups, (b) formation of new (expert) groups and discussion, (c) reporting the discussion results back in the original groups, (d) presentation of the discussion results, (e) drawing conclusions, and (f) reflection.
{"url":"http://karya-ilmiah.um.ac.id/index.php/KSDP/article/view/11252","timestamp":"2014-04-20T05:59:46Z","content_type":null,"content_length":"13157","record_id":"<urn:uuid:032e9c28-04e2-4203-9b2c-ca03025ecccf>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] why std() eats much memory in multidimensional case? Emanuele Olivetti emanuele@relativita.... Fri Apr 20 09:57:13 CDT 2007

I'm working with 4D integer matrices and need to compute std() on a given axis, but I experience problems with excessive memory consumption.

import numpy
a = numpy.random.randint(100,size=(50,50,50,200)) # 4D randint matrix
b = a.std(3)

It seems that this code requires 100-200 Mb to allocate 'a' as a matrix of integers, but requires >500Mb more just to compute std(3). Is it possible to compute std(3) on integer matrices without spending so much memory? I manage 4D matrices that are not much bigger than the one in the example and they require >1.2Gb of RAM to compute std(3) alone. Note that nearly all this memory is immediately released after computing std(), so it seems it is used just internally and not to represent/store the result. Unfortunately I haven't all that RAM... Could someone explain/correct this problem? Thanks in advance,

More information about the Numpy-discussion mailing list
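One low-memory workaround (my own sketch, not from the thread; array sizes reduced here for illustration) is to reduce over the axis one slab at a time, so the float temporaries that std() builds internally never cover the whole 4-D array at once:

```python
import numpy as np

# Sketch of a lower-memory approach to a.std(3): reduce slab by slab,
# so the internal float temporaries (the float copy of the data, the
# squared deviations) only ever span one 3-D slab of `a`.
a = np.random.randint(100, size=(10, 10, 10, 50))

b = np.empty(a.shape[:3])
for i in range(a.shape[0]):
    # a[i] has shape (10, 10, 50); its last axis is axis 3 of `a`
    b[i] = a[i].std(axis=-1)

# b agrees with a.std(3), but peak temporary memory is cut by roughly
# the length of the first axis.
```

The large transient allocation in the original code comes from those intermediates being built in floating point over the full array; slicing bounds their size without changing the result.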
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2007-April/027332.html","timestamp":"2014-04-18T05:57:03Z","content_type":null,"content_length":"3624","record_id":"<urn:uuid:8e0b17f1-3869-4ab8-90a6-099f22a358ad>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
Using Aggregate Functions and Operators in PostgreSQL

"Linux Gazette...making Linux just a little more fun!"

This article assumes the reader can do basic SELECT, INSERT, UPDATE, and DELETE queries to and from a SQL database. If you are not sure how these queries work, please read a tutorial on them first. Specifically, if you can use a SELECT query, then you are armed with enough information to read through this document with a high level of understanding. That said, let's get on to aggregate functions!

In the beginning of this rather extensive article, I will cover how to use the five most common and basic aggregate functions in PostgreSQL: count(), min(), max(), avg(), and sum(). Then I will cover how to use several common operators that exist for your use in PostgreSQL, and finally how to combine operators with their aggregate function counterparts. Depending on your development environment, a good philosophy to practice is letting your DataBase Management System (DBMS) craft your results so that they are immediately usable in your code with little or no processing. Good examples of the reasoning behind this philosophy are exhibited when using aggregates; in this article, I will demonstrate how to use some simple operators in your queries to craft data exactly as you need it.

What is an aggregate function?

An aggregate function is a function such as count() or sum() that you can use to calculate totals. In writing expressions and in programming, you can use SQL aggregate functions to determine various statistics and values.
Aggregate functions can greatly reduce the amount of coding that you need to do in order to get information from your database. (Excerpt from the PostgreSQL 7.1 manual)

aggregate_name (expression)
aggregate_name (ALL expression)
aggregate_name (DISTINCT expression)
aggregate_name ( * )

where aggregate_name is a previously defined aggregate, and expression is any expression that does not itself contain an aggregate expression. The first form of aggregate expression invokes the aggregate across all input rows for which the given expression yields a non-NULL value. (Actually, it is up to the aggregate function whether to ignore NULLs or not --- but all the standard ones do.) The second form is the same as the first, since ALL is the default. The third form invokes the aggregate for all distinct non-NULL values of the expression found in the input rows. The last form invokes the aggregate once for each input row regardless of NULL or non-NULL values; since no particular input value is specified, it is generally only useful for the count() aggregate function.

Consider this example. You are writing a program which tracks sales of books. You have a table called the "sale" table that contains the book title, book price, and date of purchase. You want to know the total amount of money that you made by selling books in the month of March 2001. Without aggregate functions, you would have to select all the rows with a date of purchase in March 2001 and iterate through them one by one to calculate the total. Now if you only have 10 rows, this does not make a big difference (and if you only sell 10 books a month you should hope those are pretty high dollar!). But consider a bookstore that sells on average 2000 books a month. Now iterating through each row one by one does not sound so efficient, does it? With aggregate functions you can simply select the sum() of the book price column for the month of March 2001.
Your query will return one value and you will not have to iterate through the rows in your application code.

The SUM() function.

The sum() function is very useful as described in the above example. Based on our fictitious table, consider the following.

table sale (
    book_title varchar(200),
    book_price real,
    date_of_purchase datetime
)

Without aggregate functions:

SELECT * FROM sale WHERE date_of_purchase BETWEEN '03/01/2001' AND '04/01/2001';

This returns all rows which correspond to a sale in the month of March 2001.

With aggregate functions:

SELECT SUM(book_price) AS total FROM sale WHERE date_of_purchase BETWEEN '03/01/2001' AND '04/01/2001';

This returns a single row with a single column called total containing the total value of the books sold in the month of March 2001. You can also use mathematical operators within the context of the sum() function to add additional functionality. Say, for instance, you wanted to get the value of 20% of your sum of book_price, as all of your books have a 20% markup built into the price. Your aggregate would look like:

SELECT SUM(book_price) AS total, SUM(book_price * .2) AS profit FROM sale WHERE date_of_purchase BETWEEN '03/01/2001' AND '04/01/2001';

If you look on a grander scale, you will see even more uses for the sum() function, for example calculating commissions, generating detailed reports, and generating running statistical totals. When writing a report, it is much easier to have SQL do the math for you and simply display the results than to iterate through thousands or millions of records.

The count() function.

Yet another useful aggregate function is count(). This function allows you to return the number of rows that match a given criterion. Say for example you have a database table that contains news items and you want to display your current total of news items in the database without selecting them all and iterating through them one by one.
Simply do the following:

SELECT COUNT(*) AS myCount FROM news;

This will return the total number of news articles in your database.

The MAX() and MIN() functions.

These two functions will simply return the maximum or minimum value in a given column. This may be useful if you want to very quickly know the highest-priced book you sold and the lowest-priced book you sold (back to the bookstore scenario). That query would look like this.

SELECT MAX(book_price) AS highestPrice, MIN(book_price) AS lowestPrice FROM sale WHERE date_of_purchase BETWEEN '03/01/2001' AND '04/01/2001';

Again, this simply prevents you from having to select EVERYTHING from the database, iterate through each row one by one, and calculate the result by hand.

The AVG() function.

This particular aggregate is definitely very useful. Any time you would like to generate an average value for any number of fields, you can use the avg() aggregate. Without aggregates, you would once again have to iterate through all rows returned, sum up your column, take a count of the number of rows, and then do your math. In our bookstore example, say you would like to calculate the average price of the books sold during March 2001. Your query would look like this.

SELECT AVG(book_price) AS avg_price FROM sale WHERE date_of_purchase BETWEEN '03/01/2001' AND '04/01/2001';

What is an operator?

An operator is something that performs an operation or function on the values around it. For an example of this, let's look at mathematical operators. If you wanted to subtract the values of two fields in a select statement, you would use the subtraction (-) operator.

SELECT salesperson_name, revenue - cost AS commission FROM sales;

What will be returned is each salesperson's revenue minus the cost of the products they sold, which yields their commission amount.
│ salesperson_name │ commission │
│ Branden Williams │     234.43 │
│ Matt Springfield │      87.74 │

Operators can be VERY useful when you have complex calculations or need to produce exactly the results you want without having your script do any text- or math-based processing. Let's refer to our bookstore example. You are writing a program which will show you the highest-margin books (largest amount of profit per book) so that your marketing monkey can place them closer to the door of the store. Instead of doing your math on the fly while iterating through your result set, you can have the result set display the correct information for you.

table inventory (
    book_title varchar(200),
    book_cost real,
    selling_price real
)

SELECT book_title, selling_price - book_cost AS profit FROM inventory ORDER BY profit DESC;

Which will produce results similar to the following.

│ book_title                                   │ profit │
│ How To Scam Customers Into Buying Your Books │  15.01 │
│ How To Crash Windows 2000                    │  13.84 │

Now your marketing guy can very quickly see which books are the highest-margin books. Another good use for operators is when you are selecting information from one table into another. For example, you may have a temporary table that you select product data into so that it can be proofread before it is sent into some master data table. Shopping carts make great examples of this. You can take the pertinent information from your production tables and place it in a temporary table, where items can be removed, quantities increased, or discounts added before it is placed into your master order table. In an example like this, you would not want to select out your various kinds of information, perform some functions to get them just right, and then insert them back into your temporary table. You can simply do it all in one query by using operators. It also creates less of a headache when you are dealing with very dynamic data. Let the database handle as much of your dynamic data as it can.
Now I would like to go into some specific operators and their functions. To see a complete list of operators, type '\do' in your psql interface window.

The +, -, *, and / operators.

These are the basic math operators that you can use in PostgreSQL. See above for good examples on how to use them. A few additional examples are here.

• Calculating tax (SELECT subtotal * tax AS taxamount)
• Calculating unit cost (SELECT extendedcost / quantity AS unitcost)

Many more uses for math operators will be revealed in the next article in this series, which combines operators with aggregate functions.

Inequality (<, >, <=, >=) operators.

You most likely have used these in the WHERE clause of a specific SQL query. For instance:

SELECT book_title FROM inventory WHERE selling_price >= '30.00';

This query will select all books that have a selling price of $30.00 or more. You could even extend that to our profit example earlier and do the following.

SELECT book_title, selling_price - book_cost AS profit FROM inventory WHERE selling_price - book_cost >= '14.00' ORDER BY profit DESC;

Which will only produce the following results.

│ book_title                                   │ profit │
│ How To Scam Customers Into Buying Your Books │  15.01 │

This can allow you to set thresholds for various kinds of queries, which is very useful in reporting.

The || (concatenate) operator.

When doing any sort of text concatenation, this operator comes in handy. Say, for instance, you have a product category which has many different products within it. You might want to print out the product category name as well as the product item on the invoice.

SELECT category || CAST(': ' AS VARCHAR) || productname AS title FROM products;

Notice the use of the CAST() function. Concatenation requires knowledge about the types of the elements it is operating on: you must tell PostgreSQL that the string ': ' is of type VARCHAR in order for your operator to function.
Your results may look like:

│ title                                        │
│ Music CDs: Dave Matthews, Listener Supported │
│ DVDs: Airplane                               │

In the previous articles, I showed you some simple ways to use operators and aggregate functions to help speed up your applications. The true power of operators and aggregate functions comes when you combine their respective powers. You can cut down on the lines of code your application will need by simply letting your database handle that for you. This article will arm you with a plethora of information on this subject.

Our Scenario: You are hired to create a web-based shopping application. Here is your database layout for your order tables.

create table orders (
    orderid integer (autoincrement),
    customerid integer,
    subtotal real,
    tax real,
    shipping real
)

create table orderdetail (
    orderid integer,
    productid integer,
    price real,
    qty integer
)

create table taxtable (
    state varchar(2),
    rate real
)

create table products (
    productid integer,
    description varchar(100),
    price real
)

create table cart (
    sessionid varchar(30),
    productid integer,
    price real,
    qty integer
)

In this example, I will use database-driven shopping carts instead of storing the cart information in a session. However, I will need a sessionID to keep up with the changes in the database. Our cart table contains the current pre-checkout shopping cart. Orders and orderdetail contain the completed order with items. We can calculate each order's grand total by adding up the sub-parts when needed for tracking or billing. Finally, products is our product table, which contains a price and description. The point of this exercise is to pass as much of the computation back to the database so that your application layer does not have to make many trips to and from the database, and to reduce the lines of code required to complete your task. In this example, several of your items are stored in a database table so they may be dynamic.
Those items are the basis of your subtotal, tax, and shipping calculations. If you do not use operators and aggregates (and potentially subqueries), you will run the risk of making many trips around the database and putting added overhead into your application layer. I will break down the calculation of each of those items for you, and then give an example of how to put it all together in the end.

The subtotal calculation.

This is a rather simple calculation, and will only use an aggregate function and a simple operator to extract. In our case:

SELECT SUM(price*qty) AS subtotal FROM cart WHERE sessionid = '9j23iundo239new';

All we need is the sum of the results from every price * qty calculation. This shows how you can combine the power of operators and aggregates very nicely. Remember that the SUM aggregate will return the total sum from every calculation that is performed on a PER ROW basis. Don't forget your order of operations!

The tax calculation.

This one can be kind of tricky without some fancy SQL. I will be using COALESCE to determine the actual tax rate. COALESCE takes two arguments. If the result of the first argument is null, it will return the second. It is very handy in situations like this. Below is the query. Note: _subtotal_ is simply a placeholder.

SELECT _subtotal_ * COALESCE(rate, 0) AS tax FROM taxtable WHERE state = 'TX';

In the final query, I will show you how all these will add up, so try not to get confused by my nifty placeholders.

The shipping calculation.

For simplicity, we will just assume that you charge shipping based on a $3 fee per item. You could easily expand that to add some fancy calculations as well. By adding a weight field to your products table, you could easily calculate shipping based on an algorithm. In our instance, we will just count the number of items in our cart and multiply that by 3.

SELECT COUNT(*) * 3 AS shipping FROM cart WHERE sessionid = '9j23iundo239new';

Tying it all together.
Now that I have shown you how to get the results for those calculations separately, let's tie them all together into one big SQL query. This query will handle all of those calculations and then place the results into the orders table for you.

INSERT INTO orders (customerid, subtotal, tax, shipping)
VALUES (customerid,
    (SELECT SUM(price*qty) FROM cart WHERE sessionid = '9j23iundo239new'),
    (SELECT SUM(price*qty) FROM cart WHERE sessionid = '9j23iundo239new')
        * (SELECT COALESCE(rate, 0) FROM taxtable WHERE state = 'TX'),
    (SELECT COUNT(*) * 3 FROM cart WHERE sessionid = '9j23iundo239new'));

Additionally, if you had a grand total field in your orders table, you could complete this by adding up the sub-items in either a separate query or inside your INSERT query. The first of those two examples might look like this.

UPDATE orders SET grandtotal = subtotal+tax+shipping WHERE orderid = 29898;

To move the rest of the items from the cart table to the orderdetail table, the following two queries can be issued in sequence.

INSERT INTO orderdetail (orderid, productid, price, qty)
SELECT _yourorderid_, productid, price, qty FROM cart WHERE sessionid = '9j23iundo239new';

DELETE FROM cart WHERE sessionid = '9j23iundo239new';

Aggregate functions can greatly simplify and speed up your applications by allowing the SQL server to handle these kinds of calculations. In more complex applications they can be used to return customized results from multiple tables for reporting and other functions. Operators can greatly enhance the quality of the results that you return from your database. The correct use of operators and aggregate functions can not only increase the speed and accuracy of your application, but also greatly reduce your code base by removing unneeded lines of code for looping through result sets, simple calculations, and other line hogs. I hope that you enjoy reading and learning from this article as much as I enjoyed writing it!
Branden is currently a consultant for Elliptix, an e-business and security consulting firm he co-founded this year. He has over 10 years of experience in various Internet-related technology disciplines including Unix administration, network infrastructure design and deployment, and many scripting and programming languages. For the last six years, Branden has been designing, building and deploying enterprise-scale e-commerce applications. His real-world experience is backed up by a Bachelors of Business Administration in Marketing from the University of Texas, Arlington. Branden can also be reached at brw@brw.net. Copyright © 2001, Branden R Williams. Copying license http://www.linuxgazette.net/copying.html Published in Issue 70 of Linux Gazette, September 2001
{"url":"http://www.tldp.org/LDP/LGNET/issue70/williams.html","timestamp":"2014-04-18T05:36:16Z","content_type":null,"content_length":"23510","record_id":"<urn:uuid:19b932f8-62b7-4cd3-bc7d-2915d1f5fdad>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
Moore's Law in Practical Terms There are two popular formulations of Moore's Law: The most popular formulation [of Moore's Law] is the doubling of the number of transistors on integrated circuits every 18 months. At the end of the 1970s, Moore's Law became known as the limit for the number of transistors on the most complex chips. However, it is also common to cite Moore's Law to refer to the rapidly continuing advance in computing power per unit cost, because transistor count is also a rough measure of computer processing power. The number of transistors on a CPU hasn't actually been doubling every 18 months; it's been doubling every 24 months. Here's a graph of the transistor count of each major Intel x86 chip family release from 1971 to 2006: The dotted line is the predicted transistor count if you doubled the 2,300 transistors from the Intel 4004 chip every two years since 1971. That's why I prefer the second, looser definition of Moore's law: dramatic increases in computing power per unit cost. If you're a stickler for detail, there's an extensive investigation of Moore's law at Ars Technica you can refer to. But how do we correlate Moore's Law-- the inexorable upward spiral of raw transistor counts-- with performance in practical terms? Personally, I like to look at benchmarks that use "typical" PC applications, such as SysMark 2004. According to page 14 of this PDF, SysMark 2004 scores are calibrated to a reference system: a Pentium 4 2.0 GHz. The reference system scores 100. Thus, a system which scores 200 in SysMark 2004 will be twice as fast as the reference system. So, what was the first new CPU to double the performance of the SysMark 2004 reference system with a perfect 200? The Pentium 4 "Extreme Edition" 3.2 GHz scores 197 on the SysMark 2004 office benchmark in this set of Tom's Hardware benchmarks. 
Let's compare the release dates of these two CPUs:

Pentium 4 2.0 GHz       August 27th, 2001
Pentium 4EE 3.2 GHz     November 3rd, 2003

It took 26 months to double real-world performance in SysMark 2004. That tracks almost exactly with the doubling of transistor counts every 24 months. This isn't a perfect comparison, since other parts of the PC get faster at different rates. But it's certainly a good indication that CPU transistor count is a fairly reliable indicator of overall performance.
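The arithmetic behind that comparison can be checked directly (my own sketch, using only figures quoted in the post: 2,300 transistors in the 1971 Intel 4004, doubling every 24 months):

```python
# Quick check of the 24-month doubling claim against the SysMark result.
base_transistors = 2_300      # Intel 4004, released 1971
months_per_doubling = 24

def predicted_transistors(year):
    """Transistor count predicted by 24-month doubling since 1971."""
    doublings = (year - 1971) * 12 / months_per_doubling
    return base_transistors * 2 ** doublings

# 26 months at a 24-month doubling rate comes out just over 2x, which
# lines up with the SysMark 2004 scores (100 -> 197).
growth_over_26_months = 2 ** (26 / months_per_doubling)
print(round(growth_over_26_months, 2))  # ~2.12
```

So the 26-month performance doubling is within a few percent of what pure transistor-count doubling predicts.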
{"url":"http://blog.codinghorror.com/moores-law-in-practical-terms/","timestamp":"2014-04-20T05:55:21Z","content_type":null,"content_length":"10424","record_id":"<urn:uuid:5a3f230c-6047-4747-b7dc-4787640266fa>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
Project ideas

I know how to tell what the k in LL(k) is, but I don't know how to differentiate LL and LR. I don't know what "x-most derivation" means and haven't found a good explanation that I could understand.

I am assuming you do not understand terminals and non-terminals. In a context-free grammar, production rules are written as:

S -> x

Where 'S' is a non-terminal (i.e. it can be transformed into something else) and 'x' is a string of terminals and/or non-terminals. (Terminals cannot be transformed into something else, and thus can never be on the left-hand side of a production.)

Okay, so now onto LL and LR. If we have the productions:

Rule 1: S->SX
Rule 2: S->m
Rule 3: X->1

then there are multiple ways to derive the string "m11". We always start at the first production rule, so 'S'. S can be transformed (by rule 1) into SX. From there we can either transform S or X next. If we always expand the left-most non-terminal, we are using a leftmost derivation (the "LL" order). If we always expand the right-most non-terminal, we are using a rightmost derivation (the "LR" order).

Derivation of the string using LL: rule 1; rule 1; rule 2; rule 3; rule 3;
Derivation of the string using LR: rule 1; rule 3; rule 1; rule 3; rule 2;

As you can see you will always use the same number of rules, just in a different order. Hope this helps. Please let me know of any errors / things that are not explained properly.

EDIT: @chrisname I just realised that you have not made an LL or an LR. They refer to parsers; you have made a tokeniser with one look-ahead character.
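The two derivation orders above can be checked mechanically. This is my own illustrative sketch (rule numbering follows the post):

```python
# Derive "m11" from the grammar S->SX | m, X->1, expanding either the
# leftmost or the rightmost non-terminal at each step.
rules = {1: ("S", "SX"), 2: ("S", "m"), 3: ("X", "1")}

def derive(order, pick):
    """Apply the rule numbers in `order`, rewriting the non-terminal
    selected by `pick` (min = leftmost, max = rightmost)."""
    s = "S"
    for n in order:
        lhs, rhs = rules[n]
        nonterm_positions = [i for i, c in enumerate(s) if c in "SX"]
        i = pick(nonterm_positions)
        assert s[i] == lhs, "rule does not apply at the chosen position"
        s = s[:i] + rhs + s[i + 1:]
    return s

leftmost = derive([1, 1, 2, 3, 3], min)   # LL order from the post
rightmost = derive([1, 3, 1, 3, 2], max)  # LR order from the post
print(leftmost, rightmost)  # both derive "m11"
```

Both rule sequences rewrite S to "m11", confirming that the two orders use the same five rules, just applied at different positions.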
{"url":"http://www.cplusplus.com/forum/lounge/105160/","timestamp":"2014-04-16T16:13:17Z","content_type":null,"content_length":"28721","record_id":"<urn:uuid:ac10a556-5c62-4847-912e-44e935a26a61>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
Solid harmonics

From Knowino

In mathematics, solid harmonics are defined as solutions of the Laplace equation in spherical polar coordinates. There are two kinds of solid harmonic functions: the regular solid harmonics $\scriptstyle R^m_\ell(\mathbf{r})$, which vanish at the origin, and the irregular solid harmonics $\scriptstyle I^m_{\ell}(\mathbf{r})$, which have an $r^{-(\ell+1)}$ singularity at the origin. Both sets of functions play an important role in potential theory. Regular solid harmonics appear in chemistry in the form of s, p, d, etc. atomic orbitals and in physics as multipoles. Irregular harmonics appear in the expansion of scalar fields in terms of multipoles. Both kinds of solid harmonics are simply related to spherical harmonics $\scriptstyle Y^m_\ell$ (normalized to unity),

$R^m_{\ell}(\mathbf{r}) \equiv \sqrt{\frac{4\pi}{2\ell+1}}\; r^\ell Y^m_{\ell}(\theta,\varphi), \qquad I^m_{\ell}(\mathbf{r}) \equiv \sqrt{\frac{4\pi}{2\ell+1}}\; \frac{Y^m_{\ell}(\theta,\varphi)}{r^{\ell+1}}.$

Derivation, relation to spherical harmonics

The following vector operator plays a central role in this section:

$\mathbf{L} \equiv \mathbf{r} \times \mathbf{\nabla}.$

Parenthetically, we remark that in quantum mechanics $\scriptstyle -i\hbar \mathbf{L}$ is the orbital angular momentum operator, where $\scriptstyle \hbar\,$ is Planck's constant divided by 2π. In quantum mechanics the momentum operator is proportional to the gradient, $\scriptstyle \mathbf{p} = -i\hbar\mathbf{\nabla}$, so that L is proportional to r×p, the orbital angular momentum operator.
By using the relations

$L^2 \equiv \mathbf{L}\cdot\mathbf{L} = \sum_{i,j} [r_i\nabla_j r_i \nabla_j - r_i\nabla_j r_j \nabla_i]\quad \hbox{and}\quad \nabla_j r_i - r_i\nabla_j = \delta_{ji}$

one can derive that

$L^2 = r^2 \nabla^2 - (\mathbf{r}\cdot\mathbf{\nabla})^2 - \mathbf{r}\cdot\mathbf{\nabla}.$

Expression in spherical polar coordinates gives:

$\mathbf{r}\cdot\mathbf{\nabla} = r\frac{\partial}{\partial r}$

$(\mathbf{r}\cdot\mathbf{\nabla})^2 + \mathbf{r}\cdot\mathbf{\nabla} = \frac{1}{r} \frac{\partial^2}{\partial r^2} r.$

It can be shown by expression of L in spherical polar coordinates that L² does not contain a derivative with respect to r. Hence upon division of L² by r² the position of 1/r² in the resulting expression is irrelevant. After these preliminaries we find that the Laplace equation ∇² Φ = 0 can be written as

$\nabla^2\Phi(\mathbf{r}) = \left(\frac{1}{r} \frac{\partial^2}{\partial r^2}r + \frac{L^2}{r^2} \right)\Phi(\mathbf{r}) = 0, \qquad \mathbf{r} \ne \mathbf{0}.$

It is known that spherical harmonics $Y^m_\ell$ are eigenfunctions of L²:

$L^2 Y^m_{\ell} = -\ell(\ell+1) Y^m_{\ell}.$

Substitution of $\Phi(\mathbf{r}) = F(r)\, Y^m_\ell$ into the Laplace equation gives, after dividing out the spherical harmonic function, the following radial equation and its general solution,

$\frac{1}{r}\frac{\partial^2}{\partial r^2}r F(r) = \frac{\ell(\ell+1)}{r^2} F(r) \Longrightarrow F(r) = A r^\ell + B r^{-\ell-1}.$

The particular solutions of the total Laplace equation are regular solid harmonics:

$R^m_{\ell}(\mathbf{r}) \equiv \sqrt{\frac{4\pi}{2\ell+1}}\; r^\ell Y^m_{\ell}(\theta,\varphi),$

and irregular solid harmonics:

$I^m_{\ell}(\mathbf{r}) \equiv \sqrt{\frac{4\pi}{2\ell+1}}\; \frac{Y^m_{\ell}(\theta,\varphi)}{r^{\ell+1}}.$

Racah's normalization (also known as Schmidt's semi-normalization) is applied to both functions

$\int_{0}^{\pi}\sin\theta\, d\theta \int_0^{2\pi} d\varphi\; R^m_{\ell}(\mathbf{r})^*\; R^m_{\ell}(\mathbf{r}) = \frac{4\pi}{2\ell+1} r^{2\ell}$

(and analogously for the
irregular solid harmonic) instead of normalization to unity. This is convenient because in many applications the Racah normalization factor appears unchanged throughout the derivations.

Connection between regular and irregular solid harmonics

From the definitions it follows immediately that

$I^m_\ell(\mathbf{r}) = \frac{R^m_{\ell}(\mathbf{r})}{r^{2\ell+1}} .$

A more interesting relationship follows from the observation that the regular solid harmonics are homogeneous polynomials in the components x, y, and z of r. We can replace these components by the corresponding components of the gradient operator ∇. Thus, the left-hand side in the following equation is well-defined:

$R^m_\ell(\mathbf{\nabla})\; \frac{1}{r} = (-1)^\ell \frac{(2\ell)!}{2^\ell \ell!}\; I^m_\ell(\mathbf{r}), \qquad r \neq 0.$

For a proof see Biedenharn and Louck (1981), p. 312.

Addition theorems

The translation of the regular solid harmonic gives a finite expansion,

$R^m_\ell(\mathbf{r}+\mathbf{a}) = \sum_{\lambda=0}^\ell \binom{2\ell}{2\lambda}^{1/2} \sum_{\mu=-\lambda}^\lambda R^\mu_{\lambda}(\mathbf{r}) R^{m-\mu}_{\ell-\lambda}(\mathbf{a})\; \langle \lambda, \mu; \ell-\lambda, m-\mu| \ell m \rangle,$

where the Clebsch-Gordan coefficient is given by

$\langle \lambda, \mu; \ell-\lambda, m-\mu| \ell m \rangle = {\ell+m \choose \lambda+\mu}^{1/2} {\ell-m \choose \lambda-\mu}^{1/2} {2\ell \choose 2\lambda}^{-1/2}.$

The similar expansion for irregular solid harmonics gives an infinite series,

$I^m_\ell(\mathbf{r}+\mathbf{a}) = \sum_{\lambda=0}^\infty \binom{2\ell+2\lambda+1}{2\lambda}^{1/2} \sum_{\mu=-\lambda}^\lambda R^\mu_{\lambda}(\mathbf{r}) I^{m-\mu}_{\ell+\lambda}(\mathbf{a})\; \langle \lambda, \mu; \ell+\lambda, m-\mu| \ell m \rangle$

with $|r| \le |a|$.
The quantity between pointed brackets is again a Clebsch-Gordan coefficient,

$\langle \lambda, \mu; \ell+\lambda, m-\mu| \ell m \rangle = (-1)^{\lambda+\mu} {\ell+\lambda-m+\mu \choose \lambda+\mu}^{1/2} {\ell+\lambda+m-\mu \choose \lambda-\mu}^{1/2} {2\ell+2\lambda+1 \choose 2\lambda}^{-1/2}.$

Real form

By a simple linear combination of solid harmonics of ±m these functions are transformed into real functions. The real regular solid harmonics, expressed in Cartesian coordinates, are homogeneous polynomials of order $\ell$ in x, y, z. The explicit form of these polynomials is of some importance. They appear, for example, in the form of spherical atomic orbitals and real multipole moments. The explicit Cartesian expression of the real regular harmonics will now be derived.

Linear combination

We write, in agreement with the earlier definition,

$R_\ell^m(r,\theta,\varphi) = (-1)^{(m+|m|)/2}\; r^\ell \;\Theta_{\ell}^{|m|} (\cos\theta)\, e^{im\varphi}, \qquad -\ell \le m \le \ell,$

with

$\Theta_{\ell}^m (\cos\theta) \equiv \left[\frac{(\ell-m)!}{(\ell+m)!}\right]^{1/2} \,\sin^m\theta\, \frac{d^m P_\ell(\cos\theta)}{d\cos^m\theta}, \qquad m\ge 0,$

where $P_\ell(\cos\theta)$ is a Legendre polynomial of order $\ell$. The m-dependent phase is known as the Condon-Shortley phase. The following expression defines the real regular solid harmonics:

$\begin{pmatrix} C_\ell^{m} \\ S_\ell^{m} \end{pmatrix} \equiv \sqrt{2} \; r^\ell \; \Theta^{m}_\ell \begin{pmatrix} \cos m\varphi\\ \sin m\varphi \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} (-1)^m & \quad 1 \\ -(-1)^m i & \quad i \end{pmatrix} \begin{pmatrix} R_\ell^{m} \\ R_\ell^{-m} \end{pmatrix}, \qquad m > 0,$

and for m = 0:

$C_\ell^{0} \equiv R_\ell^{0} .$

Since the transformation is by a unitary matrix, the normalization of the real and the complex solid harmonics is the same.
z-dependent part

Upon writing u = cos θ, the mth derivative of the Legendre polynomial can be written as the following expansion in u,

$\frac{d^m P_\ell(u)}{du^m} = \sum_{k=0}^{\left\lfloor (\ell-m)/2\right\rfloor} \gamma^{(m)}_{\ell k}\; u^{\ell-2k-m}$

with

$\gamma^{(m)}_{\ell k} = (-1)^k 2^{-\ell} {\ell\choose k}{2\ell-2k \choose \ell} \frac{(\ell-2k)!}{(\ell-2k-m)!}.$

Since z = r cos θ, it follows that this derivative, times an appropriate power of r, is a simple polynomial in z,

$\Pi^m_\ell(z) \equiv r^{\ell-m} \frac{d^m P_\ell(u)}{du^m} = \sum_{k=0}^{\left\lfloor (\ell-m)/2\right\rfloor} \gamma^{(m)}_{\ell k}\; r^{2k}\; z^{\ell-2k-m}.$

(x,y)-dependent part

Consider next, recalling that x = r sin θ cos φ and y = r sin θ sin φ,

$r^m \sin^m\theta \cos m\varphi = \frac{1}{2} \left[ (r \sin\theta e^{i\varphi})^m + (r \sin\theta e^{-i\varphi})^m \right] = \frac{1}{2} \left[ (x+iy)^m + (x-iy)^m \right]$

and likewise

$r^m \sin^m\theta \sin m\varphi = \frac{1}{2i} \left[ (r \sin\theta e^{i\varphi})^m - (r \sin\theta e^{-i\varphi})^m \right] = \frac{1}{2i} \left[ (x+iy)^m - (x-iy)^m \right].$

Define

$A_m(x,y) \equiv \frac{1}{2} \left[ (x+iy)^m + (x-iy)^m \right] = \sum_{p=0}^m {m\choose p} x^p y^{m-p} \cos\left( (m-p) \frac{\pi}{2} \right)$

and

$B_m(x,y) \equiv \frac{1}{2i} \left[ (x+iy)^m - (x-iy)^m \right] = \sum_{p=0}^m {m\choose p} x^p y^{m-p} \sin\left( (m-p) \frac{\pi}{2}\right).$

In total

$C^m_\ell(x,y,z) = \left[\frac{(2-\delta_{m0}) (\ell-m)!}{(\ell+m)!}\right]^{1/2} \Pi^m_{\ell}(z)\; A_m(x,y), \qquad m=0,1,\ldots,\ell,$

$S^m_\ell(x,y,z) = \left[\frac{2 (\ell-m)!}{(\ell+m)!}\right]^{1/2} \Pi^m_{\ell}(z)\; B_m(x,y), \qquad m=1,2,\ldots,\ell.$

List of lowest functions

We list explicitly the lowest functions up to and including $\ell = 5$.
Here

$\bar{\Pi}^m_\ell(z) \equiv \left[\frac{(2-\delta_{m0}) (\ell-m)!}{(\ell+m)!}\right]^{1/2} \Pi^m_{\ell}(z) .$

$\begin{matrix}
\bar{\Pi}^0_0 = & 1 & \bar{\Pi}^1_3 = & \frac{1}{4}\sqrt{6}(5z^2-r^2) & \bar{\Pi}^4_4 = & \frac{1}{8}\sqrt{35} \\
\bar{\Pi}^0_1 = & z & \bar{\Pi}^2_3 = & \frac{1}{2}\sqrt{15}\; z & \bar{\Pi}^0_5 = & \frac{1}{8}z(63z^4-70z^2r^2+15r^4) \\
\bar{\Pi}^1_1 = & 1 & \bar{\Pi}^3_3 = & \frac{1}{4}\sqrt{10} & \bar{\Pi}^1_5 = & \frac{1}{8}\sqrt{15}(21z^4-14z^2r^2+r^4) \\
\bar{\Pi}^0_2 = & \frac{1}{2}(3z^2-r^2) & \bar{\Pi}^0_4 = & \frac{1}{8}(35 z^4-30 r^2 z^2 +3r^4) & \bar{\Pi}^2_5 = & \frac{1}{4}\sqrt{105}(3z^2-r^2)z \\
\bar{\Pi}^1_2 = & \sqrt{3}z & \bar{\Pi}^1_4 = & \frac{\sqrt{10}}{4} z(7z^2-3r^2) & \bar{\Pi}^3_5 = & \frac{1}{16}\sqrt{70}(9z^2-r^2) \\
\bar{\Pi}^2_2 = & \frac{1}{2}\sqrt{3} & \bar{\Pi}^2_4 = & \frac{1}{4}\sqrt{5}(7z^2-r^2) & \bar{\Pi}^4_5 = & \frac{3}{8}\sqrt{35}\, z \\
\bar{\Pi}^0_3 = & \frac{1}{2} z(5z^2-3r^2) & \bar{\Pi}^3_4 = & \frac{1}{4}\sqrt{70}\;z & \bar{\Pi}^5_5 = & \frac{3}{16}\sqrt{14} \\
\end{matrix}$

The lowest functions $A_m(x,y)$ and $B_m(x,y)$ are:

m | $A_m$ | $B_m$
0 | $1$ | $0$
1 | $x$ | $y$
2 | $x^2-y^2$ | $2xy$
3 | $x^3-3xy^2$ | $3x^2y-y^3$
4 | $x^4-6x^2y^2+y^4$ | $4x^3y-4xy^3$
5 | $x^5-10x^3y^2+5xy^4$ | $5x^4y-10x^2y^3+y^5$

Examples

Thus, for example, the angular part of one of the nine normalized spherical g atomic orbitals is:

$C^2_4(x,y,z) = \sqrt{{\textstyle\frac{9}{4\pi}}}\,\bar{\Pi}^2_4\, A_2 = \sqrt{{\textstyle\frac{9}{4\pi}}} \sqrt{\textstyle{\frac{5}{16}}}\, (7z^2-r^2)(x^2-y^2).$

One of the 7 components of a real multipole of order 3 (octupole) of a system of N charges $q_i$ is

$S^1_3(x,y,z) = \bar{\Pi}^1_3\, B_1 = \frac{1}{4}\sqrt{6}\sum_{i=1}^N q_i (5z_i^2-r_i^2)\, y_i .$

Spherical harmonics in Cartesian form

The following expresses normalized spherical harmonics in Cartesian coordinates (Condon-Shortley phase):

$r^\ell\, \begin{pmatrix} Y_\ell^{m} \\ Y_\ell^{-m} \end{pmatrix} =
\left[\frac{2\ell+1}{4\pi}\right]^{1/2} \bar{\Pi}^m_\ell(z) \begin{pmatrix} (-1)^m (A_m + i B_m)/\sqrt{2} \\ \qquad (A_m - i B_m)/\sqrt{2} \\ \end{pmatrix}, \qquad m > 0,$

and for m = 0:

$r^\ell\, Y_\ell^{0} \equiv \sqrt{\frac{2\ell+1}{4\pi}}\, \bar{\Pi}^0_\ell(z) .$

Here

$A_m(x,y) = \sum_{p=0}^m {m \choose p} x^p y^{m-p} \cos\left( (m-p) \frac{\pi}{2}\right),$

$B_m(x,y) = \sum_{p=0}^m {m\choose p} x^p y^{m-p} \sin\left( (m-p) \frac{\pi}{2} \right),$

and for m > 0:

$\bar{\Pi}^m_\ell(z) = \left[\frac{(\ell-m)!}{(\ell+m)!}\right]^{1/2} \sum_{k=0}^{\left\lfloor (\ell-m)/2\right\rfloor} (-1)^k 2^{-\ell} {\ell\choose k}{2\ell-2k \choose \ell} \frac{(\ell-2k)!}{(\ell-2k-m)!} \; r^{2k}\; z^{\ell-2k-m}.$

For m = 0:

$\bar{\Pi}^0_\ell(z) = \sum_{k=0}^{\left\lfloor \ell/2\right\rfloor} (-1)^k 2^{-\ell} {\ell \choose k}{2\ell-2k \choose \ell} \; r^{2k}\; z^{\ell-2k}.$

Examples

Using the expressions for $\bar{\Pi}^m_\ell(z)$, $A_m(x,y)$, and $B_m(x,y)$ listed explicitly above, we obtain:

$Y^1_3 = - \frac{1}{r^3} \left[{\textstyle \frac{7}{4\pi}\cdot \frac{3}{16} }\right]^{1/2} (5z^2-r^2)(x+iy) = - \left[{\textstyle \frac{7}{4\pi}\cdot \frac{3}{16}}\right]^{1/2} (5\cos^2\theta-1)(\sin\theta\, e^{i\varphi}),$

$Y^{-2}_4 = \frac{1}{r^4} \left[{\textstyle \frac{9}{4\pi}\cdot\frac{5}{32}}\right]^{1/2}(7z^2-r^2)(x-iy)^2 = \left[{\textstyle \frac{9}{4\pi}\cdot\frac{5}{32}}\right]^{1/2}(7\cos^2\theta-1)(\sin^2\theta\, e^{-2i\varphi}).$

References

Most books on angular momenta discuss solid harmonics. See, for instance,

• D. M. Brink and G. R. Satchler, Angular Momentum, 3rd edition, Clarendon, Oxford (1993)
• L. C. Biedenharn and J. D. Louck, Angular Momentum in Quantum Physics, volume 8 of Encyclopedia of Mathematics, Addison-Wesley, Reading (1981)

The addition theorems for solid harmonics have been proved in different manners by many different workers. See, for two different proofs, for example:

• R. J. A. Tough and A. J. Stone, J. Phys. A: Math. Gen. Vol. 10, p. 1261 (1977)
• M. J. Caola, J. Phys. A: Math. Gen. Vol. 11, p. L23 (1978)
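As a sanity check on the expressions above (an editorial addition, not part of the article), the $\gamma^{(m)}_{\ell k}$ expansion and the $(x \pm iy)^m$ identities can be evaluated numerically and compared against entries of the table of lowest functions. The function names are mine.

```python
import math

def gamma(l, k, m):
    """Expansion coefficient gamma^{(m)}_{l k} of d^m P_l / du^m."""
    return ((-1) ** k * 2.0 ** (-l) * math.comb(l, k)
            * math.comb(2 * l - 2 * k, l)
            * math.factorial(l - 2 * k) / math.factorial(l - 2 * k - m))

def Pi_bar(l, m, x, y, z):
    """Normalized z-part: [(2 - delta_{m0}) (l-m)!/(l+m)!]^{1/2} Pi^m_l(z)."""
    r2 = x * x + y * y + z * z
    pi = sum(gamma(l, k, m) * r2 ** k * z ** (l - 2 * k - m)
             for k in range((l - m) // 2 + 1))
    norm = math.sqrt((2 - (m == 0)) * math.factorial(l - m) / math.factorial(l + m))
    return norm * pi

def A(m, x, y):
    return ((x + 1j * y) ** m).real   # = (1/2)[(x+iy)^m + (x-iy)^m]

def B(m, x, y):
    return ((x + 1j * y) ** m).imag   # = (1/2i)[(x+iy)^m - (x-iy)^m]

x, y, z = 0.3, -1.2, 0.7
r2 = x * x + y * y + z * z

# table entry: Pi-bar^0_4 = (35 z^4 - 30 r^2 z^2 + 3 r^4)/8
print(Pi_bar(4, 0, x, y, z), (35 * z ** 4 - 30 * r2 * z ** 2 + 3 * r2 ** 2) / 8)
# table entry: Pi-bar^1_3 = (1/4) sqrt(6) (5 z^2 - r^2)
print(Pi_bar(3, 1, x, y, z), math.sqrt(6) / 4 * (5 * z * z - r2))
# A_2 = x^2 - y^2 and B_2 = 2 x y
print(A(2, x, y), x * x - y * y, B(2, x, y), 2 * x * y)
```

Each pair of printed values agrees to machine precision, which cross-checks the $\gamma$ coefficients, the normalization factor, and the binomial sums for $A_m$ and $B_m$.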
Action of k* on a variety induces grading?

Let $V$ be a $\Bbbk$-variety such that $\Bbbk^\times$ (as an algebraic group) acts algebraically on $V$. Given any $f\in\Bbbk[V]$, let us call $f$ homogeneous of degree $d$ if for all $v\in V$ and all $\lambda\in\Bbbk^\times$, we have $f(\lambda.v)=\lambda^d f(v)$. My question is: Does this define a grading on $\Bbbk[V]$?

I was convinced that it is true, but I am running into difficulties. Let us first assume $\Bbbk=\mathbb{C}$; the ground field should not be an obstruction. The linear span of $\Bbbk^\times f$ decomposes since $\Bbbk^\times$ is reductive, but I don't see how to turn this into a grading on all of $\Bbbk[V]$. If it is true, I would really like to see a proof - it should use as little machinery as possible.

ag.algebraic-geometry rt.representation-theory geometric-invariant-theory

Comments:

• If $k$ is algebraically closed then the action of $k^*$ identifies with an action of the torus $G_{m,k}$. This is a diagonalisable group scheme and therefore any action of $G_{m,k}$ on a $k$-algebra corresponds to a $\mathbf{Z}$-grading (it does not have to be finite dimensional over $k$). See for instance SGA 3, I, 4.7.3. – Damian Rössler Aug 15 '12 at 11:00

• @Jesko: Please see my final comment to your earlier question related to this matter; Damian's SGA3 reference is precisely Ben's computation, carried out over any ring. Note also that if you already believed that "the linear span of $k^{\times}f$ decomposes since $k^{\times}$ is reductive" (as you say above) then you are done, since it would imply that every element lies in a finite-dimensional $k^{\times}$-stable subspace and so the span of any two such would be similarly exhausted in this way (hence graded, etc.). So that case done rigorously contains all of the difficulties. – user22479 Aug 15 '12 at 14:19

• If you imagine that $K^*$ is like a circle, it will have $\pi_1=\mathbf{Z}$. Then you have $\mathbf{Z}$ "act on" $V$. This "action" could be grading.
I'm not sure if this is nonsense from a coincidence or whether there is something to this. I am not even sure if this can be made precise. – Spice the Bird Aug 16 '12 at 8:01

Answer (accepted):

Turning the action map of varieties into a map of rings, we get a ring map $\phi$ from $k[V]$ to $k[V][t,t^{-1}]$, the coordinate ring with an extra invertible variable (the coordinate on $k^*$) adjoined. Now, for any function $f$ we have $\phi(f)=\sum_{i\in \mathbb{Z}}f_it^i$ for some $f_i$'s, almost all of which are 0. Note that $f=\sum f_i$, which we obtain by restricting the function to $t=1$. Using associativity, applying $\phi$ again to the $f_i$'s is the same as applying pull-back by the multiplication map to $t$. Thus, as functions on $V\times k^*\times k^*$ (letting $t,u$ be the two coordinates)

$$\sum_{i\in \mathbb{Z}}\phi(f_i)u^i=\sum_{i\in \mathbb{Z}} f_i t^i u^i,$$

since the pull-back of the coordinate by multiplication is just the product of the coordinates. Thus, $\phi(f_i)=f_it^i$.

We can define the grading by letting $f$ be homogeneous of degree $i$ if $\phi(f)=ft^i$. We have already seen that every element can be written uniquely as a sum of such elements (the $f_i$'s), and this is multiplicative since $\phi$ is a ring homomorphism. Alternatively, we can note that we have proven that the span of the $f_i$'s is a finite-dimensional invariant subspace containing $f$, so we can apply your argument. In general, essentially the same argument shows that the action of any affine algebraic group on the coordinate ring of any affine variety by pull-back is a locally finite action: any function is contained in a finite-dimensional invariant subspace.

• Wonderful! This is just perfect. – Jesko Hüttenhain Aug 15 '12 at 11:47
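For the simplest case $V = \mathbb{A}^n$ with the scaling action $\lambda \cdot v = \lambda v$, the grading produced by the accepted answer is just the decomposition of a polynomial into homogeneous components: $\phi(f)(v, t) = f(tv)$ and the coefficient of $t^i$ is the degree-$i$ part. A small computational illustration of this (an editorial addition; the helper names are mine):

```python
from collections import defaultdict

def homogeneous_parts(poly):
    """Split a polynomial {exponent tuple: coeff} by total degree,
    i.e. by the power of t in f(t*x1, ..., t*xn)."""
    parts = defaultdict(dict)
    for exp, c in poly.items():
        parts[sum(exp)][exp] = c
    return dict(parts)

def evaluate(poly, point):
    total = 0
    for exp, c in poly.items():
        term = c
        for e, v in zip(exp, point):
            term *= v ** e
        total += term
    return total

# f(x, y) = x^2 + 3 x y + y on V = A^2
f = {(2, 0): 1, (1, 1): 3, (0, 1): 1}
parts = homogeneous_parts(f)
print(parts)  # {2: {(2, 0): 1, (1, 1): 3}, 1: {(0, 1): 1}}

# each part f_i satisfies f_i(lambda . v) = lambda^i f_i(v)
lam, v = 2.0, (0.7, -1.3)
for i, fi in parts.items():
    print(i, evaluate(fi, tuple(lam * c for c in v)), lam ** i * evaluate(fi, v))
```

The loop at the end checks numerically that each component really is homogeneous of its degree, and $f = \sum_i f_i$ by construction.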
overlapping instances and constraints
Bulat Ziganshin bulat.ziganshin at gmail.com
Tue Feb 28 05:29:34 EST 2006

Hello John,

Tuesday, February 28, 2006, 4:23:24 AM, you wrote:

>> i had plans to propose the same and even more:
>> instance C2 a b | a/=b

JM> I was thinking it would be all kinds of useful if we had two predefined
JM> classes
JM>   class Eq a b
JM>   class NEq a b
JM> where Eq has instances exactly when its two types are equal and NEq has
JM> instances exactly when its two types are not equal.

JM> Eq should be straightforward to implement, declaring any type
JM> automatically creates its instances (sort of an auto-deriving). NEq
JM> might be more problematic as that would involve a quadratic number of
JM> instances so its implementation might need to be more special. but
JM> perhaps we can do with just 'Eq'.

with an 'Eq' class we can't do anything that is impossible without it. the whole devil is to make the general instance NON-OVERLAPPING with the specific one by EXPLICITLY specifying EXCLUSIONS with these "/=" rules:

    class Convert a b where
        cvt :: a -> b

    instance Convert a a where        -- do we need Eq here? :)
        cvt = id

    instance (NEq a b) => Convert a b where
        cvt = read . show

... yes, i recalled! my proposal was to allow "!" in instance headers:

    instance C Int where ...
    instance (!Int a, Integral a) => C a where ...
    instance (!Integral a, Enum a) => C a where ...

adding your Eq class, that is all we can do along this way. interesting, what can the language theorists say about decidability, soundness, and so on of this trick? :)

Best regards,
Bulat                            mailto:Bulat.Ziganshin at gmail.com

More information about the Haskell-prime mailing list
MathGroup Archive: September 2008

Re: Re: Apparent error integrating product of DiracDelta's

• To: mathgroup at smc.vnet.net
• Subject: [mg92030] Re: [mg91997] Re: Apparent error integrating product of DiracDelta's
• From: Daniel Lichtblau <danl at wolfram.com>
• Date: Thu, 18 Sep 2008 06:10:08 -0400 (EDT)
• References: <gag2lg$39k$1@smc.vnet.net> <gal3ht$dv1$1@smc.vnet.net> <200809162324.TAA24634@smc.vnet.net>

magma wrote:
> On Sep 15, 9:40 am, "Nasser Abbasi" <n... at 12000.org> wrote:
>> "Michael Mandelberg" <mmandelb... at comcast.net> wrote in message
>> news:gag2lg$39k$1 at smc.vnet.net...
>>> How do I get:
>>> Integrate[DiracDelta[z- x] DiracDelta[z- y], {z-Infinity, Infinity}]
>>> to give DiracDelta[x-y] as the result? Currently it gives 0. I have
>>> all three variables assumed to be Reals. I am using 6.0.0.
>>> Thanks,
>>> Michael Mandelberg

>> I think you have a syntax error in the limit part. I assume you mean to write
>> {z, -Infinity, Infinity}
>> Given that, I think zero is the correct answer. When you multiply 2 deltas
>> at different positions, you get zero. Integral of zero is zero.
>> Nasser

> No Nasser, the correct value of the integral should be DiracDelta[x-y],
> as Michael said.
> This value is indeed 0 if x != y but it is not 0 if x==y.

It is not 0 at x==y, but neither is it DiracDelta[x-y]. The value there is undefined.

> Mathematica correctly calculates:
> Integrate[f[z - x] DiracDelta[z - y], {z, -Infinity, Infinity},
>  Assumptions -> y \[Element] Reals]
> as
> f[-x + y]

This is making a tacit assumption that f is a "nice" function. Nice, in this context, means it is an element of Schwartz space S: C^infinity and vanishing faster than any polynomial at +-infinity. DiracDelta, suffice it to say, is not an element of S (it's not even a function).
> However it fails to recognize that if f[z-x] is replaced by
> DiracDelta[z-x], the result should be
> DiracDelta[-x + y]
> or the equivalent
> DiracDelta[x - y]

This is not a failure but rather an active intervention.

> In the help file, under "possible issues" it is mentioned that
> "Products of distributions with coinciding singular support cannot be
> defined:"

This is a statement of mathematics and not specific to Mathematica.

> So perhaps at the moment the only way to do the integral is:
> Integrate[f[z - x] DiracDelta[z - y], {z, -Infinity, Infinity},
>  Assumptions -> y \[Element] Reals] /. f -> DiracDelta
> hth

Here is a general rule of thumb. If you are working with DiracDelta function(al)s, instead approximate them as ordinary functions. If different methods of approximation will lead to different results, then what you have cannot be defined. One can use this notion to see that, for example, DiracDelta[x]^2 is not defined.

Daniel Lichtblau
Wolfram Research
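Lichtblau's rule of thumb can be made quantitative. Replace each DiracDelta by a normalized Gaussian of width eps: for x != y the overlap integral tends to 0 as eps shrinks (matching Integrate's answer there), while at x == y it equals 1/(2 eps sqrt(pi)) and diverges, so no eps-independent value exists — consistent with the product of deltas being undefined on coinciding singular support. A rough numerical sketch (editorial addition; the discretization choices are mine):

```python
import math

def delta_eps(u, eps):
    """Gaussian approximation to DiracDelta: unit integral, width eps."""
    return math.exp(-u * u / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def overlap(x, y, eps, h=1e-3, L=1.0):
    """Midpoint-rule approximation of Integral delta_eps(z-x) delta_eps(z-y) dz."""
    n = int(2 * L / h)
    return sum(delta_eps(z - x, eps) * delta_eps(z - y, eps) * h
               for z in (-L + (i + 0.5) * h for i in range(n)))

# x != y : the overlap goes to 0 as eps -> 0
print(overlap(0.3, -0.3, 0.05))
# x == y : the overlap is 1/(2 eps sqrt(pi)), doubling when eps is halved
print(overlap(0.0, 0.0, 0.05), 1 / (2 * 0.05 * math.sqrt(math.pi)))
print(overlap(0.0, 0.0, 0.025), 1 / (2 * 0.025 * math.sqrt(math.pi)))
```

Since the coincident-point value depends on eps without bound, "DiracDelta[x-y]" cannot be the eps-independent answer, which is exactly the point of the reply.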
Dickinson, TX Algebra Tutor

Find a Dickinson, TX Algebra Tutor

...I am a retired state-certified teacher in Texas in both composite high school science and mathematics. I offer a no-fail guarantee (contact me via WyzAnt for details). I am available at any time of the day; I try to be as flexible as possible. I try as much as possible to work in the comfort of your own home at a schedule convenient to you.
35 Subjects: including algebra 2, algebra 1, chemistry, physics

...Of course, good writing takes time and much effort; however, since it is so fresh on my mind, I am in a great position to help students now. In December of 2011, I earned a Master's degree in Theology and a certification in Spiritual Direction. My course work included Old and New Testament, Chri...
13 Subjects: including algebra 1, English, chemistry, grammar

...Get the help you need. I can explain the material in ways that make sense. I hold a B.E.E. from the University of Minnesota, a 5-year, 167-semester-hour curriculum. I have been active in engineering for 50 years.
10 Subjects: including algebra 2, calculus, physics, trigonometry

...I provide fundamental skill training and practice that will ensure that students grasp the concepts, not just memorize the material. I also perform assessments before and after tutoring to measure the progress of the student. I look forward to hearing from you and helping you reach your educational goals! I am a biochemist who uses genetic mouse models daily in my work.
18 Subjects: including algebra 1, algebra 2, chemistry, reading

...I obtained a Biology degree from UT-Austin and then an Epidemiology degree from UT School of Public Health. I will be attending a health professions school in Houston in the Fall. During college, I tutored elementary school students.
17 Subjects: including algebra 2, chemistry, algebra 1, reading
Summary: problem exactly in one pass; instead, solve it approximately, then iterate. Multigrid methods, perhaps the most important development in numerical computation in the past twenty years, are based on a recursive application of this idea.

Even direct algorithms have been affected by the new manner of computing. Thanks to the work of Skeel and others, it has been noticed that the expense of making a direct method stable---say, of pivoting in Gaussian elimination---may in certain contexts be cost-ineffective. Instead, skip that step---solve the problem directly but unstably, then do one or two steps of iterative refinement. "Exact" Gaussian elimination becomes just another preconditioner!

Other problems besides Ax = b have undergone analogous changes, and the famous example is linear programming. Linear programming problems are mathematically finite, and for decades, people solved them by a finite algorithm: the simplex method. Then Karmarkar announced in 1984 that iterative, infinite algorithms are sometimes better. The result has been controversy, intellectual excitement, and a perceptible shift of the entire field of linear programming away from the rather anomalous position it has traditionally occupied towards the mainstream of numerical computation.

I believe that the existence of finite algorithms for certain problems, together with other historical forces, has distracted us for decades from a balanced view of numerical analysis. Rounding errors and instability are important, and numerical analysts will always be the experts in these subjects and at pains to ensure that the unwary are not tripped up by them. But our central mission is to compute quantities that are typically uncomputable,
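The pivoting remark can be demonstrated in a few lines: factor A without row exchanges (unstable when a tiny pivot appears), then recover accuracy with one step of iterative refinement that reuses the same cheap factorization. This is an editorial sketch of the idea, not code from the text; the 2x2 example and the function names are mine.

```python
def lu_nopivot(A):
    """LU factorization with no row exchanges (deliberately unstable)."""
    n = len(A)
    LU = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            LU[i][k] /= LU[k][k]
            for j in range(k + 1, n):
                LU[i][j] -= LU[i][k] * LU[k][j]
    return LU

def lu_solve(LU, b):
    n = len(b)
    y = list(b)
    for i in range(n):            # forward substitution (unit lower triangle)
        for j in range(i):
            y[i] -= LU[i][j] * y[j]
    x = y
    for i in reversed(range(n)):  # back substitution
        for j in range(i + 1, n):
            x[i] -= LU[i][j] * x[j]
        x[i] /= LU[i][i]
    return x

def residual(A, x, b):
    return [bi - sum(a * xj for a, xj in zip(row, x)) for row, bi in zip(A, b)]

# Well-conditioned system whose tiny (1,1) entry wrecks unpivoted elimination;
# the exact solution is (2, 1) up to representation error.
A = [[1e-13, 1.0],
     [1.0, 1.0]]
b = [1.0 + 2e-13, 3.0]

LU = lu_nopivot(A)
x0 = lu_solve(LU, b)
# one step of iterative refinement, reusing the same unstable factorization
dx = lu_solve(LU, residual(A, x0, b))
x1 = [xi + di for xi, di in zip(x0, dx)]
print(x0)  # typically visibly perturbed in the first component
print(x1)  # close to (2, 1)
```

The refinement step costs only two triangular solves and a residual, yet it repairs the damage done by the huge multiplier — the "unstable solve as preconditioner" point made above.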
Generalization(s) of Subadditive Ergodic Theorems

I am interested in dynamical gadgets which can be described by sampling along the orbits of points in some ergodic system $(\Omega,\mu,T)$. When $\mu$ is a probability measure, the theory of such objects is quite well understood, so I would like to study what happens when one allows the measure $\mu$ to be a properly $\sigma$-finite measure (i.e. infinite and $\sigma$-finite). In this setting, a $\mu$-preserving transformation $T:\Omega \to \Omega$ is called ergodic if $T^{-1}E=E \Longrightarrow \mu(E) = 0$ or $\mu(\Omega - E) = 0$. In this setting, some facts from finite ergodic theory carry over - for example, one still has that any $T$-invariant measurable function $\Omega \to \mathbb{R}$ must be $\mu$-almost surely constant. However, some of the nicer results, such as the Birkhoff ergodic theorem, fail to be true. In the probability measure setting, we have the following very pleasant subadditive ergodic theorem, due to Kingman:

Theorem (Kingman): Suppose $(\Omega,\mu,T)$ is ergodic, and $f_n:\Omega \to \mathbb{R}$ are measurable, obey the subadditivity condition $f_{n+m}(x) \leq f_n(x)+f_m(T^n(x))$, and satisfy $\|f_n\|_{\infty} \leq C\cdot n$. Then the limit $\displaystyle\lim_{n \to \infty}\frac{1}{n}\int_{\Omega} f_n(x)\, d\mu(x)$ exists and is equal to $\lim_{n \to \infty}\frac{1}{n} f_n(x)$ for $\mu$-almost every $x \in \Omega$.

My question is the following: does Kingman's theorem carry over to the infinite measure setting? If not, are there any weakened generalizations that one can obtain if one makes additional "niceness" assumptions about $\mu$ and/or $T$?
ds.dynamical-systems ergodic-theory

• Birkhoff's theorem is a corollary of the subadditive ergodic theorem, so you can't hope for a better subadditive theorem than an ergodic theorem. – Anthony Quas Apr 22 '12 at 7:36

Answers:

I have not ever tried to use it, but there is some infinite measure generalization of the subadditive ergodic theorem, in the spirit of the ratio ergodic theorem, in this paper:

Akcoglu, M. A.; Sucheston, L. A ratio ergodic theorem for superadditive processes. Z. Wahrsch. Verw. Gebiete 44 (1978), no. 4, 269–278. 28D05 (60F15)

Here is the math review:

A Markov operator is a positive linear operator $T$ on $L_1(X,F,\mu)$ such that $T^*1=1$. A sequence of $L_1^+$ functions $f_0, f_1, f_2, \ldots$ is superadditive if $s_{k+n} \geq s_k + T^k s_n$, where $s_n = f_0 + f_1 + \cdots + f_{n-1}$ and $s_0 = 0$. An exact dominant of such a sequence is an $L_1^+$ function $\delta$ such that $\sum_{i=0}^{n-1} T^i \delta \geq s_n$ and $\int \delta \, d\mu = \lim_n \frac{1}{n}\int s_n \, d\mu$. The authors show that a Markov operator always has an exact dominant, by generalizing an earlier idea of J. F. C. Kingman [J. Roy. Statist. Soc. Ser. B 30 (1968), 499--510; MR0254907 (40 #8114)]. The authors then use their result to prove a generalization of Kingman's ergodic theorem and the R. V. Chacon and D. S. Ornstein theorem [Illinois J. Math. 4 (1960), 153--160; MR0110954 (22 #1822)].

---

Maybe you can read: K. Schurger, Almost subadditive extensions of Kingman's ergodic theorem, Ann. Probab. 1991. But I am not sure why you are interested in it.
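In the probability-measure setting that the question starts from, the classic application of Kingman's theorem is the existence of the top Lyapunov exponent: for i.i.d. random matrices, $f_n = \log\|A_{n-1}\cdots A_0\|$ is subadditive, so $f_n/n$ converges almost surely. A quick simulation of that convergence with a fixed seed (an editorial illustration only — it says nothing about the $\sigma$-finite case asked about; the matrix distribution is arbitrary):

```python
import math
import random

def lyapunov_estimate(n, seed=0):
    """(1/n) log || A_{n-1} ... A_1 A_0 v || for random 2x2 matrices with
    uniform(-1, 1) entries, renormalizing v each step to avoid overflow."""
    rng = random.Random(seed)
    v = [1.0, 0.0]
    log_norm = 0.0
    for _ in range(n):
        a, b, c, d = (rng.uniform(-1, 1) for _ in range(4))
        v = [a * v[0] + b * v[1], c * v[0] + d * v[1]]
        s = math.hypot(v[0], v[1])
        log_norm += math.log(s)
        v = [v[0] / s, v[1] / s]
    return log_norm / n

# successive estimates stabilize, as Kingman's theorem predicts
print(lyapunov_estimate(2000), lyapunov_estimate(4000))
```

Tracking $\|A_{n-1}\cdots A_0 v\|$ for a single vector (rather than the full operator norm) gives the same top exponent almost surely, and renormalizing at every step keeps the accumulation in floating-point range.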
dagger-compact category

A $\dagger$-compact category is a category which is a symmetric monoidal $\dagger$-category and a compact closed category in a compatible way. So, notably, it is a monoidal category in which every object has a dual and every morphism has a dagger.

Definition

A category $C$ that is equipped with the structure of a symmetric monoidal †-category and is compact closed is $\dagger$-compact if the dagger-operation takes units of dual objects to counits, in that for every object $A$ of $C$ we have

$\array{ && A \otimes A^* \\ & {}^{\epsilon_A^\dagger}\nearrow \\ I && \downarrow^{\mathrlap{\sigma_{A \times A^*}}} \\ & {}_{\eta_A}\searrow \\ && A^* \otimes A } \,.$

Examples

• The category of Hilbert spaces (over the complex numbers) with finite dimension is a standard example of a $\dagger$-compact category. This example is complete for equations in the language of $\dagger$-compact categories; see Selinger 2012.

• For $C$ a category with finite limits, the category $Span_1(C)$ whose morphisms are spans in $C$ is $\dagger$-compact. The $\dagger$ operation is that of relabeling the legs of a span as source and target. The tensor product is defined using the cartesian product in $C$. Every object $X$ is dual to itself, with the unit and counit given by the span $X \stackrel{Id}{\leftarrow} X \stackrel{Id \times Id}{\to} X \times X$. See

  • John Baez, Spans in quantum theory (web, pdf, blog)

• The finite parts of quantum mechanics and quantum computation are naturally formulated as the theory of $\dagger$-compact categories. For more on this see at finite quantum mechanics in terms of †-compact categories.
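The Hilbert-space example can be probed concretely: for $A = \mathbb{C}^n$ regarded as self-dual, the unit ("cup") $\eta : I \to A \otimes A^*$ is the vector $\sum_i e_i \otimes e_i$, the counit ("cap") is its dagger, and the snake identity $(\epsilon \otimes 1)\circ(1 \otimes \eta) = 1_A$ holds on the nose. A small numerical sketch with matrices as nested lists (this encoding and the helper names are editorial, not from the article):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    """Kronecker (tensor) product of matrices."""
    m, n, p, q = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(n * q)] for i in range(m * p)]

def dagger(A):
    """Conjugate transpose."""
    return [[A[j][i].conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

n = 3
# unit (cup): C -> A (x) A*, the column vector sum_i e_i (x) e_i
eta = [[1.0 if i % (n + 1) == 0 else 0.0] for i in range(n * n)]
# counit (cap) as the dagger of the unit: A (x) A* -> C
eps = dagger(eta)

# snake identity: (eps (x) 1) . (1 (x) eta) = identity on A
snake = matmul(kron(eps, identity(n)), kron(identity(n), eta))
print(snake == identity(n))  # True
```

The same helpers verify that the dagger is an involutive contravariant operation, $(FG)^\dagger = G^\dagger F^\dagger$, which is the $\dagger$-category half of the structure.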
Relation to self-duality

If each object $X$ of a compact closed category is equipped with a self-duality structure $X \simeq X^\ast$, then sending morphisms to their dual morphisms, but with these identifications pre- and post-composed,

$(-)^\dagger \;\colon\; (X \stackrel{f}{\longrightarrow} Y) \mapsto (Y \stackrel{\simeq}{\to} Y^\ast \stackrel{f^\ast}{\longrightarrow} X^\ast \stackrel{\simeq}{\to} X)$

constitutes a dagger-compact category structure. See for instance (Selinger, remark 4.5).

Applied for instance to the category of finite-dimensional inner product spaces, this dagger-operation sends matrices to their transpose matrices.

References

The concept was introduced in

• Samson Abramsky, Bob Coecke, A categorical semantics of quantum protocols, in Proceedings of the 19th IEEE conference on Logic in Computer Science (LiCS'04), IEEE Computer Science Press, 2004 (an expanded version appeared under the name "strongly compact")

and used for finite quantum mechanics in terms of dagger-compact categories. The topic was taken up in

• Peter Selinger, Dagger compact closed categories and completely positive maps, in Proceedings of the 3rd International Workshop on Quantum Programming Languages (QPL 2005), ENTCS 170 (2007), 139–163. (web, pdf)

where the alternative terminology "dagger-compact" was proposed, and used for the abstract characterization of quantum operations (completely positive maps on Bloch regions of density matrices).

The examples induced from self-duality-structure are discussed abstractly in

• Peter Selinger, Autonomous categories in which $A \simeq A^\ast$, talk at QPL 2010 (pdf)

That finite-dimensional Hilbert spaces are "complete for dagger-compactness" is shown in

• Peter Selinger, Finite dimensional Hilbert spaces are complete for dagger compact closed categories, Logical Methods in Computer Science (2012)