Spectral theory of real symmetric matrices with random diagonal elements

Can you point me in the direction of any research done on the spectral theory (i.e. eigenvalues and eigenvectors) of real symmetric matrices with random (Gaussian or Lévy) diagonal elements and fixed off-diagonal ones? Any links to papers, theorems or books will be appreciated.

sp.spectral-theory mp.mathematical-physics random-matrices

What about the off-diagonal elements? Should they be fixed? – Igor Rivin Aug 6 '11 at 19:01
Yes, they are fixed. – Katastrofa Aug 7 '11 at 15:55

1 Answer (accepted)

There are two cases to consider, depending on what your off-diagonal looks like. If only two off-diagonals are non-zero, you are in the realm of random Schrödinger operators or random Jacobi operators. Then the special structure of the random variables is preserved up to a renormalization. In particular, the eigenvalues obey Poisson statistics in the limit as the matrix size goes to infinity. The proof of this splits into two parts: first show that the eigenvectors decay exponentially in space, then use this to decouple the eigenvalues.

A similar strategy should work as long as the number of non-zero off-diagonals remains small compared to the matrix size. But I am not sure if this has been proven. Probably not. I have no idea what happens if many off-diagonals are non-zero. My best guess is that it's a mess.

Thanks. Does the proof assume anything about the distribution of the noise on the diagonal? – Katastrofa Aug 8 '11 at 6:12
Poisson statistics needs some a.c. distribution with maybe bounded density. It's in a paper by Minami in CMP in the 90s. – Helge Aug 8 '11 at 15:53
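The situation described in the answer (random diagonal, fixed off-diagonal couplings) is easy to experiment with numerically. The sketch below is my own illustration, not part of the thread: it builds such a random Jacobi matrix with a Gaussian diagonal and counts eigenvalues below a threshold using the standard Sturm-sequence pivot recurrence; the unit coupling strength is an arbitrary choice.

```python
import random

def sturm_count(diag, off, x):
    """Number of eigenvalues strictly below x for the symmetric tridiagonal
    matrix with diagonal `diag` and off-diagonal `off` (Sturm sequence count,
    via the pivots of the LDL^T factorization of T - xI)."""
    count, d = 0, 1.0
    for i, a in enumerate(diag):
        b2 = off[i - 1] ** 2 if i > 0 else 0.0
        d = (a - x) - b2 / d
        if d == 0.0:
            d = -1e-300          # nudge off an exact zero pivot
        if d < 0.0:
            count += 1
    return count

def random_jacobi(n, coupling=1.0, seed=0):
    """Random Gaussian diagonal (the 'noise'), fixed off-diagonal coupling."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)], [coupling] * (n - 1)
```

By Sylvester's law of inertia, the count of negative pivots equals the number of eigenvalues below x, so one can locate individual eigenvalues by bisection without ever forming the full matrix.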
{"url":"http://mathoverflow.net/questions/72242/spectral-theory-of-real-symmetric-matrices-with-random-diagonal-elements","timestamp":"2014-04-19T10:02:34Z","content_type":null,"content_length":"56120","record_id":"<urn:uuid:9a025367-4bf8-4efe-ab6e-88d0f8b1ba05>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
from The American Heritage® Dictionary of the English Language, 4th Edition
• n. Mathematics A regular solid having six congruent square faces.
• n. Something having the general shape of a cube: a cube of sugar.
• n. A cubicle, used for work or study.
• n. Mathematics The third power of a number or quantity.
• n. Slang Cubic inches. Used especially of an internal combustion engine.
• transitive v. Mathematics To raise (a quantity or number) to the third power.
• transitive v. To determine the cubic contents of.
• transitive v. To form or cut into cubes; dice.
• transitive v. To tenderize (meat) by breaking the fibers with superficial cuts in a pattern of squares.

from Wiktionary, Creative Commons Attribution/Share-Alike License
• n. A regular polyhedron having six identical square faces.
• n. Any object more or less in the form of a cube.
• n. The third power of a number, value, term or expression.
• n. A data structure consisting of a three-dimensional array; a data cube
• v. To raise to the third power; to determine the result of multiplying by itself twice.
• v. To form into the shape of a cube.
• v. To cut into cubes.
• v. to use a Rubik's cube.
• n. A cubicle, especially one of those found in offices.

from the GNU version of the Collaborative International Dictionary of English
• n. A regular solid body, with six equal square sides.
• n. The product obtained by taking a number or quantity three times as a factor.
• transitive v. To raise to the third power; to obtain the cube of.

from The Century Dictionary and Cyclopedia
• n. In geometry, a regular body with six square faces; a rectangular parallelopiped, having all its edges equal.
• n. In arithmetic and algebra, the product obtained by multiplying the square of a quantity by the quantity itself; the third power of a quantity: as, 4 × 4 × 4 = 64, the cube of 4; a³ is the cube of a, or x³ of x.
• To raise to the cube or third power. See cube, n., 2.
• To measure the cubic capacity of a hollow object, like that of a skull.

from WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved.
• n. a three-dimensional shape with six square or rectangular sides
• n. a hexahedron with six equal squares as faces
• n. the product of three equal terms
• v. cut into cubes
• v. raise to the third power
• n. any of several tropical American woody plants of the genus Lonchocarpus whose roots are used locally as a fish poison and commercially as a source of rotenone
• n. a block in the (approximate) shape of a cube

Latin cubus, from Greek kubos. N., sense 2b, short for cubicle. (American Heritage® Dictionary of the English Language, Fourth Edition)
From Old French cube, from Latin cubus, from Ancient Greek κύβος (kubos). (Wiktionary)
Clipped form of cubicle (with intentional reference to their common shape per cube, etymology 1), which is from Latin cubiculum ("a small bedchamber or lounge"), from cubare ("to lie down").
{"url":"https://wordnik.com/words/cube","timestamp":"2014-04-23T12:09:52Z","content_type":null,"content_length":"44488","record_id":"<urn:uuid:06157e0e-3b1f-4ef6-8ac9-e8f8f152fffa>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
Precalculus: Graphs & Models and Graphing Calculator Manual Package

Why Rent from Knetbooks? Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option for you. Simply select a rental period, enter your information and your book will be on its way!

Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help
{"url":"http://www.knetbooks.com/precalculus-graphs-models-graphing/bk/9780321501523","timestamp":"2014-04-19T16:09:20Z","content_type":null,"content_length":"37994","record_id":"<urn:uuid:e6841030-d6ab-4b16-812c-cafaa6e7cd74>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving two trig equations

November 26th 2013, 07:13 PM #1 Nov 2013 United States

So I'm trying to find the horizontal and vertical tangents to a polar curve, and what it comes down to is solving the following equations:

4cos(4x)sin(x) + sin(4x)cos(x) = 0
4cos(4x)cos(x) - sin(4x)sin(x) = 0

I'm fairly lost; I keep messing around with the equations, but it doesn't seem to get any simpler. Anything I do to it ends up being equally complex. Any suggestions?

Re: Solving two trig equations

I forgot to mention this is over the interval [0, 2pi). For 4cos(4x)sin(x) + sin(4x)cos(x) I can easily see that at 0 and pi the sin portions will produce solutions, but I can't think of a way to find where else 4cos(4x)sin(x) = -sin(4x)cos(x), especially since cos(4x) and cos(x) will never both equal 0 for the same angle. I have tried reducing cos(4x) and sin(4x) down using half angle formulas, but the end result just seems more messy. I was able to change both equations into forms of tan, with the first equation being:

-tan(4x) = 4tan(x)

This just feels like a step in the wrong direction though.

Re: Solving two trig equations

Each of these functions has plenty of zeros on [0, 2pi) but they don't seem to have any in common. Do you need to solve these equations simultaneously? If so there is no solution. You might be able to take tan(4x) and express it in terms of sin(x) and cos(x); it looks pretty nasty though. Are you sure the factor of 4 at the front of each is correct? If that wasn't there these would just be expressions for sin(5x) and cos(5x). Perhaps post some of your work that led up to these two equations.

Re: Solving two trig equations

Oh sorry, no, I am not trying to solve them simultaneously. Here is what I did leading up to it. This is a calculus problem, but it ends up just being a trig problem in the end. I had put x but it is really theta, so I'll switch back to that to avoid confusion.
I started trying to find the vertical and horizontal tangents of the polar curve $r = \sin(4\theta)$, with $x = r\cos(\theta)$ and $y = r\sin(\theta)$:

$\frac{dy}{dx} = \frac{\frac{d}{d\theta}(r\sin(\theta))}{\frac{d}{d\theta}(r\cos(\theta))} = \frac{\frac{d}{d\theta}(\sin(4\theta)\sin(\theta))}{\frac{d}{d\theta}(\sin(4\theta)\cos(\theta))} = \frac{4\cos(4\theta)\sin(\theta) + \sin(4\theta)\cos(\theta)}{4\cos(4\theta)\cos(\theta) - \sin(4\theta)\sin(\theta)}$

so you see I am left with two different trig equations to solve,

$4\cos(4\theta)\sin(\theta) + \sin(4\theta)\cos(\theta) = 0$
$4\cos(4\theta)\cos(\theta) - \sin(4\theta)\sin(\theta) = 0$

I guess I should probably repost this in the calculus forum now that I think about it.

Re: Solving two trig equations

I did a pretty picture. The red one is the first equation, the blue one is the second. I only just discovered gyazo today so I cannot resist the temptation of seeing it work.

Re: Solving two trig equations

I can show you what I was able to do.

$4 \sin (q) \cos (4 q)+\sin (4 q) \cos (q)$

$\frac{5 \sin ^5(q)}{2}+\frac{3 \sin ^3(q)}{2}+\frac{25}{2} \sin (q) \cos^4(q)-25 \sin ^3(q) \cos ^2(q)-\frac{9}{2} \sin (q) \cos ^2(q)$

Now substitute the $\cos^2(q)$ terms with $(1 - \sin^2(q))$:

$\frac{5 \sin ^5(q)}{2}+\frac{3 \sin ^3(q)}{2}-\frac{9}{2} \left(1-\sin^2(q)\right) \sin (q)-25 \left(1-\sin ^2(q)\right) \sin^3(q)+\frac{25}{2} \sin (q) \cos ^4(q)$

and the $\cos^4(q)$ terms with $(1 - \sin^2(q))^2$:

$\frac{5 \sin ^5(q)}{2}+\frac{3 \sin ^3(q)}{2}+\frac{25}{2} \left(1-\sin^2(q)\right)^2 \sin (q)-\frac{9}{2} \left(1-\sin ^2(q)\right) \sin(q)-25 \left(1-\sin ^2(q)\right) \sin ^3(q)$

and simplify it all (let Mathematica simplify it):

$\frac{1}{2} (5 \sin (5 q)-3 \sin (3 q))$

Setting the above to 0 you get

$5 \sin (5 q)=3 \sin (3 q)$

It's not particularly solvable but it's a remarkable form.
If you do the same thing with the denominator you obtain

$\frac{1}{2} (3 \cos (3 q)+5 \cos (5 q))$

and setting to 0 you get

$3 \cos (3 q)=-5 \cos (5 q)$

OK, you can play with it from here.

Re: Solving two trig equations

Wow, it's ridiculous that it can be simplified down so much, thanks a ton.
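As a numerical sanity check on the simplification above (my addition, not part of the original thread), one can compare the raw numerator and denominator of dy/dx against the simplified forms at many sample angles:

```python
import math

def numerator(q):
    """Numerator of dy/dx as posted in the thread."""
    return 4 * math.cos(4 * q) * math.sin(q) + math.sin(4 * q) * math.cos(q)

def denominator(q):
    """Denominator of dy/dx as posted in the thread."""
    return 4 * math.cos(4 * q) * math.cos(q) - math.sin(4 * q) * math.sin(q)

def numerator_simplified(q):
    """The thread's simplified numerator, (1/2)(5 sin 5q - 3 sin 3q)."""
    return 0.5 * (5 * math.sin(5 * q) - 3 * math.sin(3 * q))

def denominator_simplified(q):
    """The thread's simplified denominator, (1/2)(3 cos 3q + 5 cos 5q)."""
    return 0.5 * (3 * math.cos(3 * q) + 5 * math.cos(5 * q))
```

Evaluating both pairs over a fine grid on [0, 2pi) shows they agree to floating-point precision, which confirms the product-to-sum reduction.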
{"url":"http://mathhelpforum.com/trigonometry/224642-solving-two-trig-equations.html","timestamp":"2014-04-17T19:14:11Z","content_type":null,"content_length":"51184","record_id":"<urn:uuid:c318da9e-acf7-406c-bd16-8d41cdb86eb7>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
A Suite of Cool Logic Programs

You may have heard about the Tarski-Seidenberg theorem, which says that the first-order theory of the reals is decidable, that the first-order theory of the complex numbers is similarly decidable, or that the first-order theory of the integers without multiplication is decidable. In the course of John Harrison's logic textbook Handbook of Practical Logic and Automated Reasoning, all three of these algorithms (and many more) are implemented. Furthermore, you can download and play with them for free. (However, I still recommend checking out the book, especially if you are looking for a good textbook for a course on logic with a concrete, computational bent.) Below, I'll describe how to install the programs and try them out. There are many more interesting functions in this suite that I haven't described.

The software is written in OCaml and can be run interactively in an OCaml toplevel (don't worry, you won't actually need to know any OCaml). Download and install OCaml as well as its preprocessor Camlp5 (which is used for formatting formulas nicely). Then, download the code from here (under "All the code together") and unzip it somewhere. To run it, go to wherever you unzipped it and type make interactive in a shell. (At least, that's what worked for me on Mac OS X. Other systems may be different.)

The Tarski-Seidenberg Theorem. The Tarski-Seidenberg theorem implies that there is a decision procedure which, given a first-order sentence over $\mathbb{R}$ using plus, times, 0, and 1, will tell you if it's true or not. The function real_qelim implements this. Let's try it out. (The symbol # indicates the beginning of the prompt; don't type that, just type in what's after it.) This function knows that not all quadratic polynomials have roots, but all cubics do.

# real_qelim <<forall b c. exists x. x^2 + b*x + c = 0>>;;
- : fol formula = <<false>>
# real_qelim <<forall b c d. exists x.
x^3 + b*x^2 + c*x + d = 0>>;;
- : fol formula = <<true>>

Many geometric puzzles can, in theory, be solved automatically by this function. Unfortunately, it is too slow for most interesting ones. Harrison notes that there are open problems about kissing numbers of high-dimensional spheres which could be solved in theory by this algorithm, although in practice it is an unworkable approach.

This algorithm actually does something stronger than decide the truth of first-order sentences: it does quantifier elimination, which means that if you give it a formula with free variables, it will give you a quantifier-free formula in those same free variables (in the case of a sentence, which has no free variables, that means either the formula "true" or the formula "false"). For example, if you've forgotten the quadratic formula and want to know what the condition is for a quadratic polynomial to have a root:

# real_qelim <<exists x. x^2 + b*x + c = 0>>;;
- : fol formula =
<<(0 + c * 4) + b * (0 + b * -1) = 0 \/ ~(0 + c * 4) + b * (0 + b * -1) = 0 /\ ~(0 + c * 4) + b * (0 + b * -1) > 0>>

Note that there is no claim that the formula it gives you will be completely simplified, only that it will be correct.

Deciding Sentences over the Complex Numbers. We can similarly use the function complex_qelim to do quantifier elimination over the complexes. The fact that this is possible is easier to prove than the corresponding fact for the reals, and the algorithm is similarly faster.

# complex_qelim <<forall x. x^3 = 1 ==> x = 1>>;;
- : fol formula = <<false>>

The following sentence is also true over the reals (although for a different reason than why it's true over the complexes), but it takes significantly longer for the real quantifier elimination algorithm to decide it.

# complex_qelim <<forall x1 x2 x3.
(x1^3 = 1 /\ x2^3 = 1 /\ x3^3 = 1 /\ ~(x1 = x2) /\ ~(x1 = x3) /\ ~(x2 = x3)) ==> x1 + x2 + x3 = 0>>;;
- : fol formula = <<true>>

Suppose we read on Wikipedia that the translation of the limaçon $r = b + a\cos\theta$ to rectangular coordinates is $(x^2 + y^2 - ax)^2 = b^2(x^2 + y^2)$. We can verify this (I've used s to represent $\sin\theta$ and c to represent $\cos\theta$):

# complex_qelim << forall r s c x y. (x^2 + y^2 = r^2 /\ r * c = x /\ r * s = y ==> forall a b. (r = b + a * c ==> (x^2 + y^2 - a * x)^2 = b^2 * (x^2 + y^2)))>>;;
- : fol formula = <<true>>

Presburger Arithmetic. Finally, first-order sentences with plus and less-than over the integers and over the natural numbers are decidable. The relevant functions are integer_qelim and natural_qelim. Even though multiplication of variables is prohibited, we can still multiply by constants (since for example, instead of $2x$ we could have written $x + x$ anyway). An example Harrison gives is: There is an old (easy) puzzle which is to show that, with 3- and 5-cent stamps, you can make an $n$-cent stamp for any $n\geq 8$.

# natural_qelim <<forall n. n >= 8 ==> exists x y. 3 * x + 5 * y = n>>;;
- : fol formula = <<true>>

2 responses to "A Suite of Cool Logic Programs"

1. I'm just curious about what the program would do in response to the notorious Steiner-Lehmus theorem in Euclidean geometry. Can the program be modified so that it searches for direct proofs?

2. It would prove it, probably after a very long time. There are weaker methods of geometric theorem proving available in the package (see geom.ml), but I'm not sure that any of them correspond to a "direct proof" (in fact, I'm pretty sure they don't as they are too strong). E.g., using Gröbner bases or this thing called Wu's method, both of which can be used to prove statements which can be put in the form $\forall \vec{x}\,(P(\vec{x})\rightarrow Q(\vec{x}))$, where $P$ and $Q$ are polynomials.
Embarrassingly, I can’t find the book right now, and that’s all I know about the situation.
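The 3- and 5-cent stamp claim from the Presburger section is also easy to confirm by brute force outside the theorem prover. A quick sketch of my own, independent of Harrison's code:

```python
def representable(n):
    """Can n cents be made from 3- and 5-cent stamps, i.e. n = 3x + 5y
    with x, y >= 0? Try every feasible count of 5-cent stamps."""
    return any((n - 5 * y) % 3 == 0 for y in range(n // 5 + 1))
```

Checking all n up to a few hundred agrees with what natural_qelim decided: every n >= 8 is representable, while 7 (the Frobenius number of 3 and 5) is not.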
{"url":"http://xorshammer.com/2009/05/14/a-suite-of-cool-logic-programs/","timestamp":"2014-04-18T00:13:41Z","content_type":null,"content_length":"68502","record_id":"<urn:uuid:bff11568-67e8-49cd-a2c6-1b701d80ae74>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptology ePrint Archive: Report 2004/195

Signed Binary Representations Revisited

Katsuyuki Okeya and Katja Schmidt-Samoa and Christian Spahn and Tsuyoshi Takagi

Abstract: The most common methods for computing exponentiation of random elements in Abelian groups are sliding window schemes, which enhance the efficiency of the binary method at the expense of some precomputation. In groups where inversion is easy (e.g. elliptic curves), signed representations of the exponent are meaningful because they decrease the amount of required precomputation. The asymptotically best signed method is wNAF, because it minimizes the precomputation effort whilst the non-zero density is nearly optimal. Unfortunately, wNAF can be computed only from the least significant bit, i.e. right-to-left. However, in connection with memory-constrained devices, left-to-right recoding schemes are by far more valuable. In this paper we define the MOF (Mutual Opposite Form), a new canonical representation of signed binary strings, which can be computed in any order. Therefore we obtain the first left-to-right signed exponent-recoding scheme for general width w by applying the width-w sliding window conversion on MOF left-to-right. Moreover, the analogous right-to-left conversion on MOF yields wNAF, which indicates that the new class is the natural left-to-right analogue of the useful wNAF. Indeed, the new class inherits the outstanding properties of wNAF, namely the required precomputation and the achieved non-zero density are exactly the same.
Category / Keywords: foundations / addition-subtraction chains, scalar multiplication, exponentiation, signed binary, elliptic curve cryptosystem
Publication Info: Paper without appendix is published in the proceedings of Crypto 2004
Date: received 11 Aug 2004
Contact author: samoa at informatik tu-darmstadt de
Available format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | PDF | BibTeX Citation
Version: 20040812:045802 (All versions of this report)
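For context, the wNAF the abstract refers to is the standard right-to-left width-w recoding. The sketch below is the textbook algorithm, not the paper's left-to-right MOF construction:

```python
def wnaf(n, w=2):
    """Width-w non-adjacent form of n > 0, least significant digit first.
    Digits d are zero or odd with |d| < 2^(w-1); any window of w
    consecutive digits contains at most one non-zero digit."""
    digits = []
    while n != 0:
        if n % 2 == 1:
            d = n % (1 << w)            # n mod 2^w
            if d >= (1 << (w - 1)):
                d -= (1 << w)           # pick the signed residue
            n -= d                      # now n is divisible by 2^w
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits
```

With w = 2 this is the classical NAF; for instance 7 recodes as 8 - 1, i.e. digits [-1, 0, 0, 1]. The low non-zero density of such recodings is what makes them attractive for scalar multiplication on elliptic curves, where negating a point is essentially free.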
{"url":"http://eprint.iacr.org/2004/195","timestamp":"2014-04-20T00:45:31Z","content_type":null,"content_length":"3370","record_id":"<urn:uuid:b20f3450-be7c-498e-947a-d8077eb860ea>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
Homology of special linear group over local field

I am trying to compute the group $H_1(SL_2(\mathbb{Z}_2),M)$, where $\mathbb{Z}_2$ denotes the $2$-adic integers and $M$ is the module $\mathbb{Z}_2 \oplus \mathbb{Z}_2$. I suppose that the group acts on $M$ by matrix multiplication. I found a similar-looking computation in the paper of Dupont and Sah, "Homology of Euclidean groups of motions made discrete and Euclidean scissors congruences". It was shown there that $H_1(SO_3(\mathbb{R}),\mathbb{R}^3) = \Omega^1_{\mathbb{R}}$. I would be very grateful for any help with computing the group or for any interpretation of its elements.

ac.commutative-algebra kt.k-theory-homology
{"url":"http://mathoverflow.net/questions/136794/homology-of-special-linear-group-over-local-field","timestamp":"2014-04-16T10:37:39Z","content_type":null,"content_length":"44898","record_id":"<urn:uuid:a94d6e3c-001d-4ccb-ba30-f94650e6ead1>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00176-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof by induction

Use mathematical induction to prove that [tex]\frac{n^{3}+5n}{3}[/tex] is an even integer for each natural number n. I am familiar with proof by induction...

Put n+1 in place of n. (n+1)^3 + 5(n+1) = n^3+3n^2+3n+1+5n+5 = (n^3+5n) + 3n(n+1) + 6. Now divide each term by 3 and see what kind of number you get. Since you are familiar with induction, this should be enough.
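A quick empirical check of the claim for small n (no substitute for the induction proof, just a confidence builder):

```python
def third_of(n):
    """(n^3 + 5n) / 3, asserting along the way that 3 really divides it."""
    assert (n ** 3 + 5 * n) % 3 == 0
    return (n ** 3 + 5 * n) // 3

# base case: n = 1 gives 6/3 = 2, which is even; the induction step above
# adds n(n + 1) + 2, and n(n + 1) is even as a product of consecutive integers
```

Note how the hint's decomposition appears here: dividing (n^3+5n) + 3n(n+1) + 6 by 3 gives the previous value plus n(n+1) + 2, both even.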
{"url":"http://www.physicsforums.com/showthread.php?t=315407","timestamp":"2014-04-18T13:58:01Z","content_type":null,"content_length":"25367","record_id":"<urn:uuid:e94a3396-e6a9-4518-9523-5102783cf7fd>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
Buying A Home: Should I Pay Points

Does It Make Sense to Pay Points? Well...let us first define a "point". A point, often referred to as a "discount point" or "origination fee", is equal to one percent of the loan amount. Points are charged by the lender and are paid at closing. "Discount points" allow the buyer to "buy down" the interest rate for the loan. Initially, this may sound like a good idea, but you'll want to consider a couple of things. First, how low are current interest rates? If rates are fairly low (hovering around 6.5% at the time of this writing), there's really no need to pay points. Buying down your interest rate when rates are already low really only increases your up-front costs, rather than saving you money. Second, how long do you plan on staying in your home? Let's look at an example. In today's market, you might find a 30-year fixed rate loan for $170,000 at 6.00 percent with 2 points. This means that, for the life of the loan (all 30 years), you will have an interest rate of 6 percent. All that's required of you is $3,400 ($170,000 x 2 percent) at closing for this example (this would be in addition to other closing costs). On the other hand, the same lender may offer you a rate of 6.5 percent with no points. Now, which way is the better deal? The monthly principal and interest (P&I) payment at 6.00 percent on $170,000 is $1,019.23. At 6.50 percent the P&I payment increases to $1,074.51 per month -- a difference of $55.28 per month. If we divide $3,400 by $55.28 (the monthly savings), we arrive at a simple payback period of roughly 62 months.
So, we must subtract $8.50 from the $55.28. This leaves you with a figure of $46.78. To figure your true payback period, simply divide $3,400 by the $46.78, and your payback period increases to just over 72 months (approximately six years). The answer? Statistically speaking, many people don't hold onto their mortgages for six years before selling or refinancing. You must remember, points are never refundable. If you decide to sell or refinance your home before the payback period ends, you've actually lost money. For most people, the answer would be No...it doesn't make sense to pay points. You would be better off to take the higher interest rate and put that money to better use. On the other hand, if you are absolutely positive that you are going to keep the mortgage beyond the payback period (preferably well beyond the payback period) then paying points may be an option worth considering. One would want to consider, however, just how positive we really are about anything.
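The article's arithmetic can be packaged into a small calculator. The sketch below assumes the standard amortization formula with monthly compounding and reproduces the figures quoted above:

```python
def monthly_payment(principal, annual_rate, years=30):
    """Standard fixed-rate amortization formula, monthly compounding."""
    r = annual_rate / 12
    n = years * 12
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

pay_with_points = monthly_payment(170_000, 0.060)   # about $1,019.23
pay_no_points = monthly_payment(170_000, 0.065)     # about $1,074.51
points_cost = 170_000 * 0.02                        # two points: $3,400

monthly_savings = pay_no_points - pay_with_points   # about $55.28 per month
lost_interest = points_cost * 0.03 / 12             # $8.50 per month at 3%
true_payback_months = points_cost / (monthly_savings - lost_interest)
```

Running this gives a true payback period of a bit over 72 months, matching the article's "just over six years" conclusion; swap in your own loan amount and rates to test other scenarios.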
{"url":"http://www.oceanislebeachhomesforsale.com/buying-a-home/paying-points.aspx","timestamp":"2014-04-21T14:40:31Z","content_type":null,"content_length":"27874","record_id":"<urn:uuid:b32c0ea2-6116-47bc-a711-ccd3a3c7d35a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
'simple' proof

December 7th 2007, 09:26 AM #1 Junior Member Oct 2007

Is there an equilateral triangle with all its vertices at nodes of the integer square lattice? If the answer is YES, give an example; if the answer is NO, prove it.

December 7th 2007, 11:46 AM #2

You might as well assume that one vertex is at the origin, and that another one is at (p,q), for some integers p and q. The third vertex will be at a point which is the image of (p,q) under a rotation about the origin through an angle π/3 radians (or 60°). Find the coordinates of this point, and see whether they can be integers or not.
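Following the hint, one can check numerically that rotating a nonzero lattice point (p, q) by 60° never lands on another lattice point; the coordinates of the third vertex pick up a factor of sqrt(3). A small illustration (the grid size and floating-point tolerance are arbitrary choices of mine):

```python
import math

def rotate_60(p, q):
    """Image of (p, q) under rotation about the origin through 60 degrees:
    ((p - sqrt(3) q) / 2, (sqrt(3) p + q) / 2)."""
    c, s = 0.5, math.sqrt(3) / 2
    return (p * c - q * s, p * s + q * c)

def is_lattice_point(pt, eps=1e-9):
    """Are both coordinates within eps of an integer?"""
    return all(abs(v - round(v)) < eps for v in pt)
```

The full proof then amounts to the observation that sqrt(3) is irrational, so (p - sqrt(3) q)/2 and (sqrt(3) p + q)/2 can both be integers only when p = q = 0.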
{"url":"http://mathhelpforum.com/discrete-math/24386-simple-proof.html","timestamp":"2014-04-21T10:05:04Z","content_type":null,"content_length":"33101","record_id":"<urn:uuid:df99b1ce-559e-48e5-8415-9466d48cce4f>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of Multiprocessor scheduling

In computer science, multiprocessor scheduling is an NP-hard optimization problem. The problem statement is: "Given a set of jobs, where job i has length l_i, and a number of processors m, what is the minimum possible time required to schedule all jobs on m processors such that none overlap?" The applications of this problem are numerous, but are, as suggested by the name of the problem, most strongly associated with the scheduling of computational tasks in a multiprocessor environment.

A simple, often-used algorithm is the LPT algorithm (Longest Processing Time), which sorts the jobs by processing time and then assigns each job to the machine with the earliest end time so far. This algorithm achieves a sharp (tight) upper bound of (4/3 - 1/(3m)) OPT.

Similar Problems

Since multiprocessor scheduling is NP-complete, it can be restated as any other NP-complete problem. One of the simplest restatements of the problem is as a linear bin packing problem, where each processor is a "bin", and each job is represented by an object to pack, whose size is proportional to the job's length. Thus, the approximation algorithms used with bin packing can easily be adapted to multiprocessor scheduling.
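A minimal sketch of the LPT algorithm described above (my own illustration, not from the original entry). The input [3, 3, 2, 2, 2] with m = 2 attains Graham's bound exactly: LPT gives makespan 7 while the optimum is 6, and 7/6 = 4/3 - 1/(3*2):

```python
import heapq

def lpt_makespan(jobs, m):
    """Makespan of the Longest Processing Time schedule on m machines."""
    loads = [0] * m
    heapq.heapify(loads)                  # min-heap of machine end times
    for t in sorted(jobs, reverse=True):  # longest processing time first
        lightest = heapq.heappop(loads)   # machine with the earliest end time
        heapq.heappush(loads, lightest + t)
    return max(loads)
```

Using a heap keeps each assignment O(log m), so the whole schedule costs O(n log n) for the sort plus O(n log m) for the placements.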
{"url":"http://www.reference.com/browse/Multiprocessor+scheduling","timestamp":"2014-04-16T07:14:22Z","content_type":null,"content_length":"77944","record_id":"<urn:uuid:274f89e1-4d6e-413c-b5ba-96725375656c>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00002-ip-10-147-4-33.ec2.internal.warc.gz"}
Urgent, please help! Solving M/M/3 queuing system problems using Monte Carlo method

April 29th 2009, 01:54 AM #1 Apr 2009 South England

I have an M/M/3 queuing system with given parameters as follows:

lambda = 2.1
mu = 0.8
L = 8.04
Lq = 5.41
W = 3.83
Wq = 2.58

I have to find the bulk probability given by B(3, lambda/mu). I also have to use 12 given random numbers to simulate the arrivals of 6 customers, with inter-arrival times given by epsilon(lambda) and service time epsilon(mu). How do I do this??
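Assuming that B(3, lambda/mu) is the Erlang loss (blocking) formula and that epsilon(.) denotes the exponential distribution (both interpretations are mine, not stated in the post), a sketch of the two pieces:

```python
import math
import random

def erlang_b(c, a):
    """Erlang B blocking probability B(c, a) for c servers and
    offered load a = lambda/mu."""
    terms = [a ** k / math.factorial(k) for k in range(c + 1)]
    return terms[-1] / sum(terms)

def exponential_sample(rate, rng):
    """Inverse-transform sample from Exp(rate): uses one uniform random
    number per draw, matching the 12-random-number setup of the problem."""
    return -math.log(rng.random()) / rate

lam, mu = 2.1, 0.8
blocking = erlang_b(3, lam / mu)          # B(3, 2.625), roughly 0.30

rng = random.Random(42)                   # the seed stands in for the given numbers
interarrivals = [exponential_sample(lam, rng) for _ in range(6)]
services = [exponential_sample(mu, rng) for _ in range(6)]
arrival_times = [sum(interarrivals[:k + 1]) for k in range(6)]
```

With the given table of 12 random numbers, one would replace rng.random() by the listed values, using six for the inter-arrival times and six for the service times.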
{"url":"http://mathhelpforum.com/advanced-statistics/86397-urgent-please-help-solving-m-m-3-queuing-system-problems-using-monte-carlo-method.html","timestamp":"2014-04-17T17:07:08Z","content_type":null,"content_length":"30778","record_id":"<urn:uuid:42419a4b-1a75-453c-86d2-e75906db4f78>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help

Posted by Jus on Tuesday, May 19, 2009 at 6:25pm.

One day, a controversial video is posted on the Internet that seemingly gives concrete evidence of life on other planets. Suppose that 50 people see the video the first day after it is posted and that this number doubles every day after that.
a) Write an expression to describe the number of people who have seen the video t days after it is posted.
b) One week later, a second video is posted that reveals the first as a hoax. Suppose that 20 people see this video the first day after it is posted and that this number triples every day after that. Write an expression to describe the number of people who have seen the 2nd video t days after it is posted.
c) Set the two expressions from parts a and b to equal each other and then solve for t. What does this solution mean?

Ok, can you just check my answers for parts a and b; for part c I don't know what this means.
a) A(t)=50(2)^t
b) A(t)=20(3)^t
c) I set them equal to each other and the variable disappeared on me, i.e. it cancelled out. What does this mean?

• pre-calc - bobpursley, Tuesday, May 19, 2009 at 6:42pm
Right on a, b.
50 2^t = 20 3^t
Take the log of each side:
log 50 + t*log 2 = log 20 + t*log 3
Solve for t. Log to any base; your calculator is your friend on this.

• pre-calc - Jus, Tuesday, May 19, 2009 at 7:06pm
Ok but I have Now what?

• pre-calc - Jus, Tuesday, May 19, 2009 at 7:07pm
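For part c the variable does not actually disappear; following bobpursley's hint and solving for t (a sketch added here for illustration):

```python
import math

# 50 * 2^t = 20 * 3^t  <=>  50/20 = (3/2)^t  <=>  t = log(2.5) / log(1.5)
t = math.log(50 / 20) / math.log(3 / 2)

# t is about 2.26: roughly two and a quarter days after its posting,
# the second video has reached as many viewers as the first had at the
# same point after its own posting
```

The base of the logarithm does not matter, since it cancels in the ratio.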
{"url":"http://www.jiskha.com/display.cgi?id=1242771902","timestamp":"2014-04-16T11:21:38Z","content_type":null,"content_length":"9724","record_id":"<urn:uuid:883bda5a-e50c-4dba-ada7-8c1029dbab29>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
We consider a fashion discounter supplying its many branches with integral multiples from a set of available lot-types. For the problem of approximating the branch- and size-dependent demand using those lots, we propose a tailored exact column generation approach assisted by fast algorithms for intrinsic subproblems, which turns out to be very efficient on our real-world instances.
{"url":"http://opus.ub.uni-bayreuth.de/opus4-ubbayreuth/solrsearch/index/search/searchtype/collection/id/13330/start/0/rows/10/author_facetfq/J%C3%B6rg+Rambau/subjectfq/Standortproblem","timestamp":"2014-04-19T04:32:34Z","content_type":null,"content_length":"16662","record_id":"<urn:uuid:2eb8bb89-1285-45bd-93b0-4e4d641e0966>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Length of the vector (electrostatic cylinder)

My problem is that I'm confused about a hint I was given in this problem. I usually use the law of cosines to find the length of [itex]\vec{r}-\vec{r'}[/itex]. But the hint here says that I should make it [itex][r^2 + (z - z_0)^2]^{1/2}[/itex]. Where does this come from? I can't quite get my head around the geometrical idea of this hint. Can't the law of cosines be used here?
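One common setup that produces the hint (an assumption, since the full problem statement isn't quoted): the field point lies on the cylinder's axis at height z while the source point sits on the cylinder wall at radius r and height z_0. The radial and axial separations are then perpendicular, so the law of cosines degenerates to the Pythagorean form [r^2 + (z - z_0)^2]^{1/2}, independent of the azimuthal angle. A quick NumPy check:

```python
import numpy as np

r, z, z0 = 2.0, 5.0, 1.0  # cylinder radius, field-point height, source height

for phi in np.linspace(0.0, 2.0 * np.pi, 9):
    source = np.array([r * np.cos(phi), r * np.sin(phi), z0])  # point on the cylinder wall
    field = np.array([0.0, 0.0, z])                            # point on the axis
    separation = np.linalg.norm(field - source)                # |r - r'|
    # Independent of phi, the distance is sqrt(r^2 + (z - z0)^2)
    assert np.isclose(separation, np.hypot(r, z - z0))
```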
The Case of the Disappearing Mathematician Starting in the 1930s, Nicolas Bourbaki published dozens of papers, becoming a famous mathematician. There was just one problem: He didn't exist. Author and scholar Amir Aczel talks about the genius mathematician who wasn't. IRA FLATOW, host: Up next, another sort of mystery about a mathematician. I want you to ask your nearest mathematician, of course the person sitting next to you, what he or she knows about Nicolas Bourbaki. And you're in for a long answer because author Amir Aczel first encountered this legend as an undergraduate math major in the University of California at Berkeley. That was some time ago. And it goes something like this. And I'm going to have him fill in the end. I'll just tell you the beginning. In the 1930s, a French mathematician by the name of Nicolas Bourbaki set out to unify and redefine mathematics. And he published many papers, many books. His works were considered some of the most widely influential mathematic text of the 20th century. For example, do you remember the new math you learned as a kid? Well he's probably responsible for that. There's only one little fly in this ointment, and that is Nicolas Bourbaki never existed. He was all a hoax. The math was real; the person wasn't. So who was he? My next guest, Amir Aczel, has written extensively on science and mathematics, including the books, The Riddle of the Compass, Fermat's Last Theorem; his latest book is The Artist and the Mathematician, tells the story of how Nicolas Bourbaki came to be and how the longest running joke in mathematics has changed the world. Dr. Aczel is a visiting scholar in the history of science at Harvard and a research fellow at Boston University. And he joins us today in our SCIENCE FRIDAY studios. Welcome back to the program. Dr. AMIR ACZEL (Author, The Artist and the Mathematician): Thank you. It's a pleasure to be here. 
FLATOW: Did mathematicians know this was a hoax all this time when you were an undergraduate and you heard about him or - and you just play along with this, or what's the story on this? Dr. ACZEL: Well, when I was an undergraduate he was already well known in America. FLATOW: Yeah. Dr. ACZEL: In France, it was well known earlier. In one of the stories that - the funniest stories about Bourbaki is that they weren't sure whether Americans and people in other countries knew about them. And at some point Bourbaki - actually Andre Weil who was one of the main members, he pronounced it Vay(ph), one of the founders - the key founder wrote a letter to the American Mathematical Society in Providence, Rhode Island, saying I request membership in the American Mathematical Society. (Soundbite of laughter) FLATOW: Under that name. Dr. ACZEL: Yes. FLATOW: Why was that name chosen? Dr. ACZEL: Well the name was chosen because there was a general, a German - I'm sorry, a Greek - of Greek origin, a Frenchman of Greek origin by the name of Charles Bourbaki, who was a general and lost France a major battle against Prussia. So he was an anti-hero. And he tried to commit suicide and couldn't succeed either. So he was a real loser. And these French mathematicians, who loved jokes and pranks, chose his name for their fictitious mathematician. I wanted to add, though, that Ralph Boas, who was the secretary of the American Mathematical Society, already knew who Bourbaki was. FLATOW: Ah. Dr. ACZEL: This was in the '50s. FLATOW: Right. Dr. ACZEL: So he wrote a letter back to France saying, I understand this is not an application from an individual. You'll have to pay the institutional rate, which is much higher. (Soundbite of laughter) FLATOW: Talking with Amir Aczel, the author of The Artist and the Mathematician: The Story of Nicolas Bourbaki, the Genius Mathematician Who Never Existed on TALK OF THE NATION: SCIENCE FRIDAY from NPR News. Why create him?
I mean, why - was this just a joke, really, of mathematicians who are because they're kind of funny people creating this fictitious person? Dr. ACZEL: Yes. They loved pranks. But in addition to that there was a reason behind that. And the reason was that French mathematics was not doing very well in the beginning part of the 20th century. There was a book by a person named Gorsa(ph), who was really not a very good textbook writer. And this was used in their calculus sequence, which is the most important basic math course. FLATOW: Right. Dr. ACZEL: And students were not doing well. It was poorly written. The examples were bad. It was just a very bad textbook. So these six mathematicians by the name of Cartan, Chevalley, Delsarte, Dieudonne, Possel, and Weil, met together in a café in Paris. And the café is in the best place in Paris, of course. FLATOW: Of course. But of course. Dr. ACZEL: Yes. It's on the Boulevard Saint-Michel, right next to the, below the Pantheon in the center. Today it's like a McDonald's, a French type of McDonald's. But there used to be a great café there. They met there. And they said, let's beat this guy, Gorsa. We'll write our own textbook. And since textbooks can't be written or they didn't think could be written by six people or they - and they didn't want to reveal their identities. They were the young Turks who were trying to take over French mathematics from the older generation. So they decided to invent this person that didn't exist. Now there's a history behind it. Andre Weil was really into these jokes. He just loved these jokes. And a few years earlier in 1923 a person by the name of Raoul Husson, H-u-s-s-o-n, played a prank on all the entering class in mathematics at the Ecole Normale Superieure, one of the main - the most prestigious French schools where mathematics was very important.
And he gathered all the freshman in a room and came dressed with a fake beard and strange outfit to look like the general, and wrote on the board, Theorem of Bourbaki, you are to prove the following. And of course it was all nonsense. And the people were sitting there scratching their heads trying to figure out what it was. And they eventually knew it was a prank. He wasn't one of the people who were there. He had heard about it. Somebody came to him and said, you know, I really did understand that theorem. So it was a joke that seemed to work on people. FLATOW: Now what about all the mathematics that was behind it? Was it still - was the math a joke also or was it really good mathematics that they were coming out with? Dr. ACZEL: What they were creating was excellent mathematics. FLATOW: Right. Dr. ACZEL: And it is still affecting us today. FLATOW: Give us an idea of the kind of things. Dr. ACZEL: Well mathematics was not done very rigorously and not with great abstraction or generality at the time of Bourbaki in the '30s. The meeting in that café was 1934. At that time, for example, Poincare, Henri Poincare, who is a very famous now because of the Poincare conjecture, was the epitome - he epitomized mathematics that was imprecise. He had a great insight and could do mathematics very, very well. But he didn't care about details, Epsilon, Delta type things, as mathematicians would say. He didn't care about very rigorous proofs. And that frame of mind entered into instruction mathematics. FLATOW: All right. We're going to have Amir take a breath, take a time out here. We're all going to come back and talk about this hoax of a mathematician. Stay with us. We'll be right back after this short break. I'm Ira Flatow. This is TALK OF THE NATION: SCIENCE FRIDAY from NPR News. (Soundbite of music) FLATOW: This is TALK OF THE NATION: SCIENCE FRIDAY. I'm Ira Flatow. 
A brief program note, coming up on Monday Neal Conan talks with comedienne Paula Poundstone about her new book, The Best Part of Her Arrest, and why she never wins on WAIT, WAIT, DON'T TELL ME. Plus, O.J. Simpson's latest collision with the media spotlight. That's on Monday's TALK OF THE NATION. We're talking this hour on SCIENCE FRIDAY with Amir Aczel, author of The Artist and the Mathematician: The Story of Nicolas Bourbaki - got to get it right - The Genius Mathematician Who Never Existed, published this year by Thunder's Mouth Press. Amir, tell us about the accomplishments they did make, including, if I read correctly, this new math that we were all taught as kids. Dr. ACZEL: Right. So they were sitting in this café in the heart of the Left Bank in Paris. And they're talking about, these six mathematicians, - later, a seventh joins later, some other - so the number after the first meeting is never precise. Nobody knows. FLATOW: They could keep this secret that this is a fictitious mathematician that they were imitating? Dr. ACZEL: Right. They could keep the secrets for quite a while. FLATOW: Yeah. Dr. ACZEL: And so they start with a very limited goal, which is to rewrite the calculus textbook of Gorsa for the next 25 years. But once their work starts developing, and they meet in resort towns. These French mathematicians love to spice up their life. FLATOW: Why not? Why not? Dr. ACZEL: Yes. They go to resorts, beaches, country inns, places like that, skiing areas... FLATOW: I'm living in the wrong era and the wrong business. Go ahead. Dr. ACZEL: So they really have a great time. And they realize that their project is growing and growing and becoming more important. And what it really becomes is Elements of Mathematics, as the volumes that they produce together. And that's named after Euclid's Elements. And instead of for the next 25 years, it's for the next 1,000 years. They're trying to rewrite mathematics for the next 1,000 years. 
And what they do is they start with set theory, and that's where the new math comes in. Since they decided to build mathematics up from the foundations, from set theory as a foundation, that gave people the idea that mathematics could be taught and new math could be taught starting with sets and operations rather than numbers and equations and things like that. FLATOW: 1-800-989-8255 if you want to talk math with Amir Aczel, author of The Artist and the Mathematician. I guess for a joke they sure took it pretty seriously, didn't they? Dr. ACZEL: Yes. They took their jokes really seriously. FLATOW: Did they give Bourbaki a fake history, the whole bit? Did they have to defend who he was? Dr. ACZEL: Yes. They created a baptismal certificate for him... FLATOW: No kidding. Dr. ACZEL: ...and a godmother whose name was Evalin de Possel(ph). She was the godmother. And he sprang into life already as an adult because he had a daughter who was getting married. Bourbaki's daughter is Betty, and she has wedding invitations sent, you know, in her name. FLATOW: Wow. Tell us about how Bourbaki nearly cost Andre Weil his life. Dr. ACZEL: Andre Weil was so taken with this joke. And there are other jokes that he liked. At one point there was somebody playing - Boulevard Mount Parnassus in Paris is called after Mount Parnassus which is - there was a pile of garbage there at the bottom of the center of Paris. And at some point there was somebody playing a prank on a passerby. He was standing behind a podium on the little stage there asking for money for the nation of Pauldavia(ph). He's the Prime Minister of Pauldavia whose people are so poor they have no money for pants. And then he steps away from the podium and you see he's wearing no pants. He's in his underwear. So Weil loved these jokes. And at some point, you know, he also didn't want to serve in the French army.
So he escaped to Finland and he's caught by the - in November 1939, as the Russians start bombing Helsinki his - the police suspect him. They arrest him and they find a fake identity that he has in the name of Nicolas Bourbaki and wedding invitations for Betty Bourbaki and calling cards in the name of Bourbaki. So they think obviously he's a spy. In addition, they find letters in Russian inviting him to give talks, mathematical talks in Russia. FLATOW: No kidding. Dr. ACZEL: But mathematician, he was no mathematician in (unintelligible). So they're sure he's a spy and they want to execute him. There's no trial here, nothing. But Weil's life was just like a fable. It's very strange, or a fairy tale. They're going to execute him as a spy. There's absolutely no doubt in anybody's mind that this is a Russian spy. In addition, he has been on the frontier area with Russia with his wife looking at things and writing things down. That was his papers. He was writing his mathematical papers. So they go to this official named Nevanlinna, Rolf Nevanlinna. And they say, tomorrow we're going to execute this guy but he says he knows you. Nevanlinna was related to - did some mathematics. And he says, what's his name? And he says Andre Weil. And he says, yes, I do know him. Do you really have to execute him? Why don't you just deport him to Sweden? And the head of police says, oh, that's something I hadn't thought of. So they deport him to Sweden instead of executing him. So it almost cost him his life, this idea of the prank of Bourbaki. FLATOW: Wow. That's very interesting. 1-800-989-8255. You know, I think if you ask anybody what they think of the stereotypical Hollywood mathematician or what happens when a group of mathematicians get together, they think it's going to be a lot of quiet little scribbling on a notepad. But that's not what happened when these guys got together, was it? Dr. ACZEL: No. They had wild parties everywhere.
They had a good time, and in fact Bourbaki championed doing mathematics in nature. So they'd take a blackboard and put it outside in the park somewhere and do mathematics in the open. They really not only redid mathematics starting at the foundations of set theory and really introducing rigor into mathematics. Proofs had to be proofs, not just hand waving, as mathematicians would say. So the mathematical proofs had to be correct and have some generality and abstractions so they are very valuable to mathematics today. So in a sense, what Bourbaki did - and this is confirmed by American and other mathematicians too, not just the French. Because of course the French would tell you Bourbaki was everything. But other mathematicians say too that in fact Bourbaki did introduce into 20th century mathematics this rigor and abstraction which we have today. So the reason that we have mathematical proofs done correctly and elegantly is due to Bourbaki, or started with the work of Bourbaki. FLATOW: When did the word get out? When was the cover blown? Dr. ACZEL: Well, the cover was blown during the war because the group dispersed. And Andre Weil came to this country. He was in New York in 1942, '43, then he was a - I'm sorry - he went to the University of Chicago and he died at Princeton in 1998. And once they spread around the world because of the Second World War, the word, you know, leaked out about Bourbaki. FLATOW: Are there still any groups that get together now? Dr. ACZEL: Yes. FLATOW: Any remnants of them? Dr. ACZEL: Yes. The Bourbaki group still exists on paper, but former members swear that Bourbaki is dead because they left. They had to leave at 50. Everybody who reaches age 50 is no longer a member of Bourbaki. And they make a strong case that the members of Bourbaki today, none of them are among the top French mathematicians - the top 40 French mathematicians - and therefore the group no longer exists, they say, and Bourbaki is dead.
I did go to a seminar Bourbaki - a Bourbaki seminar in Paris and these were - in the heydays the rooms were full and there was excitement. A lot of mathematics - important mathematics was done there. When I came to this room in Ecole Normal Superieure where all these pranks took place - actually Institute on Refrancorei(ph), which is nearby. There's a little room there and probably 15 mathematicians sitting, half of them asleep, somebody writing on the board a theorem, and then they all left the room. So the excitement of the Bourbaki group is no longer there. FLATOW: Let me ask you this question, and just peripherally, because I watch this program called Numbers. Have you ever seen Numbers on television? Dr. ACZEL: No. FLATOW: This is a program that actually uses mathematics to solve crimes. You know, there are always crime-solving programs, but they use numbers and mathematics. And it's kind of interesting because they do generate an excitement of the kind that you are talking about, at least amongst the little group. Because you don't find very much excitement, you know, amongst students these days, you know, to study mathematics or to get them to study mathematics. Dr. ACZEL: Well, that's the problem with mathematics. What is mathematics? Mathematics is a very - it can be a very abstract structure up in the air here that has no real connection directly with the real world. Although, to argue that is to miss how mathematics developed. Of course, the ancient Greeks thought of it as a very abstract discipline. They called it geometry, and much of it was geometry. And they worked on theorems. They had no applications in the real world. But of course the calculus was developed by Newton and at the same time by Leibniz in Germany - he was actually living in Paris at that time - as a way of solving problems of the real world. And that's the beauty of the calculus. And there are other areas in mathematics that have very strong applications. 
So when you find amazing applications, that makes mathematics very exciting. And in fact Bourbaki did have some connections with the real world, despite the fact that all said their pure mathematicians with no interest in the world around him. FLATOW: 1-800-989-8255. Let's get a phone call or two. Mike in Kansas City, Kansas. Hi, Mike. MIKE (Caller): Hi. Hey, I've got and survived college calculus - and actually even use it every now and then - but I don't really remember how I learned the basic math and I don't quite understand what you mean by talking about formulas versus sets. And how do people learn from the beginning? FLATOW: When you talk about the - when you talked about they brought rigorousness to mathematics what do you mean by that? MIKE: Right. FLATOW: Until the late (unintelligible) Dr. ACZEL: Well, what I mean is there is a substructure behind mathematics that tells you how to do a proof, a mathematical proof. Now let me try to answer the question. When you're talking about calculus, you're talking about a certain function. For example, you're taking the derivative or constructing the integral of that function. For example, x-squared, the integral would be x-cubed over three, the indefinite integral. So that's what you would be doing in calculus. There are no sets here. We're talking about a function. MIKE: Right. Dr. ACZEL: And you're finding the integral or the derivatives, which is the opposite of finding an integral. So when you do that there's rarely any idea of a set behind it. But when you start doing mathematics, you're talking about open sets and closed sets. FLATOW: Sets means a group of numbers or range. Dr. ACZEL: Yes. Exactly. Now in the case of calculus, usually it would be talking about an interval of numbers as the sets. It could be an open interval from zero to five, not including zero and MIKE: Right. Dr. ACZEL: Or it could be the closed interval, which means including the endpoints. 
So here you have the set as a set of numbers on which you are operating. You are trying to find the integral, the derivative of a certain range of numbers. That range of numbers is the set. So these sets sort of underlie the calculus that's above you. FLATOW: I understand now my calc 101 class 35 years ago. It was 8 o'clock in the morning at Buffalo. It's not easy to study calculus. That first day when (unintelligible) professor wrote a big thing of sets down - I couldn't understand what it had to do with calculus. Now, it's forty years later I figured - I'm glad you came today. Dr. ACZEL: Okay. FLATOW: 1-800-989-8255 is our number. Let's go to the phones. Let's go to Joe in Oxford, Ohio. Hi, Joe. JOE (Caller): Hello. I have a comment about Bourbaki that you might be interested in hearing. I had Zorn for a course in the late '50s. Zorn is the man from whom Zorn's... FLATOW: Zorn's lemma, right. JOE: Well, anyway, Zorn had us use for a text Bourbaki's set theory. FLATOW: Wow. JOE: It was written in French. Zorn is a - is a German and he spoke English - or German with English words in it, and it was hard to understand what was going on. And he was just as abstract as he could possibly be. He would never tell you anything. He would write stuff on the board that didn't make any sense and you had to scramble to try to understand it. It was a beautiful course. I don't think I ever had a course that I enjoyed any more. Just struggling all the time to try to make some sense of it and in the - as a result you learned a great deal of mathematics. It was FLATOW: Let me just remind everybody that this is TALK OF THE NATION: SCIENCE FRIDAY from NPR News. I'm Ira Flatow talking with Amir Aczel, author of The Artist and the Mathematician. Amir, what do you react to... Dr. ACZEL: I'm not surprised that it was abstract because Zorn's lemma is a very abstract lemma in mathematics... JOE: Yes. Dr. ACZEL: ...in the foundations of mathematics. JOE: Yes.
This is a - such a difficult course. FLATOW: Did you have - Joe, did you have to teach yourself this basically, then? JOE: Well, yeah. A lot of it, yeah. It was very - you just scrambled all the time trying to guess what was going on. But that was - the best part of it was trying to make some sense out of the thing. FLATOW: Yeah. JOE: And as a result, you learned a great deal about yourself and about the material. FLATOW: Thanks for calling. Have a good weekend. JOE: Thank you. FLATOW: You said that there were some practical things that came out of this... Dr. ACZEL: Yes. FLATOW: ...as Bourbaki. Can you give us - well, I assume... Dr. ACZEL: Oh, yes. I have a wonderful story that's my favorite story in the whole book. FLATOW: Well, we've got to hear your favorite story. Dr. ACZEL: This happened in New York in 1943. Andre Weil came here, by the way, after avoiding being executed. He had to be deported back to France from Sweden and Britain. FLATOW: Wow. Dr. ACZEL: And he - they put him - he was supposed to serve in the army, which is why he was in trouble, as an officer but he became a private to avoid greater punishment than that. He sort of escaped to Britain with the rest of the troops, was repatriated to France and ended up in New York, like many Jewish refugees during the war. So he was here in New York. And Claude Levi-Strauss, the famous anthropologist was here too; also being Jewish, also fleeing the Nazis, who lived here in New York and worked at the - the New School. FLATOW: Right. Dr. ACZEL: For Social Research. And he was working on a very interesting problem about Australian aborigines, and I actually went to Australia in part to research that story. The tribes of aborigines - they live under amazing laws that go back perhaps 50,000 years because Australian aborigines supposedly came 50,000 years ago to Australia. And their societies haven't really changed. They are contiguous, they remain there and they're descendants of descendants and so on.
And the rules are very strange. You must marry your father's sister's daughters if you're a guy, if such a person exists. And you are not allowed to marry your mother's brother's daughter. So these are cross cousins, one is taboo and the other one is a must-marry. And I actually interviewed the woman who - a white woman who lived among these tribes for a while, and she said people who are taboo, you're not even allowed to look at them. And everybody in the tribe knows who is taboo and who is must-marry. So are - and people - other people are sort of neutral. FLATOW: Right. Dr. ACZEL: So Claude Levi-Strauss was working on the very beginnings of structural anthropology and trying to solve the mystery of these marriages. Why? Why do you have these rules? FLATOW: Right. Dr. ACZEL: And what do they tell you about the society that has such rules? Is it one society, or is it really several groups living together who never intermarry? And he couldn't solve it in any way and he realized that mathematics could give him the answer. He came to a person named Jacques Hadamard, who was a very famous French mathematician. He was very - he was rather old at the time, and also Jewish, was also escaping Europe for New York. And he came to him, and Hadamard looks at him in a very typically French way. He says mathematics has four operations. He meant addition, subtraction, multiplication, division. Of course there's exponentiation, too. FLATOW: I got 45 seconds, Amir... Dr. ACZEL: Sure. And then he says mathematic - I'm sorry - marriage is not one of these operations. But Weil solved his problem using abstract algebra. He used group theory. FLATOW: Wow. Dr. ACZEL: A very abstract area to solve a practical problem of Australian aboriginal marriage laws. FLATOW: That's a great - and it's in your book. Dr. ACZEL: Thank you. Yes. FLATOW: If you want to read the book, I highly recommend it.
It's The Artist and the Mathematician: The Story of Nicolas Bourbaki, the Genius Mathematician Who Never Existed. It's been my pleasure to have Amir Aczel back here with us on SCIENCE FRIDAY. Good luck to you. Dr. ACZEL: Oh, thanks. FLATOW: Thank you for coming on and being with us today. Dr. ACZEL: Sure, I do. FLATOW: Have a great weekend. We'll see you next week. I'm Ira Flatow in New York.
Patent application title: DETECTION SYSTEM AND SIGNAL PROCESSING METHOD THEREOF

A signal processing method is adapted for dealing with a plurality of vector matrixes to detect the image of a predetermined range, the vector matrix data being generated by reflecting a plurality of ultrasonic beams in the predetermined range. The signal processing method of the present invention sums all vector matrix data in a predetermined time interval so as to generate a total vector matrix. In addition, a correlation matrix is obtained through the total vector matrix multiplied by a transposed total vector matrix, and a weight value is obtained according to the inversion correlation matrix. Then, a weighting operation is performed for the vector matrix data in the predetermined time interval according to the weight value, so as to obtain a weighting operation result for performing an image synthesis procedure.
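A minimal NumPy sketch of the pipeline the abstract describes. The variable names, the diagonal-loading constant `delta` (needed so the rank-1 matrix is invertible), the all-ones steering vector `a`, and applying the weight per snapshot are all assumptions of this sketch, not details from the source:

```python
import numpy as np

def weighted_sum(snapshots, a, delta=1e-2):
    """Sketch of the abstract's steps: sum the per-sample channel vectors
    over the interval, form a correlation matrix from the summed vector,
    invert it, derive a minimum-variance-style weight, and apply that
    weight to every snapshot in the interval."""
    y = snapshots.sum(axis=0)                           # total vector over the interval
    R = np.outer(y, y.conj()) + delta * np.eye(len(y))  # correlation matrix (loaded)
    R_inv = np.linalg.inv(R)
    w = (R_inv @ a) / (a.conj() @ R_inv @ a)            # weight value
    return np.array([w.conj() @ x for x in snapshots])  # weighted outputs

# toy example: 4 channels, 5 time samples, uniform steering vector
rng = np.random.default_rng(0)
snaps = rng.standard_normal((5, 4))
a = np.ones(4)
out = weighted_sum(snaps, a)
```

By construction the weight satisfies a^H w = 1, so a signal matching the steering vector passes through undistorted.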
1. A detection system, comprising:
an ultrasonic module including a plurality of ultrasonic units arranged in array, the plurality of ultrasonic units continuously emitting a plurality of ultrasonic beams in a predetermined range;
a plurality of receiving units respectively receiving reflected ultrasonic beams and generating a plurality of channel signals;
a plurality of analog-digital converters respectively converting the channel signals into digital data so as to generate vector matrix data;
a processing module obtaining total vector matrix data by summing the vector matrix data received in a predetermined time interval, and further obtaining a correction matrix through the total vector matrix data multiplied by the transposed total vector matrix data, the processing module further performing an inversion operation of the correction matrix and obtaining a weight value according to the inversion correction matrix, so that a weighting operation is performed for the vector matrix data in the predetermined time interval according to the weight value, so as to obtain a weighting operation result; and
an image synthesis unit obtaining an image message according to the weighting operation result.

2. The detection system according to claim 1, wherein the processing module further comprises:
a weighting operation unit used for generating the correction matrix and the weight value;
a parameter operation unit generating a relative parameter function according to the vector matrix data; and
a multiplier coupled to the weighting operation unit and the parameter operation unit, so that the weighting operation of the vector matrix data is performed through the relative parameter function multiplied by the weight value to obtain the weighting operation result.
3. The detection system according to claim 1, further comprising a plurality of amplifiers respectively coupled to the plurality of receiving units, for amplifying the channel signals and transmitting the amplified channel signals to the analog-digital converters.

4. The detection system according to claim 1, further comprising:
a plurality of demodulators respectively coupled to the plurality of analog-digital converters, for demodulating the digital data;
a plurality of first buffers respectively coupled to the plurality of demodulators, for receiving demodulated digital data; and
a plurality of devices for time delay and phase rotation respectively coupled to the plurality of first buffers, for performing time delay and phase rotation for the demodulated digital data, and further generating the vector matrix data.

5. The detection system according to claim 1, further comprising:
a second buffer coupled to the processing module, for receiving the weighting operation value; and
a low-pass filter coupled to the second buffer, for performing a low-pass filtering procedure of the weighting operation value to filter the noise, and transmitting the weighting operation value after low-pass filtering to the image synthesis unit.

6. A signal processing method, adapted for dealing with a plurality of vector matrixes to detect the image of a predetermined range, the vector matrix data being generated by reflecting a plurality of ultrasonic beams in the predetermined range, the signal processing method comprising:
summing all vector matrix data in a predetermined time interval so as to generate a total correlation matrix;
obtaining a weight value according to an inversion correlation matrix; and
performing a weighting operation for the vector matrix data in the predetermined time interval according to the weight value, so as to obtain a weighting operation result for performing an image combination procedure.
7. The signal processing method according to claim 6, wherein the step of generating the inversion correction matrix comprises performing the following operation:

$$\left(y(t)\,y^{H}(t)+\delta I\right)^{-1}=\frac{1}{\delta}I-\frac{\dfrac{1}{\delta^{2}}\,y(t)\,y^{H}(t)}{1+\dfrac{1}{\delta}\,y^{H}(t)\,y(t)}$$

wherein y(t) is the total vector matrix, δ is a constant, and I is a unit matrix.

8. The signal processing method according to claim 6, wherein the step of generating the weight value comprises performing the following operation:

$$\frac{\hat{R}_{XX}^{-1}(t)\,a}{a^{H}\hat{R}_{XX}(t)\,a}$$

wherein $\hat{R}_{XX}(t)$ is the correction matrix, and a is a unit matrix.

9. The signal processing method according to claim 6, wherein the step of obtaining the weighting operation result is the weight value multiplied by a flexible parameter function, and the step of obtaining the flexible parameter function comprises performing the following operation:

$$\left(\frac{\left|\sum_{n=0}^{N-1}x_{n}(t)\right|}{\sqrt{N\sum_{n=0}^{N-1}\left|x_{n}(t)\right|^{2}}}\right)^{m}$$

wherein x_n(t) is a vector function corresponding to each of the reflected ultrasonic beams, N is a total number of the ultrasonic beams, and m is greater than 0 and less than or equal to

FIELD OF THE INVENTION

[0001] The present invention relates to signal processing methods, and more particularly to a signal processing method used in an ultrasonic imaging system.

BACKGROUND OF THE INVENTION

[0002] Ultrasonic wave is usually a mechanical vibration wave generated by a piezoelectric crystal in an electric field. A mechanical vibration wave having a frequency above 20 kHz is commonly identified as an ultrasonic wave. At present, ultrasonic wave is mainly used as a tool for testing, measuring or control, such as measuring thickness, measuring distance, medical treatment, medical diagnosis or ultrasonic imaging.
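The identity in claim 7 is the Sherman-Morrison formula specialized to a rank-1 update of the loaded identity δI: it replaces a full matrix inversion with vector products. A quick numerical check with made-up values for y and δ:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # total vector y(t)
delta = 0.5                                               # loading constant
I = np.eye(4)

# left-hand side: direct inversion of y y^H + delta*I
lhs = np.linalg.inv(np.outer(y, y.conj()) + delta * I)

# right-hand side: claim 7's closed form (Sherman-Morrison, rank-1 update)
rhs = I / delta - (np.outer(y, y.conj()) / delta**2) / (1 + (y.conj() @ y) / delta)

assert np.allclose(lhs, rhs)  # the identity holds to machine precision
```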
In addition, ultrasonic waves can also be used for processing materials so as to change, or speed up changes in, certain physical, chemical and biological characteristics or states, for example, using the cavitation effect of ultrasonic waves in liquid for machining, cleaning, soldering, emulsifying, smashing, degassing, catalyzing chemical reactions or medical treatment. In an ultrasonic imaging system of the prior art, when a vector matrix generated by a reflected ultrasonic beam is received, a correlation matrix is obtained by multiplying the vector matrix by its transposed vector matrix. Then, all correlation matrices obtained in a predetermined time interval are summed so as to generate a total correlation matrix. A weight value, used as a parameter for subsequent image synthesis, is obtained according to an inversion operation of the total correlation matrix. A correlation matrix operation is always needed after each vector matrix is obtained; thus, the processing time is prolonged and the operational complexity is high. Besides, because the total correlation matrix is very large, the complexity of the matrix inversion operation is also high. Therefore, the complexity of the whole system is raised. SUMMARY OF THE INVENTION [0005] The present invention provides a detection system for detecting image information in a predetermined range. The present invention also provides a signal processing method used in an ultrasonic imaging system for simplifying system operation. The present invention provides a detection system, which comprises an ultrasonic module, a plurality of receiving units, a plurality of analog-digital converters, a processing module and an image synthesis unit. The ultrasonic module comprises a plurality of ultrasonic units arranged in an array, and the plurality of ultrasonic units continuously emit a plurality of ultrasonic beams in a predetermined range.
When the ultrasonic beams are reflected in the predetermined range and received by the plurality of receiving units, the plurality of receiving units respectively generate a plurality of channel signals. Each of the channel signals is converted to digital data by a corresponding analog-digital converter, so as to generate vector matrix data. The processing module obtains total vector matrix data by summing the vector matrix data received in a predetermined time interval, and further obtains a correlation matrix by multiplying the total vector matrix data by the transposed total vector matrix data. The processing module further performs an inversion operation of the correlation matrix and obtains a weight value according to the inversion correlation matrix, so that a weighting operation is performed for the vector matrix data in the predetermined time interval according to the weight value, so as to obtain a weighting operation result. In an embodiment of the present invention, the processing module includes a weighting operation unit, a parameter operation unit and a multiplier. The weighting operation unit generates the correlation matrix and the weight value according to the vector matrix data. The parameter operation unit generates a relative parameter function according to the vector matrix data. In addition, the multiplier is coupled to the weighting operation unit and the parameter operation unit, so that the weighting operation for the vector matrix data is performed by multiplying the relative parameter function by the weight value to obtain the weighting operation result. In another aspect, the present invention also provides a signal processing method adapted for dealing with a plurality of vector matrices to detect the image of a predetermined range, wherein the vector matrix data are generated by reflecting a plurality of ultrasonic beams in the predetermined range.
The signal processing method of the present invention includes summing all vector matrix data in a predetermined time interval so as to generate a total vector matrix. In addition, a correlation matrix is obtained by multiplying the total vector matrix by a transposed total vector matrix, and a weight value is obtained according to the inversion correlation matrix. Then, a weighting operation is performed for the vector matrix data in the predetermined time interval according to the weight value, so as to obtain a weighting operation result for performing an image combination procedure. In the present invention, the processing module first obtains the total vector matrix, and only then calculates the correlation matrix and the inversion correlation matrix. Thus, the operational complexity of the system can be obviously simplified. To make the above and other objectives, features, and advantages of the present invention better understood, preferred embodiments are described in detail below together with the accompanying drawings. BRIEF DESCRIPTION OF THE DRAWINGS [0012] The present invention will become more readily apparent to those ordinarily skilled in the art after reviewing the following detailed description and accompanying drawings, in which: FIG. 1 is a block diagram of a detection system according to a preferred embodiment of the present invention; and FIG. 2 is a block diagram of the processing module according to a preferred embodiment of the present invention. DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS [0015] The present invention will now be described more specifically with reference to the following embodiments. It is to be noted that the following descriptions of preferred embodiments of this invention are presented herein for purpose of illustration and description only. It is not intended to be exhaustive or to be limited to the precise form disclosed. FIG. 1 shows a block diagram of a detection system according to a preferred embodiment of the present invention.
Referring to FIG. 1, in the embodiment, a detection system 100 includes an ultrasonic module 102 having a number N of ultrasonic units, labeled 104, 106, 108 and 110 and arranged in an array, wherein N is a positive integer greater than or equal to 1. In the embodiment, the ultrasonic units 104, 106, 108 and 110 emit a plurality of ultrasonic beams. Continuing to refer to FIG. 1, the detection system 100 further includes a signal receiving level 120, a signal processing level 130 and a back-end image synthesis level 140. The signal receiving level 120 includes a plurality of receiving units 122[0:N], a plurality of amplifiers 124[0:N] and a plurality of analog-digital converters (ADC) 126[0:N]. The plurality of receiving units 122[0:N] respectively receive the reflected ultrasonic beams in a predetermined range and generate a plurality of channel signals CH[0:N] to the plurality of amplifiers 124[0:N]. Then, the plurality of amplifiers 124[0:N] respectively amplify the received channel signals CH[0:N] and transmit the amplified channel signals CH[0:N] to the ADCs 126[0:N]. The ADCs 126[0:N] convert the amplified channel signals CH[0:N] into a plurality of digital data signals DATA[0:N], and transmit the plurality of digital data signals DATA[0:N] to the signal processing level 130. The signal processing level 130 includes a plurality of demodulators 132[0:N], a plurality of buffers 134[0:N], a plurality of devices for time delay and phase rotation 136[0:N] and a processing module 138. The plurality of demodulators 132[0:N] are respectively coupled to the plurality of ADCs 126[0:N], so as to receive and demodulate the digital data DATA[0:N], and further generate a plurality of demodulating signals De_MOD[0:N].
The plurality of demodulating signals De_MOD[0:N] are transmitted to the plurality of devices for time delay and phase rotation 136[0:N] through the plurality of buffers 134[0:N] for time delaying and phase rotating, further generating vector matrix data x(t). Then, the vector matrix data x(t) are transmitted to the processing module 138 to be processed. Particularly, in the embodiment, when the processing module 138 receives the vector matrix x(t), the processing module 138 does not immediately perform the correlation matrix operation, but first adds all vector matrix data x(t) in a predetermined time interval to generate a total vector matrix. FIG. 2 shows a block diagram of the processing module according to a preferred embodiment of the present invention. Referring to FIG. 2, in the embodiment, the processing module 208 includes a weighting operation unit 202, a parameter operation unit 204 and a multiplier 206. The weighting operation unit 202 is used for receiving the vector matrix data x(t) and summing the vector matrix data x(t) obtained in the predetermined time interval, so as to obtain a total vector matrix y(t), which is presented as: y(t) = Σ_{i=-K}^{K} x(t+i), wherein K is an integer. After obtaining the total vector matrix, a correlation matrix R̂_xx(t) is obtained by multiplying the total vector matrix by a transposed total vector matrix; the above description can be presented as the following equation: R̂_xx(t) = (Σ_{i=-K}^{K} x(t+i)) (Σ_{i=-K}^{K} x(t+i))^H + δI, wherein δ is a constant, and I is a unit matrix. The weighting operation unit 202 can perform an inversion operation of the correlation matrix R̂_xx(t) according to the following equation: R̂_xx^{-1}(t) = (1/δ)I - ((1/δ^2) y(t)y^H(t)) / (1 + (1/δ) y^H(t)y(t)). On the right-hand side of this equation, the denominator of the second term is a scalar constant; thus, the calculation of the whole equation is simple.
Besides, the weighting operation unit 202 can also be used for calculating a weight value W(t) according to the inversion correlation matrix R̂_xx^{-1}(t), which is presented as: W(t) = R̂_xx^{-1}(t) a / (a^H R̂_xx^{-1}(t) a), wherein a is a unit vector. Referring to FIG. 2 continuingly, in another aspect, the parameter operation unit 204 is used for receiving the vector matrix data x(t) and obtaining a flexible correction parameter function FCF(t), which is presented as: FCF(t) = ( |Σ_{n=0}^{N-1} x_n(t)|^2 / (N Σ_{n=0}^{N-1} |x_n(t)|^2) )^m, wherein m is suggested to be a value greater than 0 and less than or equal to 1. The output ends of the weighting operation unit 202 and the parameter operation unit 204 are coupled to the multiplier 206. Therefore, the multiplier 206 performs the weighting operation by multiplying the weight value W(t) by the flexible correction parameter function FCF(t); further, a weighting operation result M_Data is obtained and transmitted to the back-end image synthesis level 140, so as to perform an image synthesis procedure. The back-end image synthesis level 140 includes a buffer 142, a low-pass filter (LPF) 144 and an image synthesis unit 146. After the weighting operation result M_Data is transmitted to the back-end image synthesis level 140, it is first received by the buffer 142 and then output to the LPF 144 for low-pass filtering to filter out noise. The weighting operation result M_Data after low-pass filtering is transmitted to the image synthesis unit 146. Thus, the image synthesis unit 146 can obtain an image message IMG of the predetermined range. As stated above, in the present invention, the processing module first sums all the vector matrices in the predetermined time interval, and only then calculates the correlation matrix; thus, the operational complexity can be simplified.
Besides, the inversion correlation matrix obtained through the manner stated above is relatively simple; thus, the operational complexity of the system can be further simplified. While the invention has been described in terms of what is presently considered to be the most practical and preferred embodiments, it is to be understood that the invention need not be limited to the disclosed embodiment. On the contrary, it is intended to cover various modifications and similar arrangements included within the spirit and scope of the appended claims, which are to be accorded the broadest interpretation so as to encompass all such modifications and similar structures. Patent applications by An Yeu Wu, Taipei TW Patent applications by Pai-Chi Li, Taipei TW Patent applications by NATIONAL TAIWAN UNIVERSITY Patent applications in class With signal analyzing or mathematical processing
New Carrollton, MD Algebra 2 Tutor Find a New Carrollton, MD Algebra 2 Tutor ...My tutoring approach includes the following: (1) talk to students on a peer-level to better understand why they are experiencing difficulties in their subject; (2) work with students to improve the skills that will help them succeed; and (3) help students develop a sense of autonomy and empowerme... 17 Subjects: including algebra 2, reading, writing, biology ...I enjoy teaching and helping students understand math. I am able to tutor in areas of basic math through calculus. I worked for Sylvan Learning Centers of America for several years as a tutor of mathematics. 12 Subjects: including algebra 2, calculus, ASVAB, elementary science ...I would love an opportunity to work with you, your family or a friend to strengthen your/their language skills. I have enjoyed playing golf for my entire life (24 years). Last summer, I had a goal to break 80 and ended up shooting a 78. I consistently shoot in the low 80s and feel confident that I could improve the game of beginner golfers up to above-average golfers. 27 Subjects: including algebra 2, reading, Spanish, prealgebra ...My current job requires use of these in finite element analysis, free body diagram of forces, and decomposing forces in a given direction. I have a BS in mechanical engineering and took Algebra 1 & 2 in high school and differential equations and statistics in college. My current job requires use of algebra to manipulate equations for force calculation. 10 Subjects: including algebra 2, physics, calculus, geometry ...Because of my passion for Math, I find it fun to help others understand and appreciate it as well. For two of my college years, I tutored my fellow students in all levels of Algebra and Calculus. After graduation I returned to my hometown of Columbus, OH and continued to tutor privately until I moved to Bowie, MD in May 2013. 
8 Subjects: including algebra 2, calculus, geometry, statistics
Dynamical Systems Seminars Spring 2003 The Dynamical Systems seminar is held on Monday afternoon at 4:00 PM in MCS 149. Tea beforehand at 3:30 PM in MCS 153. • January 20: No seminar • January 27: Jason Ritt (MIT) Irregular Forcing of Phase Oscillators • February 3: Leonid Kalachev (University of Montana) On characterizing domains of attraction for stable solutions of quasi-linear parabolic equations • February 10: Gabriel Soto (University of Minnesota) An integrated model for calcium dynamics during synaptic transmission • February 17: No seminar • February 24: Bert Peletier (University of Leiden) Homoclinic, heteroclinic, and periodic orbits of fourth-order model equations such as the Swift-Hohenberg equation • March 3: Henk Broer (Groningen) Geometry of KAM tori in nearly integrable Hamiltonian systems • March 10: No seminar • March 17: David Cowan (Tufts University) Modeling a gas of hard non-spheres • March 24: Zbigniew Nitecki (Tufts University) Entropy and preimage sets • March 31: Boris Hasselblatt (Tufts University) Differentiability of the Hartman-Grobman linearization • April 7: Stefan Siegmund (University of Augsburg) Nonautonomous dynamical systems and inertial manifolds • April 14: J. Douglas Wright (Boston University) Higher order corrections to the KdV approximation for water waves • April 21: No seminar • April 28: Antonios Zagaris (Boston University)
Roslyn, NY Precalculus Tutor Find a Roslyn, NY Precalculus Tutor ...I will expect that they keep up with their assigned school homework in the subject and make their best attempt at it. If possible, I would like the weekly homework assignment or course syllabus in advance via email so i can be best prepared for our session. Please let me know if you have any fu... 11 Subjects: including precalculus, algebra 1, algebra 2, grammar ...Introduction of topics, detailed oriented explanation and problem solving till a level of confidence is achieved is my pattern. My current students have and are doing exceptionally well and most have been moved to advanced programs. Students leave my home only when they have gained a thorough u... 9 Subjects: including precalculus, calculus, algebra 1, algebra 2 ...Depending on the subject (and especially for computer subjects) before the first session, I will ask that we speak on the phone and will request the student send me via email information regarding materials we'll be going over (syllabus, homework assignments, past exams). Feel free to reach ou... 9 Subjects: including precalculus, algebra 1, algebra 2, trigonometry ...Took and completed the basic college introductory biology class and further completed genetics and animal physiology. Took general and organic chemistry at the University of Michigan. Further completed aquatic chemistry, thermodynamics and environmental chemistry classes. 12 Subjects: including precalculus, chemistry, calculus, physics ...In a classroom setting of approximately seven students, I provided students with additional problems and explained theories in order to enhance their understanding of the material. In addition to MERRP, I also tutored physics, chemistry, and mathematics for UIC campus housing. I am qualified to... 
13 Subjects: including precalculus, chemistry, physics, calculus
Defect Report #319 Previous Defect Report < - > Next Defect Report Submitter: Fred Tydeman (USA) Submission Date: 2005-04-04 Source: WG 14 Reference Document: ISO/IEC WG14 N1094 Version: 1.3 Date: 2006-04-04 Subject: printf("%a", 1.0) and trailing zeros Given that FLT_RADIX is 2, what is the output of: double x = 1.0; printf("%a", x); In particular, are trailing zeros removed or kept? Some choices that occur to me are: 1. use the smallest precision for an exact representation of this particular value; in effect, remove trailing zeros. 2. use the smallest precision for an exact representation of all values of this type; in effect, keep trailing zeros. 3. use the smallest precision for an exact representation of all values of all floating-point types; in effect, promote to long double and keep trailing zeros. 4. implementation defined. 5. unspecified. 6. something else. Some implementations that I have seen do 1, others do 2, and one does both 1 and 2 (value and format dependent). I believe choice 1 is the intended behaviour. Another way to look at this is: should %a act like %e (keep trailing zeros) or %g (remove trailing zeros) with respect to trailing zeros? Should this behaviour depend upon the user specifying a precision? Some parts of 7.19.6.1 The fprintf function that are relevant are: Paragraph 6 on the '#' flag has: "For g and G conversions, trailing zeros are not removed from the result." Paragraph 8, section e,E, has: "... if the precision is zero and the # flag is not specified, no decimal-point character appears." Paragraph 8, section g,G, has: "Trailing zeros are removed from the fractional portion of the result unless the # flag is specified; a decimal-point character appears only if it is followed by a digit." Paragraph 8, section a,A, has: "... if the precision is missing and FLT_RADIX is a power of 2, then the precision is sufficient for an exact representation of the value; ..." Paragraph 8, section a,A, has: "...
if the precision is missing and FLT_RADIX is not a power of 2, then the precision is sufficient to distinguish values of type double, except that trailing zeros may be omitted; ..." There are corresponding sections for the wide character versions of the functions in 7.24.2.1 The fwprintf function. Suggested Technical Corrigendum Change 7.19.6.1 The fprintf function sections as follows. Paragraph 6 on the '#' flag, change the above to: "For a, A, g and G conversions, trailing zeros are not removed from the result." Paragraph 8, section a,A, change the above to: "... if the precision is missing and FLT_RADIX is a power of 2, then the precision is the minimum sufficient for an exact representation of all values of type double (removal of trailing zeros depends upon the # flag); ..." Paragraph 8, section a,A, change the above to: "... if the precision is missing and FLT_RADIX is not a power of 2, then the precision is the minimum sufficient to distinguish values of type double (removal of trailing zeros depends upon the # flag); ..." Also, update the corresponding sections for the wide character versions of the functions in 7.24.2.1 The fwprintf function. Add to the Rationale in section 7.19.6.1: %a (without an explicit precision) acts like %g (removes trailing zeros), while %.*a (with an explicit precision) acts like %e (keeps trailing zeros). This was done to allow two forms of behaviour while using only one conversion specifier. Committee Response The Committee does not believe this is a defect, however the Committee may consider establishing a rule for removing or not removing trailing zeros at some point in the future.
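The ambiguity described in the report can be probed with a short program. The sketch below assumes a hosted C99 environment; the exact digit strings produced by "%a" without a precision are exactly what is in question, so the only portable checks are the round trip (which is exact when FLT_RADIX is a power of 2 and the precision is missing) and the explicit-precision case, where trailing zeros are necessarily kept.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Format x with "%a" (precision missing, so the representation is
 * exact for FLT_RADIX a power of 2) and check that strtod recovers
 * the value bit-for-bit.  Whether trailing zeros appear in the
 * string is implementation-defined territory -- the report's point. */
static int hex_roundtrip_ok(double x)
{
    char buf[64];
    snprintf(buf, sizeof buf, "%a", x);
    return strtod(buf, NULL) == x;
}

/* With an explicit precision, trailing zeros are kept: for 1.0 the
 * three fraction digits must all be 0, because the leading hex
 * digit d must satisfy 1.0 == d * 2^e for some e, so d is a power
 * of two and the fraction is exactly ".000". */
static int explicit_precision_keeps_zeros(void)
{
    char buf[64];
    snprintf(buf, sizeof buf, "%.3a", 1.0);
    return strstr(buf, ".000") != NULL;
}
```

Note that the round-trip check would not be safe with an explicit short precision (e.g. "%.3a" of 0.1 is rounded to three hex digits and no longer parses back exactly), which is why the two checks are kept separate.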
"Points" in algebraic geometry: Why shift from m-Spec to Spec? Why were algebraic geometers in the 19th century thinking of m-Spec as the set of points of an affine variety associated to the ring whereas, sometime in the middle of the 20th century, people started to think Spec was more appropriate as the "set of points"? What are the advantages of the Spec approach? Specific theorems? 8 I don't think that in the 19th century geometers were really thinking of m-Spec. – Emerton Dec 17 '10 at 2:06 6 Answers The basic reason in my mind for using Spec is because it makes the category of affine schemes equivalent to the category of commutative rings. This means that if you get confused about what's going on geometrically (which you will), you can fall back to working with the algebra. And if you have some awesome results in commutative algebra, they automagically become results in geometry. There's another reason that Spec is more natural. First, I need to convince you that any kind of geometry should be done in LRS, the category of locally-ringed spaces. A locally-ringed space is a topological space with a sheaf of rings ("the sheaf of (admissible) functions on the space") such that the stalks are local rings. Why should the stalks be local rings? Because even if you generalize (or specialize) your notion of a function, you want to have the notion of a function vanishing at a point, and those functions that vanish at a point should be a very special (read: unique maximal) ideal in the stalk. Alternatively, the values of functions at points should be elements of fields; if the value is an element of some other kind of ring, then you're not really looking at a point. Suppose you believe that geometry should be done in LRS. Then there is a very natural functor LRS→Ring given by (X,O[X])→O[X](X). It turns out that this functor has an adjoint: our hero Spec.
For any locally ringed space X and any ring A, we have Hom[LRS](X,Spec(A))=Hom[Ring](A,O[X](X)) ... it may look a little funny because you're not used to contravariant functors being adjoints. This is another reason that spaces of the form Spec(A) (rather than mSpec(A)) are very special. Exercise: what if you just worked in RS, the category of ringed spaces? What would your special collection of spaces be? Hint: it's really boring. Edit: Since there doesn't seem to be much interest in my exercise, I'll just post the solution. The adjoint to the functor RS→Ring which takes a ringed space to global sections of the structure sheaf is the functor which takes a ring to the one point topological space, with structure sheaf equal to the ring. 1 This wasn't my question, but this answer was uncommonly informative. Thanks! – Jeremy West Dec 16 '10 at 21:11 1 I'm impressed by the intrinsicality of this definition of affine schemes. It seems to me that it raises the following naive question. Consider the category of locally "Lie algebra"ed spaces. Does its global section functor have an adjoint? – Cédric Bounya Apr 14 '11 at 11:02 @Cédric: to get an interesting result, you have to answer the question, "what is a 'local Lie algebra'?" If you don't impose some condition on the stalks of the sheaf of Lie algebras, then the adjoint will simply send a Lie algebra to the one point space with that Lie algebra on it. In the case of rings, we could force the adjoint to be interesting by imposing the condition that the stalks are local rings. – Anton Geraschenko Apr 14 '11 at 15:34 A Lie algebra is local if it has a unique maximal ideal seems a reasonable definition. – Cédric Bounya Apr 14 '11 at 18:46 Atiyah-MacDonald, exercise 1.26, mentions one advantage of spec over max-spec: Given a map of rings A -> B, you get a map spec B -> spec A, but not necessarily a map max-spec B -> max-spec A, since the inverse image of a maximal ideal need not be maximal.
1 For example, consider the inclusion of k[x] into its fraction field k(x). – Anton Geraschenko Oct 16 '09 at 16:05 @Anton Hartshorne Exercise 2.3.2 works this out in detail. – David Zureick-Brown♦ Oct 16 '09 at 16:28 In general, any domain R (not a field) injects into its field of fractions, F, wherein (0) is a maximal ideal but isn't so in R. – Abhishek Parab Feb 9 '10 at 3:33 There was some discussion about this (and other things) at the secret blogging seminar fairly recently: http://sbseminar.wordpress.com/2009/08/06/algebraic-geometry-without-prime-ideals/ For starters it is worth noting that in the case of Jacobson rings (and more generally Jacobson schemes) (http://en.wikipedia.org/wiki/Jacobson_ring for instance has a definition) the spectrum of maximal points is equivalent to the full spectrum. However, more generally this is not the case, and working with non-closed points allows one more flexibility by using arguments relying on generic points, for example. Another example is the common technique of reducing arguments to local statements; in general local rings cannot be Jacobson (in other words one should not view a non-artinian local ring as just a closed point). An example of what can go wrong with the spectrum of closed points is given by the following http://math.berkeley.edu/~ogus/Math%20_256A--08/bigval.pdf where a quasi-affine scheme with no closed point is constructed. The reason why $\operatorname{Spec} A$ is an important notion is because it solves the following problem for a commutative ring $A$: Find a local ring $\mathcal O$ together with a localisation morphism $A \to \mathcal O$ such that every other localisation morphism $A \to B$ to a local ring $B$ factors as a local morphism over $A \to \mathcal O$, i.e. one looks for a kind of universal localisation of $A$.
Stated as above, this problem has no solution, at least as long as one is not willing to leave the world of rings in the category of sets. However it has a solution in the following more general setting: There is a topos $X$ endowed with a local ring object $\mathcal O$ and a localisation morphism $A \to \mathcal O$ such that for every other topos $Y$ together with a local ring object $B$ and a localisation morphism $A \to B$ there is a pair of a geometric morphism $f\colon Y \to X$ and a morphism $f^* \mathcal O \to B$ of local rings, which is unique up to a unique natural isomorphism, such that $A \to B$ is given by the composition of $f^* \mathcal O \to B$ and $A \to f^* \mathcal O$. In fact, the solution to this problem is the topos $X$ of sheaves on $\operatorname{Spec} A$ together with the structure sheaf $\mathcal O_X$ as a local ring object. Now if you replace $\operatorname{Spec} A$ by the max-spectrum, the locally ringed sheaf topos you get will not solve the universal localisation problem in general. This means that the usual definition of $\operatorname{Spec} A$ with prime ideals is a correct one (as long as one is working in classical logic with the axiom of choice) but it does not mean that it is the only correct definition: You can, for example, replace $\operatorname{Spec}$ by any other topological space or, more generally, by any other site such that the sheaf topos over it is still equivalent to $X$. See also here: mathoverflow.net/questions/8204/… – Peter Arndt Dec 16 '10 at 21:54 1 I like this point of view of Spec, but on the other hand, I think it is very far from motivating for, say, an algebraic geometer who works with closed points all the time. It would be great if you added the relevance of this universal property. – Martin Brandenburg Dec 16 '10 at 22:17 One example: you need generic points for base change to work well. For instance, there is a base change map A^1 / C \to A^1 / Q.
The non-algebraic points map to the generic point. (This is also a problem in rigid analytic geometry, where one does use Max Spec, and one reason why Berkovich's more general theory of analytic spaces is useful.)
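The failure of Max Spec that the first comment illustrates can be made completely explicit (a standard observation spelled out here, not taken from any answer above): prime ideals always pull back along ring maps, while maximal ideals need not.

```latex
% Under the inclusion \iota : k[x] \hookrightarrow k(x), the unique maximal
% ideal of the field k(x) is (0), and it pulls back to (0) \subseteq k[x],
% which is prime but not maximal. So A \mapsto \operatorname{MaxSpec} A is
% not functorial, whereas \iota^{-1}(\mathfrak p) is prime for every ring
% map \iota and every prime \mathfrak p.
\operatorname{MaxSpec} k(x) = \{(0)\}, \qquad
\iota^{-1}\bigl((0)\bigr) = (0) \notin \operatorname{MaxSpec} k[x].
```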
spheres/plane intersection [Archive] - OpenGL Discussion and Help Forums

I have been trying to write some functions to handle sphere/polygon collision and am nearly finished, but now I'm stuck again. The classifySphere(...) function shown below should tell me whether the sphere is in front of the plane, behind the plane, or intersects the plane. This is where the problem lies, however, because it only intersects the plane if the centre of the sphere is exactly on the plane. I have tested values for the distance variable and they are in the region of 10000x too high/low. Why is that? I'm sure it's a simple thing but I can't find it.... Anyway, here is the code:

    SPHERE_POSITION classifySphere( Vector3d& centre, Vector3d& planeNormal,
                                    Vector3d& pointOnPlane, float radius, float &distance )
    {
        // distance of polygon plane from origin
        float distFromOrigin = float( planeDistance( planeNormal, pointOnPlane ) );

        // distance of sphere centre to polygon plane
        distance = dotProduct( planeNormal, centre ) + distFromOrigin;

        // cout << "distance : " << distance << endl;
        // cout << "radius : " << radius << endl;

        if( absolute( distance ) < radius )
            return INTERSECTS;
        else if( distance >= radius )
            return IN_FRONT;

        return BEHIND;
    }

Edit: Why does my code always get posted at varying font sizes?

[This message has been edited by endo (edited 05-01-2002).]
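The classification the post describes can be sanity-checked with a self-contained version (a sketch, not the poster's code — `Vector3d`, `planeDistance`, and `dotProduct` are his own helpers). A common cause of signed distances that come out wildly too large or too small is a plane normal that was never normalized: the signed distance scales with the normal's length.

```python
import math

def classify_sphere(centre, normal, point_on_plane, radius):
    """Classify a sphere against a plane: 'intersects', 'in_front', or 'behind'.

    The normal is normalized first; with an un-normalized normal the signed
    distance is scaled by |normal|, which can make results look off by a
    large constant factor (as in the post).
    """
    length = math.sqrt(sum(c * c for c in normal))
    n = [c / length for c in normal]
    # Signed distance from the sphere centre to the plane through point_on_plane.
    distance = sum(nc * (cc - pc) for nc, cc, pc in zip(n, centre, point_on_plane))
    if abs(distance) < radius:
        return "intersects"
    return "in_front" if distance >= radius else "behind"
```

With `normal = (0, 0, 10)` and the sphere centre at `(0, 0, 5)`, skipping the normalization step would report a distance of 50 instead of 5 — the same kind of constant-factor scaling the post complains about.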
New Carrollton, MD Algebra 2 Tutor

Find a New Carrollton, MD Algebra 2 Tutor

...My tutoring approach includes the following: (1) talk to students on a peer level to better understand why they are experiencing difficulties in their subject; (2) work with students to improve the skills that will help them succeed; and (3) help students develop a sense of autonomy and empowerme...
17 Subjects: including algebra 2, reading, writing, biology

...I enjoy teaching and helping students understand math. I am able to tutor in areas of basic math through calculus. I worked for Sylvan Learning Centers of America for several years as a tutor of mathematics.
12 Subjects: including algebra 2, calculus, ASVAB, elementary science

...I would love an opportunity to work with you, your family or a friend to strengthen your/their language skills. I have enjoyed playing golf for my entire life (24 years). Last summer, I had a goal to break 80 and ended up shooting a 78. I consistently shoot in the low 80s and feel confident that I could improve the game of beginner golfers up to above-average golfers.
27 Subjects: including algebra 2, reading, Spanish, prealgebra

...My current job requires use of these in finite element analysis, free body diagrams of forces, and decomposing forces in a given direction. I have a BS in mechanical engineering and took Algebra 1 & 2 in high school and differential equations and statistics in college. My current job requires use of algebra to manipulate equations for force calculation.
10 Subjects: including algebra 2, physics, calculus, geometry

...Because of my passion for Math, I find it fun to help others understand and appreciate it as well. For two of my college years, I tutored my fellow students in all levels of Algebra and Calculus. After graduation I returned to my hometown of Columbus, OH and continued to tutor privately until I moved to Bowie, MD in May 2013.
8 Subjects: including algebra 2, calculus, geometry, statistics
Certain central extensions of simply connected simple algebraic groups

An offbeat question involving Milnor's $K_2$ has come up recently. Start with an algebraically closed field $F$ (perhaps required to be of characteristic 0). Let $G$ be a connected, simply connected simple algebraic group over $F$, for instance $\mathrm{SL}_n(F)$, maybe of rank $\neq 1,2$ to be on the safe side. Then consider a linear group $H \subset \mathrm{GL}(V)$ with $V$ finite dimensional over $F$, together with an epimorphism $\pi:H \rightarrow G$ of abstract groups whose kernel lies in the center of $H$. (EDIT: Further assume that $H$ is equal to its derived group. I neglected to include this crucial condition in the question originally posed to me.) Are these conditions enough to imply that $\pi$ has trivial kernel?

Note that when $G$ is a special linear group of rank at least 2, its abstract universal central extension is the Steinberg group (with generators and relations specified by Matsumoto), and the kernel of the resulting map is $K_2(F)$. Typically this is an infinite group, uncountable if $F$ is uncountable (Milnor). However, if we add the assumption that $H$ acts irreducibly on $V$, then by Schur's Lemma $\pi$ induces a sort of reverse projective representation of $G$ with image equal to the image of $H$ in $\mathrm{PGL}(V)$. Older work of Steinberg would allow us to lift this projective representation to an ordinary one, thus mapping $G$ onto $H$. (See sections 6, 7 of Steinberg's Yale lectures here.) However, it's apparently undesirable in the original question to make any assumption about complete reducibility of the action of $H$. So I'm unsure what can be said, given only that $H$ is a linear group.

gr.group-theory algebraic-groups algebraic-k-theory

What is to prevent us from taking $H= Z \times G$ where $Z$ is an Abelian linear algebraic group over $F$? – Aakumadula Jul 21 '13 at 17:48

@Aakumadula: Quite right.
I left out one crucial part of the original question, that $H$ should be perfect (equal to its derived group). (This assumption is the abstract version of "connected" in the setting of central extensions.) – Jim Humphreys Jul 21 '13 at 19:59

Would it be stupid to hope all this is controlled by $H^2(G,F)$ and $H^1(G,F)$? Which are trivial, even in positive characteristic ... That book of Serre's where he thinks about extensions of algebraic groups comes to mind. – David Stewart Jul 21 '13 at 21:52

Is it too optimistic to expect that every linear representation of a perfect central extension $H$ of $G$ is trivial on the kernel of $H\to G$? – Yves Cornulier Jul 21 '13 at 23:41

Yes, it is too optimistic: $H$ has by definition a faithful linear representation. If you wish to prove that the kernel is trivial, it is equivalent to your statement. – Aakumadula Jul 22 '13
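For reference, the central extension the question describes (rank at least 2, i.e. $n \ge 3$, as stated in the question, following Matsumoto and Milnor): the Steinberg group $\mathrm{St}_n(F)$ is the abstract universal central extension of $\mathrm{SL}_n(F)$, with kernel $K_2(F)$.

```latex
% Universal central extension of SL_n(F), rank >= 2 (Matsumoto, Milnor):
1 \longrightarrow K_2(F) \longrightarrow \mathrm{St}_n(F)
  \longrightarrow \mathrm{SL}_n(F) \longrightarrow 1
```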
The Quality of Vocational Education, June 1998

Student Achievement

Standardized tests do not measure student achievement perfectly, but they do a good enough job for people to base decisions on them. School counselors use test scores in guidance and placement; colleges use them in admissions; and the public uses them to gauge the accomplishments of schools. Most people are pleased when test scores are high, and they worry when they are low.

Vocational students do not perform as well on standardized tests as students in college programs do, and that worries some educators. Boyer (1983), for example, has concluded that vocational programs shortchange students academically. Oakes (1985) doubts that students in vocational programs get adequate preparation in basic subject areas. Lotto (1986) has concluded that vocational programs provide an inadequate preparation in the basic skills.

The Coleman report of 1966 ushered in the modern era of research on curricular tracks and student achievement (Coleman et al., 1966). Coleman's Equal Educational Opportunity Survey (EEOS) examined the relations between many individual and social factors and school learning, and it did not mince words about curricular tracks. "Tracking," Coleman wrote, "shows no relation to achievement" (p. 314). Coleman compared schools that track students with schools that do not track students, and he found no difference in test scores at the two types of schools. He concluded therefore that tracking did not make a difference in student achievement.

Coleman's focus was on differences between schools, however. He found that average test scores were very similar in schools with and without curricular tracks, but he also found that there was a great deal of variation in student achievement within schools of both types. It was this finding that stimulated a new generation of survey research on tracking.
Researchers speculated that track membership might explain some of the variation in achievement within schools. Researchers therefore began putting track membership into equations predicting the achievement of individual students within schools. Some of the resulting studies tell us little if anything about curricular effects, but other studies are far more informative.

In this chapter, I examine the variety of available studies. I first describe the studies and look at some of the differences among them. I then turn to the study findings and their implications.

The studies I review in this chapter came from two sources: (1) a computerized search of the data base of the Educational Resources Information Clearinghouse (ERIC); and (2) studies referred to in reviews located through the ERIC search. I first searched the full text of citations and abstracts in the ERIC data base from the years 1982 through September 1993 for the terms secondary education, vocational education, and academic achievement. I located a total of 74 abstracts that contained the three terms. Relatively few of the documents, however, contained relevant quantitative findings. Reviews by Weber et al. (1982) and by Mertens et al. (1980) turned out to be very useful for finding earlier studies. Through direct database searching and branching, I located 10 studies with relevant findings (table 4.1).

The studies have some things in common. Each covers either a national or state-wide population of young people. Each reports on student achievement as measured by broad tests administered near the end of high school, and each contains a quantitative description of average performance and variation in performance in vocational and nonvocational programs. The studies were not uniform in design, however. They differed in (a) method for identifying vocational students; (b) the groups to which vocational students are compared; and (c) method of analysis.

Identification of vocational students. Alexander et al.
(1978) and Echternacht (1975) used student transcripts to identify the curricular programs of students. All other researchers relied on student self-categorizations. Evans and Galloway (1973), Hilton (1971), and Jencks and Brown (1975) used self-categorizations that students made at the beginning of secondary school; Alexander and Cook (1982) and Alexander and McDill (1976) relied on categorizations that students made at the end of high school. Gamoran (1987) and Vanfossen, Jones, and Spade (1987) used self-categorizations made both early and late in high school to place students into tracks. Although some of their analyses involve students who switched tracks between grades 10 and 12, they estimate the size of tracking effects from the test scores of students who stayed in the same track over the period of the study: track-stayers (about 60 percent of the sample) rather than track movers.

Comparison groups. The most valuable studies for our purposes are those that report test scores separately for academic, general, and vocational groups. Echternacht (1975), Evans and Galloway (1973), Gamoran (1987), Hilton (1971), and Vanfossen et al. (1987) conducted studies of this sort. Other researchers reported on academic versus nonacademic groups and did not distinguish between students in vocational and general programs (i.e., Alexander et al., 1978; Alexander & Cook, 1982; Alexander & McDill, 1976; Jencks & Brown, 1975). Trent's (1982) study compares test scores of vocational and nonvocational students and does not distinguish between those in academic and general tracks.

Method of analysis. Echternacht (1975), Evans and Galloway (1973), Hilton (1971), and Trent (1982) carried out simple descriptive analyses of test scores. Alexander and McDill (1976) carried out a regression analysis using cross-sectional data. Alexander and Cook (1982), Alexander et al. (1978), Gamoran (1987), Jencks and Brown (1975), and Vanfossen et al.
(1987) carried out regression analyses using longitudinal data. The descriptive analyses present a simple statistical description of test performance by group. The goal of the regression analyses is to determine whether apparent differences among groups are actually program effects.

I discuss these differences in research design as I present the findings from the various analyses. It is necessary now to note only that all the results that I present are calculated from statistics presented in the reports. I used standard statistical equations to translate results from each study into a common metric of standard deviation units. I also used normal curve areas to convert the resulting z-scores into percentile scores.

I have divided the studies of curricular effects on student achievement into four main types. The first type of study reports on the performance of academic, general, and vocational students in terms of national norms. The second type of study applies regression analysis to cross-sectional data in order to compare the performance of academic, general, and vocational students who are similar at the end of high school in measured aptitude and in other characteristics. The third type of study applies regression analysis to longitudinal data in order to compare the end-of-school performance of academic, general, and vocational students who were similar in aptitude and background at the beginning of high school. A fourth type of study also uses regression analysis and longitudinal data, but studies of this type compare performance of students in different curricular programs who are similar not only in background characteristics, but who also take a similar number of advanced courses in core high school subjects.

Comparisons with national norms. Weber et al. (1982) wrote an authoritative review of studies examining performance of vocational students on standardized tests.
They concluded that the scores of vocational students on standardized tests fall about 0.5 standard deviations below national norms. Students in vocational programs thus fall between the 35th and 40th percentile on standardized tests. Weber and his colleagues also noted that this performance level was typical for vocational students both at the beginning and at the end of the programs.

Table 4.2 is based on results in studies cited by Weber et al. (1982). It is obvious that at the end of high school there is an achievement gap between students in academic and nonacademic programs. Students completing vocational programs score on the average 0.43 standard deviations below the national norms; students completing general programs score 0.42 standard deviations below the norm; and students completing academic programs score 0.57 standard deviations above the norm. Test scores of vocational and general students fall at the 34th percentile; test scores of students completing academic programs fall at the 71st percentile.

It is also obvious that there is an achievement gap at the start of high school between students who elect different programs. The percentile score of each group at the beginning of high school is nearly the same as the group's percentile score at the end of high school. The similarity suggests that students in the three curricular groups grow academically at the same rate. This consistency in rate of growth was an important finding of the Academic Growth Study (Hilton, 1971).

Students in the Academic Growth Study took standard achievement tests in grades 5, 7, 9, and 11. Figure 4.1 shows the relationship over time between test scores in mathematics and curricular group membership as determined by self-report in grade 11. The pattern of results is the same for other tests used in the Academic Growth Study. Two points are worth noting about the figure.
First, the lines for the academic, general, and vocational groups are nearly parallel during the junior and senior high school years. This means that none of the groups falls behind or gets ahead during the period in which students were taking vocational, academic, and general courses. Second, the lines for general and vocational groups are nearly indistinguishable from the earliest points of measurement. This similarity in pretest scores suggests that students in general programs may be a good comparison group for students in vocational programs. Comparisons of academic and vocational groups, on the other hand, are harder to justify on methodological grounds (Grasso & Shea, 1979; Slavin, 1990a; Woods & Haney, 1981).

Figure 4.1 Mean Standardized Scores on STEP Mathematics by Year and Curriculum. (Based on Hilton, 1971)

Although academic, general, and vocational groups show the same growth patterns on academic tests, it is worth noting that the groups do not show parallel growth in all areas of knowledge. Hilton (1971) has provided graphic evidence that rates of growth are very different for academic and vocational students on a test of knowledge of industrial arts (Figure 4.2). Academic and general students hardly increase at all in their knowledge of industrial arts during the middle and high school years. Vocational students, on the other hand, learn a significant amount about industrial arts during the high school years.

Figure 4.2 Mean Standardized Scores on Industrial Arts Scale by Year and Curriculum. (Based on Hilton, 1971)

Regression analyses with cross-sectional data. The evidence from simple descriptive studies is far from conclusive, however. To draw firmer conclusions, we need studies in which researchers measure background, curricular, and outcome variables on the same students. We also need statistical analyses in which researchers are able to make separate estimates of the importance of these factors.
One approach that yields such estimates is regression analysis. Table 4.3 presents the results of such analyses along with the results of the simple descriptive analyses that I have already reviewed.

Alexander and McDill (1976) used regression analysis with cross-sectional data to estimate the importance of curriculum when background factors are held constant. Their data came from a survey conducted by Johns Hopkins University researchers in 1964 and 1965. The survey covered 3,700 seniors in 18 public high schools. Alexander and McDill assumed that a number of factors influenced achievement, including background factors (e.g., socioeconomic status, number of siblings, and gender); academic aptitude; peer characteristics (e.g., the academic aptitude, socioeconomic status, and educational expectations of the student's friends); and differences in the schools that the students attended. Alexander and McDill's goal was to find out whether curricular track had an effect over and above the effect of such factors.

They found that their entire set of variables accounted for 48 percent of the variance in mathematics achievement. Academic ability was a major direct determinant of achievement, but track membership was almost as important a factor. Students in the academic track scored 0.80 standard deviation units higher than students of comparable ability and background in nonacademic tracks. The effect is a large one by almost any standard. The result suggests that moving a typical student from a nonacademic to an academic track would raise the student's mathematics test score by 0.80 standard deviations, or from the 34th percentile to the 66th percentile. In other words, nonacademic students would perform at a much higher level if switched to an academic track.

Later and better analyses of survey and test data have not supported the results of Alexander and McDill's study, however.
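Why would controlling for a concurrent, imperfect aptitude test overstate track effects? A small simulation illustrates the general mechanism (a hypothetical setup, not Alexander and McDill's data): give the curriculum no true effect, let a noisy test stand in for entering aptitude, and a spurious "track effect" survives the adjustment.

```python
import random
import statistics

def spurious_track_effect(reliability=0.6, n=20000, seed=1):
    """Simulate a curriculum with zero true effect and return the 'track gap'
    left after controlling for an unreliable concurrent aptitude measure.

    Hypothetical model: true aptitude drives both track placement and
    achievement; the proxy test observes aptitude with error, chosen so that
    reliability = var(aptitude) / var(proxy).
    """
    rng = random.Random(seed)
    error_var = (1.0 - reliability) / reliability
    proxy, outcome, academic = [], [], []
    for _ in range(n):
        a = rng.gauss(0.0, 1.0)                      # true entering aptitude
        proxy.append(a + rng.gauss(0.0, error_var ** 0.5))
        outcome.append(a + rng.gauss(0.0, 0.5))      # no true track effect
        academic.append(a > 0.0)                     # placement by aptitude
    # Regress outcome on the noisy proxy, then compare residual means by track.
    mp, my = statistics.fmean(proxy), statistics.fmean(outcome)
    beta = sum((p - mp) * (y - my) for p, y in zip(proxy, outcome)) \
        / sum((p - mp) ** 2 for p in proxy)
    resid = [y - beta * p for y, p in zip(outcome, proxy)]
    acad = statistics.fmean(r for r, t in zip(resid, academic) if t)
    non = statistics.fmean(r for r, t in zip(resid, academic) if not t)
    return acad - non
```

With reliability around 0.6 — roughly the figure reported for the test in question — a sizable residual gap remains even though tracks do nothing in this model, while a nearly perfect control (reliability 0.99) leaves almost none.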
The basic problem with Alexander and McDill's analysis is its use of aptitude data collected concurrently with the outcome data. To measure scholastic aptitude, Alexander and McDill used a 15-item multiple-choice test measuring ability to find logical relationships in patterns of diagrams. The test may have been the best one available to the investigators, but it was not good enough for this kind of analysis. For one thing, the reliability of the test was between .60 and .65, or not very high. For another, the academic aptitude measure correlated about .50 with the outcome measure of mathematics achievement.

Subsequent studies have shown that the reliability and validity of this measure are too low for work on track effects. More recent longitudinal studies have used pretest scores to predict achievement outcomes, and the investigators who have used such scores have reported much larger correlations with outcome measures. Jencks and Brown (1975) and Gamoran (1987), for example, reported correlations in the .80s between measures of achievement made at the beginning of high school and those made during later high school years. The moral is clear. Aptitude tests such as those administered in the senior year in Alexander and McDill's study do not adequately reflect the capacities that students have when they enter high school.

It is also difficult to defend logically the use of end-of-program measures of ability as control variables in studies of tracking. A basic problem is that both aptitude and achievement scores change with education. Aptitude scores on tests administered at the end of high school may therefore make good outcome variables, but they cannot serve as proxies for scores on aptitude tests administered at the beginning of high school. Studies that use end-of-school aptitude scores as predictor variables are likely to produce misleading results.

Regression analyses with longitudinal data.
Longitudinal designs overcome this basic limitation of cross-sectional studies, and most investigators have therefore used longitudinal data in their regression analyses (table 4.3).

Jencks and Brown (1975) carried out one of the first of these longitudinal analyses. They examined data from 91 predominantly white comprehensive high schools throughout the United States that had tested their students for Project Talent in the 9th grade and then had retested them in the 12th grade. Some of the students reported in the 9th grade that they were in academic programs, and others identified themselves as being in nonacademic programs. Jencks and Brown showed that academic and nonacademic students who were initially similar on pretests and in background would also be similar on outcome tests at the end of high school. The academic students averaged only 0.06 standard deviation units higher on achievement tests than did comparable nonacademic students. The effect is a trivial one by almost any standard, and Jencks and Brown concluded therefore that curricular tracks do not have much effect on students' test scores.

Alexander, Cook, and McDill (1978) examined the influence of track placement on scores in Educational Testing Service's Academic Growth Study, and they reported more substantial effects. They classified students into college and noncollege tracks based on their reported course work and self-reported curricular track. Predictor variables in their regression equations were, in addition to curricular program, socioeconomic background, gender, race, academic aptitude, and educational plans. Dependent variables were verbal and quantitative scores on the Scholastic Aptitude Test (SAT). Alexander and his colleagues found that students in the academic track scored about 16 points higher than similar nonacademic students on the SAT verbal (about 0.14 standard deviations) and 47 points higher on the SAT quantitative (about 0.36 standard deviations).
Differences between academic and nonacademic students therefore averaged 0.25 standard deviations.

Alexander and Cook (1982) reanalyzed the data from the Academic Growth Study, using student self-report data alone to identify a student's curricular track. Predictor variables were similar to those used by Alexander et al. (1978), and dependent variables were test scores in history and English and PSAT-M and PSAT-V scores. Alexander and Cook carried out several regression analyses, and one of these examined effects of curricular track with background and aptitude factors controlled. Alexander and Cook's results were very similar to Alexander et al.'s (1978) results. They found that on the average test, the academic track raised performance by 0.17 standard deviations.

Both Gamoran (1987) and Vanfossen, Jones, and Spade (1987) used HSB data in analyses similar to those of Jencks and Brown (1975) and Alexander et al. (1978). The HSB data set came from a survey of approximately 30,000 high school sophomores and seniors surveyed initially in 1980 and again in a 1982 follow-up. Both Gamoran and Vanfossen and her colleagues estimated effects of curricular tracking on students who stayed in the same high school tracks for the two-year period between sophomore and senior year in high school. Both of the research studies were based on the assumption that factors other than curricular program influenced student achievement: socioeconomic background, race, sex, and educational expectations in the 8th grade; 10th-grade social-psychological variables (friends' plans to go to college, educational expectations); and 10th-grade academic characteristics (grades so far, courses completed in the subject area of the dependent variable). The researchers therefore formed regression equations that allowed them to specify the effects of track membership with these pre-existing characteristics held constant.

Vanfossen et al.
(1987) analyzed scores on a composite achievement measure based on tests of vocabulary, reading, and math. Their regression equation predicts that a student who is average on all background factors (z-scores = 0.00) would score 0.15 standard deviations above the population mean on achievement tests if placed in the academic track, 0.08 standard deviations below if placed in the general track, and 0.20 standard deviations below the mean if placed in the vocational track. This implies that on a nationally normed test the student would score at the 56th percentile if placed in the academic track, at the 47th percentile if placed in the general track, and at the 42nd percentile if placed in the vocational track.

Gamoran (1987) analyzed scores on six achievement measures (mathematics, science, reading, vocabulary, writing, and civics). His results were similar to the findings of Vanfossen and her colleagues. According to Gamoran's regression equation, a student who was average on all background factors would score 0.10 standard deviations above the population mean on achievement tests if placed in the academic track, 0.06 standard deviations below if placed in the general track, and 0.13 standard deviations below the mean if placed in the vocational track. On a nationally normed test the student would score at the 54th percentile if placed in the academic track, at the 48th percentile if placed in the general track, and at the 45th percentile if placed in the vocational track.

Regression analyses with course work as a predictor variable. Why do students learn less in the vocational track? There are two factors to consider. First, students in vocational programs are more likely to be in the lower level of core courses that all students take. That is, they are unlikely to be in the elite sections of stratified core courses. Second, students in the vocational track take fewer advanced courses.
Compared to academic students, for example, they are less likely to take advanced math courses, advanced science courses, foreign languages, and so on.

Alexander and Cook (1982) and Gamoran (1987) carried out further regression analyses to determine whether the achievement differential for vocational and academic students could be explained by the second factor, the number of advanced courses that students take in core areas (table 4.3). They developed regression equations in which they were able to hold constant students' prior background and subsequent course work while investigating the effects of curricular track alone. The analyses complemented Alexander and Cook's and Gamoran's analyses in which only background variables were held constant.

Alexander and Cook (1982) used data from the Academic Growth Study in their analysis. They found that effects of curricular track were reduced when students were compared who took the same number of advanced courses in an area. In fact, academic, general, and vocational students all performed at the same level when both background factors and advanced courses were held constant.

Gamoran (1987) used the HSB data set in his analysis. Gamoran's analysis covered six different outcome tests. He found that on the average achievement test, academic students scored 0.08 standard deviation units higher than comparable general students who had taken the same number of advanced courses (table 4.3). General students scored 0.08 higher than comparable vocational students who had taken the same number of advanced courses.

These analyses suggest that curricular programs produce most, but not quite all, of their effects by prescribing different numbers of advanced courses for students. Non-academic students usually take fewer advanced courses in subjects like mathematics, and this affects their performance on mathematics tests.
If vocational students elected as many advanced courses in mathematics as academic students did, the gap between vocational and academic students would be narrowed. Test scores of high school students completing academic and vocational programs are clearly different. Academic students usually score at the 71st percentile on standardized achievement tests given at the end of high school (or about 0.56 standard deviations above the mean); vocational students usually score at the 34th percentile (or about 0.41 standard deviations below the mean). The achievement gap at high school graduation is therefore large. The question is, What causes it? Regression analyses suggest that the most important cause of the achievement gap is student self-selection into academic and vocational programs. If the same students enrolled in the two types of programs, graduates of the two programs would differ very little in test scores at the end of high school. A second factor contributing to the achievement gap is the different number of advanced courses in core subjects taken by academic and vocational students. Academic students take more of these advanced courses. If vocational students were as academically strong as college-prep students at the beginning of high school and they took as many advanced courses in core areas as college-prep students do, their test scores would be nearly indistinguishable from those of college-prep students at the end of high school. It is possible to quantify these results. The difference in test scores of academic and vocational students on standardized tests at the end of high school is equal to about 1.0 standard deviation. Regression analyses suggest that the gap would be about 0.2 standard deviations if similar students enrolled in academic and vocational programs. 
Thus, 80 percent of the difference in test scores of academic and vocational students at the end of high school appears to be due to the difference in aptitude of the students who enter the programs. In addition, regression analyses suggest that 10 percent of the achievement gap is due to the different number of advanced courses in core subjects taken by academic and vocational students. If vocational students were similar to academic students in aptitude and took the same number of advanced courses in core subjects, the achievement gap between academic and vocational students would be no more than 0.1 standard deviations. The remaining 10 percent of the gap is due to other curricular and program factors.

Regression analyses, therefore, suggest that moving a student from a vocational to an academic program would raise a student's test scores on academic achievement tests. The increase in scores might be as much as 0.2 standard deviations (if the student took a heavy load of advanced courses in mathematics, English, and so on) or as little as 0.1 standard deviations (if the student avoided advanced courses).

Two questions naturally arise about these regression results. First, how important are these differences? Second, how trustworthy are the analyses that produced the results?

On the question of the importance of these differences, two points are worth noting. First, an increase in test scores of 0.1 to 0.2 standard deviations is a trivial to small effect. Cohen (1977) has reviewed the educational and psychological literature on effect sizes. He concluded that an effect of 0.8 standard deviations is large, an effect of 0.5 standard deviations is moderate in size, and an effect of 0.2 standard deviations is small. Moving a student from a vocational to an academic program will have at best a small effect on the student's test scores in academic subjects.
In addition, the difference is found on standardized tests in academic subjects and such tests do not measure all the things that students learn in high school. Standardized tests give a lot of weight to skills that are useful for survival in college; they give less weight to skills and knowledge that are useful in jobs and careers. Although academic and vocational students appear to grow at the same rate in academic knowledge, they apparently grow at different rates in job knowledge. Hilton (1971), for example, has provided graphic evidence that vocational students acquire industrial arts knowledge at a quicker rate than academic students do. In addition, specific vocational programs may prepare students very well in specific academic areas. In fact, some vocational programs may outdo academic programs in specific areas. Ramey (1990), for example, has reported that business students increase their verbal skills at a faster rate in business programs than they would in an academic or general program. A more critical question is whether regression comparisons of vocational and academic students are trustworthy. Slavin (1990a) has argued forcefully that they are not. He believes that such comparisons are untrustworthy because academic and vocational students differ too much in aptitude and in too many other ways at the start of high school. For regression results to be trustworthy in such situations, measures of relevant initial characteristics would have to be both complete and completely reliable. According to Slavin, they never are. It is difficult to know therefore what conclusions to draw from regression comparisons of academic and vocational students. Even if academic and vocational tracks had identical results on students, Slavin has noted, the studies comparing the achievement of academic and vocational students would still show higher achievement for the academic track. 
We are on sounder ground with regression comparisons of students in vocational and general tracks. The students in these tracks do not differ greatly in aptitude initially, and regression problems are therefore less severe in comparisons of students in the two tracks. The regression analyses suggest, however, that general and vocational programs have roughly the same effects on student achievement. General and vocational students score at nearly identical levels on standardized achievement tests given both at the beginning and at the end of high school. General students score on the average at the 34th percentile (or about 0.41 standard deviations below the mean); vocational students score on the average at the 32nd percentile (or about 0.47 standard deviations below the mean). Regression analyses suggest that program effects are trivial. If the same students enrolled in general and vocational programs, their test scores would differ by less than 0.1 standard deviation, a trivial amount, at the end of high school.

My overall conclusion, therefore, is that academic and vocational programs may differ slightly in how well they prepare students in the broad academic skills needed in modern society. Academic programs may provide slightly better academic preparation. Requiring vocational students to pursue a college-preparatory curriculum might raise their scores on tests of academic skills by 0.2 standard deviations, but we cannot be sure. This estimate may be inflated by two methodological artifacts: imperfect reliability in the measurement of predictor variables and incomplete measurement of factors influencing student achievement. General and vocational programs, on the other hand, seem to have equivalent effects on student achievement. Moving a student from the vocational track to the general track would have no measurable effect on the student's overall achievement level.
Black Hole Images

Computer-generated image of a black hole as seen from a great distance (r=1000M, or 500 times the radius of the event horizon). From this distance, the hole itself is not very conspicuous: the best hope we would have of detecting it from Earth is by observing the brightness changes of a star passing behind it and through the region of optical distortion, which at great distances will appear much larger than the hole itself (scaling in angular size as 1/sqrt(r) rather than 1/r). Most black holes that astronomers have identified so far have been detected because they are in the process of consuming matter, which emits X-rays due to frictional effects as it spirals into the hole. This simulated one is quiescent, and there are predicted to be many such holes in the Galaxy, of roughly 3-10 solar masses, formed by supernova explosions.

This image was taken from r=100M. The outermost circular anomaly is known as the first 'Einstein ring' and corresponds to the image of a point directly behind the hole. This point would normally be obscured from view, but light passing at this distance from the hole bends through an angle that allows it to reach the observer. Inside this ring, the light at any point actually comes from the opposite side of the hole's apparent position, as the angular deflection increases. If this were a stellar-mass black hole, humans would not be able to survive at this distance (even in a free-fall orbit) because the gravitational tidal gradient is roughly 20 gees per metre. However, this effect decreases as the cube of r, so it would be hardly noticeable at the distance of the r=1000M picture.

The black hole in this image looks much closer than it really is. The field of view in these images is 90 degrees across and this was taken from r=10M, so the hole appears to have a radius of more than double that of the event horizon (2M).
This is because not all of the black area corresponds to rays "blocked" by the hole, just rays that could not reach the observer from infinity, so objects nearer the hole could appear here. The bending of light is so extreme that much of the light in the inner bands of this image actually comes from behind you. Two more Einstein rings can be seen as a 'double ring' on the edge of the black region. The outer one corresponds to light from directly behind the observer that has done a U-turn about the hole. The inner one (visible only as an arc at the top left) is light from behind the hole, which has orbited 360 degrees before escaping.

Looking sideways from r=4M.

At r=3M, the hole covers exactly half of the sky: this image is taken looking sideways at an apparently flat "horizon". In fact this is just the same optical effect as in the last picture, but to a greater extent. Below this radius, light emitted from infinity will always be spiralling inwards, so there will be no light from 'underneath' the observer. This is also the distance at which there is a circular orbit for photons around the hole, so if we were to shine a torch at the centre of this image, it would do a complete orbit and appear as a light shining from directly behind us.

At r=2.5M, the black hole covers more than half the visible sky, but we are still not within the event horizon. The whole exterior sky can now be seen in one view by looking directly away from the hole from r=2.2M (everywhere else is black). The Einstein rings are still visible within this view, so in fact it covers the whole sky more than once. Theoretically there are infinitely many Einstein rings, corresponding to light doing greater numbers of half-turns about the hole before escaping to infinity, so there will be infinitely many distorted images of the sky right at the edge of this disc. In practice only the first few will ever be visible.
This image is from r=2.05M and shows how the rear view shrinks for observers closer to the event horizon. Note that all these images are calculated for observers who are stationary relative to the outside world (their spatial Schwarzschild coordinates are constant). In reality, if you were ever unfortunate enough to be this close to a black hole, you would probably be falling inwards at a significant fraction of the speed of light, so special relativistic effects would make the view rather different. In particular, the view of the outside world would not look this small, because the Lorentz transformation acts on the sky sphere to reduce the apparent size of objects in front of you and to increase the size of things behind.
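Several of the figures quoted above can be checked against the standard Schwarzschild shadow formula for a static observer — sin(theta) = sqrt(27) (M/r) sqrt(1 - 2M/r) — which is not stated in the page itself but is the textbook result the renders are based on. A minimal sketch (geometric units, G = c = 1; the 20 gees-per-metre tidal figure is the one quoted in the text, scaled by 1/r^3):

```python
import math

def shadow_half_angle(r, M=1.0):
    """Apparent angular radius of the black region seen by a static
    observer at Schwarzschild radius r. For r < 3M the black region
    extends past 90 degrees, so we return the supplementary angle."""
    s = math.sqrt(27) * (M / r) * math.sqrt(1.0 - 2.0 * M / r)
    theta = math.asin(min(s, 1.0))   # guard against rounding past 1.0
    return theta if r >= 3.0 * M else math.pi - theta

# At r = 3M the shadow fills exactly half the sky, as the page states
print(math.degrees(shadow_half_angle(3.0)))   # ~90 degrees

# At r = 10M the shadow is noticeably larger than the naive Euclidean
# angular size of the 2M event horizon, matching the r=10M image
apparent = math.degrees(shadow_half_angle(10.0))   # ~28 degrees
naive = math.degrees(math.asin(2.0 / 10.0))        # ~12 degrees
print(apparent, naive)

# Tidal gradient scales as 1/r^3: the ~20 gees/metre quoted at r = 100M
# falls to ~0.02 gees/metre at r = 1000M
tidal = lambda r: 20.0 * (100.0 / r) ** 3
print(tidal(1000.0))
```

The r=10M comparison shows why the hole "looks much closer than it really is": the apparent shadow radius is more than double the horizon's Euclidean angular size.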
Question: Determine without graphing whether this system has one solution, no solution, or an infinite number of solutions, and explain why:

x - y = 3
3x + 3y = -3

Replies:

- they are the same line
- no they aint
- parallel at least :)
- take the 3 out of the 2nd equation
- no, different
- x + y = -1
- negative reciprocal slopes, so yes, perpendicular
- product of slopes is -1
- so one solution exists
- i knew it :) kind of... maybe
- they intersect at only one point, so only one solution exists. As these are straight lines, they cut only once.
- Summary:
  1. If the slopes are the same, then:
     A) if you can get one equation by multiplying the other equation by a constant, then they are the same line, which means infinitely many solutions;
     B) if the left-hand sides are the same and the right-hand sides are different, then the lines are parallel, which means no solution.
  2. Otherwise, if the slopes are different, then they will intersect at one point, which means exactly one solution.
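The rule in the last reply can be checked numerically. A minimal sketch (plain Python, using Cramer's rule for the 2x2 case):

```python
def classify(a1, b1, c1, a2, b2, c2):
    """Classify a1*x + b1*y = c1 and a2*x + b2*y = c2 without graphing,
    following the rule in the final reply above."""
    det = a1 * b2 - a2 * b1
    if det != 0:
        # Different slopes: the lines cross exactly once (Cramer's rule)
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        return "one solution", (x, y)
    # Same slope: either the same line (infinitely many) or parallel (none)
    if a1 * c2 == a2 * c1 and b1 * c2 == b2 * c1:
        return "infinitely many solutions", None
    return "no solution", None

print(classify(1, -1, 3, 3, 3, -3))   # one solution, at (1.0, -2.0)
```

For this system the determinant is 1*3 - 3*(-1) = 6, which is nonzero, so the lines intersect exactly once, at x = 1, y = -2.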
Tech Briefs

Component Mode Synthesis using ADINA

A dynamic model reduction technique, i.e., a component mode synthesis scheme, has been included in the dynamic analysis features of ADINA^*. The scheme is a natural and efficient procedure for the approximate frequency analysis of very large and complex structures, see Ref. [1]. Also, the method has been increasingly accepted to couple the use of multibody system programs and finite element analysis programs. An example is the use of AVL EXCITE and ADINA. Here, the use of the component mode synthesis method, and specifically the Craig-Bampton scheme, in the ADINA dynamic analysis capabilities is presented along with some demonstrative examples.

The basic calculations are to establish the Craig-Bampton transformation matrix, see Ref. [2], and then compute the reduced stiffness and mass matrices, equations (1) to (4). The vectors in the transformation matrix determine the accuracy of the frequencies and mode shapes obtained.

In our first example, we use the propeller blade model analyzed earlier. The lowest component mode synthesis frequency, 649.6 Hz, is very close to the exact frequency, 649.5 Hz, although only 9 boundary DOFs (see Figure 1) are selected to calculate the static constraint modes, and only 21 fixed interface modes are used. The movie above shows the first vibration mode.

Figure 1 Mesh of propeller blade showing the boundary nodes selected

In our second example, we consider the frequency solution of the pendulum shown in Figure 2. Only 3 boundary DOFs are selected (see Figure 3) for the constraint modes, and the lowest 7 fixed interface vibration modes are used. In Figure 4, we compare the exact lowest frequencies with the frequencies from the component mode synthesis method. The two sets of results are in very good
The two sets of results are in very good Note that the exact frequencies of the model are calculated using ADINA by simply continuing the subspace iteration with the vectors Figure 2 First four vibration modes of the pendulum Figure 3 The selected boundary node of the pendulum Figure 4 Comparison of frequencies (Hz) calculated using component mode synthesis and subspace iteration In our third and last example, we use the scheme for a larger finite element model, considered already in an earlier Brief, see Figure 5. We compare in Figure 6 the lowest frequencies calculated using the component mode synthesis scheme with 6 constraint modes and 4 fixed interface vibration modes. The two sets of results are also in very good agreement. Figure 5 Bolted wheel assembly, showing first vibration mode Figure 6 Comparison of frequencies (Hz) calculated using component mode synthesis and subspace iteration Component mode synthesis can be a powerful tool in the analysis of very large systems. However, as mentioned already, the accuracy of the frequencies and mode shapes obtained depends on the vectors used in the transformation matrix On the other hand, Equations (1) to (4) are a first subspace iteration, and in ADINA, this iteration can be continued — if the analyst so chooses — to obtain accurate frequencies and mode shapes of very large finite element models (using perhaps the ADINA DMP solution capability). We will show results using this scheme in a future Brief. 1. Bathe, K.J., Finite Element Procedures, Cambridge, MA: Klaus-Jürgen Bathe, 2006. 2. Craig, R. R. Jr. and Bampton, M.C.C., "Coupling of Substructures for Dynamic Analyses", AIAA, Vol. 6, No. 7, July 1968, pp. 1313-1319. Component mode synthesis, Craig-Bampton scheme, multibody system, Bathe subspace iteration method, constraint modes, vibration modes ^*The scheme is available in ADINA v. 9.
Using a Portfolio's Required Return to Develop an Appropriate Risk Level Edward A. Moses, Ph.D. J. Clay Singleton, Ph.D. Stewart A. Marshall III, Esq. Authors' Note: Modern Portfolio Theory has become a customary tool used by investment professionals and, as such, constitutes an industry standard that investment decision makers cannot ignore. This academic theory has become the bedrock of investment practice. We have elected to publish three articles in consecutive editions of Wealth Strategies Journal to provide its readership with an understanding of Modern Portfolio Theory and the application of this theory to pertinent issues surrounding the administration and formulation of portfolios. Sequential publication eliminates the need to redevelop Modern Portfolio Theory and other concepts in each article. Wealth Strategies Journal readers will have the option of reviewing earlier articles to clarify any points of interest in subsequent articles. This first article in this series, "Developing and Defining a Well Managed Portfolio - A Primer on Modern Portfolio Theory", provided a foundation for understanding the underpinnings of Modern Portfolio Theory. The current article describes the usefulness to an investment decision maker of developing an investment policy statement and how the statement can be used to develop a portfolio's appropriate risk level. The final article in this series is "Determining the Appropriate Withdrawal Rate for a Portfolio: The Crossover Rate." I. Introduction In the first article in this series, we broadly classified an investment decision maker (IDM) as anyone undertaking responsibilities associated with providing investment advice or implementation. We defined IDMs to include, among others, fiduciaries, trustees, investment advisers, family office directors, portfolio managers, and individuals managing personal assets. 
We also identified one of the most important tasks undertaken by the IDM to be portfolio design: assembling and maintaining a portfolio of assets with a risk tolerance suitable for the purposes, terms, distribution requirements, and other conditions of the portfolio. An appropriately designed investment policy statement (IPS) can be an invaluable guide for the IDM in portfolio design. Section II of this article will provide the rationale for developing an IPS and discuss in some detail three important elements that should be included in the IPS. A very difficult issue facing an IDM is the assessment of an appropriate risk level for a portfolio. Section III presents an approach to determining a portfolio's risk tolerance using the IPS's stated required rate of return based on the portfolio's return/spending level needs. This approach employs a simulation to demonstrate the probabilities of expected outcomes and demonstrates the impact of different risk level assumptions. Section IV summarizes how the IPS can function as a management plan for the portfolio. Included are the reason why an IDM may not accept the Efficient Portfolio produced by Modern Portfolio Theory, the IPS's usefulness in reconciling the portfolio's desired rate of return with an appropriate risk level, and finally, why the portfolio's strategic asset allocation decision is made after the portfolio's target rate of return and risk level have been established. II. The Investment Policy Statement A. The Rationale for an Investment Policy Statement. One of the most important first steps undertaken by an IDM in portfolio design is the crafting of a well-considered investment policy statement (IPS). The IPS serves as an effective guide in making decisions related to the management of a portfolio. An IPS can be likened to the strategic plan of a business. 
This plan, based on the business' Mission Statement, sets out the objectives of the company and the steps or processes necessary to achieve these objectives. This blueprint guides operational decisions that manage the business. The plan does not change, particularly in the short run, unless the facts and assumptions upon which the plan was formulated change significantly. The IPS serves the same function as the business' strategic plan, providing a guide for the consistent implementation of an investment strategy and preventing emotional reactions to events in the market place. This is not to say the IPS, like a strategic plan, never changes. It should be reviewed periodically and modified if the facts and assumptions warrant a change. Finally, it should be stressed that "one plan does not fit all." Each portfolio, like each business, has a unique set of circumstances that warrant the development of an IPS tailored specifically to the goals and objectives of the portfolio. To blindly adopt a published "model plan" or even a slight modification to a model plan in developing an IPS for a portfolio defeats the purpose of an IPS. The IPS must be individualized for each portfolio to reflect its unique characteristics. B. The Contents of an Investment Policy Statement. Numerous articles and books have been written about the development and maintenance of an IPS. Many of them are excellent guides for determining the contents of the IPS. As indicated above, every portfolio has its unique characteristics and the IPS for the portfolio should be developed with these unique features in mind. Given the wealth of articles and texts available as a guide for developing an IPS, we will not elaborate here on the overall content of a well-constructed IPS. However, there are three components of an IPS that deserve elaboration and insight. Additionally, the order in which these specific components are developed is crucial. 
The three components listed below are arranged in the order in which decisions should be made.

1. Selection of Asset Classes to be Potentially Included in the Portfolio. Perhaps one of the IDM's most important investment related decisions is determining the appropriate asset classes to be considered for inclusion in the portfolio. If the choices selected are too few, the probability of achieving a well-diversified portfolio is extremely low. As illustrated in the first article in this series, the selection of asset classes to be considered for the portfolio creates the appropriate set used in the analysis. Quite often after asset classes are identified, the IDM determines a strategic asset allocation among these asset classes without the benefit of an Efficient Frontier analysis and establishes the allowable deviations from that allocation. This approach is problematical because it can limit the Efficient Frontier and force the IDM's portfolio choices into too narrow a range of expected returns and risk levels. Assume the IDM selects domestic large and small cap plus foreign equity, T-Bills, government and corporate fixed income, and real estate as the appropriate set of asset classes. Some IDMs might allocate the portfolio among these asset classes using rules of thumb. A typical result, along with allowable lower and upper deviations, is shown in Figure 1.

Figure 1 Initial Allocation and Allowable Deviations

                  Lower Limit   Allocation   Upper Limit
Small Stocks          12%          15%          18%
Foreign Stocks         9           12           15
Large Stocks          30           35           40
Real Estate            8           10           12
Corp. Bonds           10           13           16
Govt. Bonds            8           10           12
T-Bills                3            5            7

While this portfolio might seem reasonable, the Efficient Frontier analysis in Figure 2 shows this portfolio is severely constrained. In Figure 2 the allowable deviations were imposed as constraints (e.g.
the 15% allocation to small stocks was limited to between 12% and 18%), and two Efficient Frontiers were generated - one without these constraints (labeled Unconstrained) and one with (labeled Constrained). The Constrained Efficient Frontier is very short, reflecting the limited opportunities available to the IDM with respect to the portfolio's expected return and risk.

Figure 2 Constrained and Unconstrained Efficient Frontiers

The better approach is to establish target allocations that are consistent with the IPS and based on risk and return expectations produced by an Efficient Frontier analysis. After all, targets should reflect the best possible allocation under the circumstances, and upper and lower limits associated with the final allocation decision are nothing more than a guide to portfolio rebalancing.

2. Determination of the Target Rate of Return for the Portfolio. Many factors enter into the selection of the target rate of return, some controllable by the IDM and others dependent on factors outside the IDM's control. Examples of the latter include expected inflation and a minimum level of administrative expenses. Controllable factors include the withdrawal rate from the portfolio, the desired real growth in portfolio value (return above the rate of inflation and after withdrawals), and, ultimately, risk. As will be shown, the determination of the target rate of return is subject to change once the risk level associated with this rate of return is assessed.

3. Determination of the Risk Tolerance for the Portfolio. Perhaps there is no more vexing problem for an IDM than determining an appropriate risk level for a portfolio. It is well known that all investors desire high returns and low risk. It is an axiom of finance that return and risk move together; the higher the desired return, the higher the necessary exposure to risk. Thus, there is a tradeoff between desired return and risk.
As demonstrated in the following sections of this article, an IDM can estimate the investor's risk tolerance through an iterative process. This process involves determining initially the desired rate of return consistent with the IPS and then assessing the risk level required to achieve that return. If the risk is higher than a tolerable level, then the desired return must be adjusted downward to accommodate a lowering of the risk. It is possible the opposite occurs. The initial desired rate of return may suggest a risk level that is below a tolerable level. In this instance, elements of the desired return controllable by the IDM can be increased. III. The Appropriate Level of Risk for a Portfolio A. The Portfolio's Target Rate of Return. Let us assume we have a well-considered IPS for a family. This document would specify the portfolio's target rate of return consistent with the family's goals and objectives. For example the investment policy of a portfolio with a specified withdrawal rate for income purposes for the older generation and the remaining portfolio value after their death allocated to the younger generation would be initially designed to provide periodic income desired by the older generation. To be sustainable, the income target would have to be consistent with the life expectancy of the older generation, the initial portfolio value, consideration of the remaining amount available to the younger generation, and the expected rates of return on the constituent asset classes to produce the specified level of return. Assuming all parties have agreed on the portfolio's initial target rate of return, we can proceed to analyze the appropriate risk level. That risk level, in turn, provides feedback for the IDM's construction of the investment portfolio. Finally, the risk level may need to be adjusted in light of the family's risk tolerance. B. Risk and Rate of Return. In the first article in this series we introduced the Efficient Frontier. 
This technique finds the best possible combination of asset classes - best in the sense the portfolios along the Efficient Frontier all offer the lowest risk for their level of expected return. This collection of best portfolios is produced by examining all combinations of assets in the appropriate set - those asset classes the IDM deems suitable possible investments. Although IDMs who are experienced investment professionals could forecast and justify their own independent asset class returns, risks, and correlations, historical records are probably the best source for these forecasts. The same history of asset class performance is widely available to everyone. For purposes of this article we will assume the IDM has determined departures from these numbers are unwarranted.

C. Using the Historical Record. Historical rates of return on seven popular asset classes are shown in Figure 3. Assume the IDM has selected these seven asset classes as appropriate for the portfolio. The column labeled average return shows the average annual return (including dividends and capital appreciation) produced by these seven asset classes.

Figure 3 Annual Historical Returns on Seven Indices* (all statistics in %)

                  Average Return   Standard Deviation
Small Stocks          17.45              22.71
Foreign Stocks         9.48              15.78
Large Stocks          11.49              14.83
Real Estate           16.72              15.36
Corp Bonds             8.41               7.92
Govt Bonds             8.56               9.61
T-bills                3.83               0.47

* Figure 3 is based on actual annual returns from 1972 through 2006.

This Figure makes three main points:

1. Lessons from the Historical Record. First, this historical experience sets the range of returns that have occurred and, therefore, under our assumptions, are likely to occur on average in the future. An IDM seeking a 17.45% rate of return, for example, would have to invest the entire portfolio in small stocks. This approach, of course, would be contrary to the importance of diversification in portfolio construction.
A diversified portfolio would have to accept a more modest return objective.

2. Asset Class Risk and Return. Second, every target rate of return carries some risk. Even a portfolio dedicated to Treasury bills carries some risk, as the standard deviation column in Figure 3 suggests. The standard deviation indicates the amount of variation around the annual average return. Every asset class has a standard deviation, and common investment practice is to take this statistic as a measure of risk. It produces an intuitive ranking of these asset classes: most people recognize that bonds are more risky than Treasury bills, real estate is more risky than bonds, and stocks become more risky as one moves from large stocks, to foreign stocks, to small stocks. Experience with the capital markets reflects the interaction of millions of investors and billions of dollars over many years. We can, therefore, use this historical information to translate the portfolio's expected return requirement into a risk level.

3. An Alternative Portfolio. If we assume the portfolio's desired rate of return is 12% per year, we could construct a portfolio that was invested 43% in real estate and 57% in corporate bonds (.43 x 16.72% + .57 x 8.41% = 12%). This portfolio, however, would carry more risk (i.e., be less efficient) than other portfolios that are expected to produce a rate of return of 12%. The IDM should initially use the Efficient Frontier to discover the portfolio that provides the least risk with an expected return of 12%. Figure 4 shows such an Efficient Frontier.

D. Using an Efficient Frontier. The Efficient Frontier shown in Figure 4 was developed following the process discussed in the first article of this series.
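The blend arithmetic in point 3 above is one line of algebra: the real estate weight w solves w x 16.72 + (1 - w) x 8.41 = 12. A quick sketch using the Figure 3 averages:

```python
# Average returns from Figure 3, in percent.
r_real_estate, r_corp_bonds = 16.72, 8.41
target = 12.0

# Solve w * r_real_estate + (1 - w) * r_corp_bonds = target for w.
w = (target - r_corp_bonds) / (r_real_estate - r_corp_bonds)
print(f"real estate {w:.0%}, corporate bonds {1 - w:.0%}")  # ~43% / ~57%
```

The same one-liner recovers the weight for any two-asset blend and any target return between the two asset returns.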
To find the risk level associated with the efficient portfolio that produces an expected return of 12%, locate the portfolio on the Efficient Frontier that produces 12% (labeled "12% Portfolio") and read down to determine the risk level. In this example the 12% Portfolio has the asset allocation shown in Figure 5, with an expected standard deviation of 11.8%.

Figure 4
Efficient Frontier Using Historical Annual Returns for Seven Indices*

* Figure 4 is based on actual annual returns from 1972 through 2006.

Figure 5
Allocation of the Efficient Portfolio with an Expected Return of 12%

Asset Class       Allocation
Small Stocks           0%
Foreign Stocks        21%
Large Stocks           0%
Real Estate           52%
Corp Bonds            22%
Govt Bonds             0%
T-bills                5%

While this portfolio is mathematically the best (least risk) portfolio that produces an expected return of 12%, it is not well-diversified in the sense that it does not contain all the asset classes the IDM judged to be consistent with the IPS. This problem can be addressed, however, by nearly efficient portfolios that have the same risk but somewhat less expected return. In other words, the IDM is willing to trade some expected return for better diversification.

E. Nearly Efficient Portfolios. The Efficient Portfolio in Figures 4 and 5 has an expected return of 12% and an expected standard deviation of 11.8%. The vertical line in Figure 4 that connects this portfolio on the Efficient Frontier to the horizontal axis describes another set of portfolios, all of which have a standard deviation of 11.8% and returns progressively less than 12%. By inspecting these portfolios the IDM can select one that has acceptable diversification and an expected return as close to 12% as possible. One of these portfolios is shown in Figure 6.
Figure 6
Allocation of the Nearly Efficient Portfolio with an Expected Return of 11.4%

Asset Class       Allocation
Small Stocks          14%
Foreign Stocks        17%
Large Stocks           7%
Real Estate           23%
Corp Bonds            15%
Govt Bonds            13%
T-bills               12%

This portfolio has an expected return of 11.4% and an expected standard deviation of 11.8%. Assume that the IDM selects this portfolio for risk analysis.

F. Assessing the Portfolio's Risk. The IDM can now review the Nearly Efficient Portfolio and form a judgment as to whether the risk implied by the original desired rate of return - a standard deviation of 11.8% - is suitable. For many individuals the concept of risk is more difficult than the concept of return. Figure 7 shows how the portfolio displayed in Figure 6 can be recast to assess the risk.

Figure 7
Forecast Range of the Nearly Efficient Portfolio's Wealth from One to Twenty Years in the Future

Figure 7 shows forecast wealth for one through five, ten, and twenty years into the future for the Nearly Efficient Portfolio shown in Figure 6. The portfolio is assumed to start with $1 million today. The bars above each year represent the likely dollar range covering 90% of possible outcomes, i.e., from the 5th to the 95th percentile of the distribution. The horizontal line represents the 50th percentile. The dollar figures are the fifth and ninety-fifth percentiles for representative Years 1, 10 and 20. For example, the wealth forecasts for Year 1 range between $968,600 (at the 5% level) and $1,271,800 (at the 95% level). Note that the fifth percentile stays close to the $1 million assumed starting value for many years, and does not exceed $2 million until after year ten. Representing the Nearly Efficient Portfolio this way often helps individuals understand the implications of the asset allocation decision.

G. Adjusting the Portfolio's Risk. Assume after evaluating the wealth ranges for the Nearly Efficient Portfolio, the conclusion is reached that the range of wealth values over the years is too large and represents too much risk.
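A fan chart like Figure 7 can be approximated with a short Monte Carlo simulation. The sketch below assumes normally distributed annual returns with the Nearly Efficient Portfolio's 11.4% mean and 11.8% standard deviation; the original figure was presumably generated with a different return model, so its dollar figures will not match exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
start = 1_000_000.0
mean, stdev = 0.114, 0.118     # Nearly Efficient Portfolio (Figure 6)
years, paths = 20, 100_000

# Draw one simulated annual return per year per path and compound them.
returns = rng.normal(mean, stdev, size=(paths, years))
wealth = start * np.cumprod(1.0 + returns, axis=1)

for yr in (1, 10, 20):
    p5, p50, p95 = np.percentile(wealth[:, yr - 1], [5, 50, 95])
    print(f"Year {yr:2d}: 5th ${p5:,.0f}   median ${p50:,.0f}   95th ${p95:,.0f}")
```

The widening gap between the 5th and 95th percentiles over the horizon is exactly the risk picture an IDM would walk a family through.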
In this case, the IDM might choose a different portfolio, close to the original Efficient Portfolio, that has less risk and, therefore, less expected return than the Nearly Efficient Portfolio. This portfolio, shown in Figures 8 and 9, is labeled the Alternative Nearly Efficient Portfolio. It has an expected return of 11.2% and an expected standard deviation of 10.3%, placing it slightly below and to the left of the 12% Efficient Portfolio shown in Figure 4.

Figure 8
Allocation of the Alternative Nearly Efficient Portfolio with an Expected Return of 11.2%

Asset Class       Allocation
Small Stocks          10%
Foreign Stocks        10%
Large Stocks          25%
Real Estate           15%
Corp Bonds            25%
Govt Bonds            10%
T-bills                5%

Figure 9
Forecast Range of the Alternative Nearly Efficient Portfolio's Wealth from One to Twenty Years in the Future

With this portfolio, the IDM can point out the comparisons with the Nearly Efficient Portfolio's risk and return. Notice that the range of wealth for the Alternative Nearly Efficient Portfolio is narrower than that for the Nearly Efficient Portfolio for all years. For example, in Year 1 the Alternative Portfolio has a higher 5th percentile ($988,400) compared to the Nearly Efficient Portfolio in Figure 7 ($968,600) and a lower 95th percentile ($1,219,300 versus $1,271,800). The differences are easiest to see over long periods of time, with Year 20 demonstrating the biggest difference. This approach can help individuals understand the risk-return trade-off inherent in a more aggressive portfolio.

H. The Optometrist Approach. This process of moving to lower risk portfolios near the Efficient Frontier (or to higher risk portfolios if the desire is to increase risk) can be repeated until an appropriate level of risk tolerance is found. Once a risk comfort level is established, the portfolio's asset composition is also determined. Like an optometrist alternating lenses and asking, "Can you see the chart better now?"
the IDM continues to adjust the portfolio's risk up or down until the range of wealth values for a portfolio near the Efficient Frontier meets the family's risk preference.

IV. Conclusions

A. The Investment Policy Statement. A well constructed and implemented IPS is an important step in an IDM's management of a portfolio. It provides a guide for consistent implementation of an investment strategy based on the circumstances associated with a particular portfolio and prevents irrational reactions to events in the marketplace. The appropriate contents of an IPS are well documented in the literature. When constructing an IPS, the IDM should pay particular attention (in the following order) to selection of the appropriate set of assets to be included in the portfolio, determination of the target return, and assessment of the appropriate risk tolerance. The selection of the portfolio's potential assets determines the appropriate set, which in turn determines the Efficient Frontier. Estimation of the target return established in the IPS identifies the appropriate portfolio on the Efficient Frontier. The location of the portfolio on the Efficient Frontier can be used to assess the portfolio's expected risk for that return.

B. Diversification and the Efficient Frontier Portfolio. It is possible, perhaps likely, that the Efficient Frontier Portfolio will not meet the diversification requirements of the IDM. In this instance, the IDM can establish a better diversified portfolio by forming a portfolio below the expected return of the Efficient Portfolio. This portfolio, which we have identified as a Nearly Efficient Portfolio, will have the same risk level as the Efficient Portfolio.

C. Determining the Appropriate Level of Risk. The desired return and risk are inextricably related; the higher the required return of a portfolio, the higher the risk exposure of the portfolio.
Using the targeted return established in the IPS to locate an Efficient Portfolio allows the IDM to identify the expected risk of the portfolio expressed in terms of its standard deviation. The Nearly Efficient Portfolio will have the same standard deviation as the Efficient Portfolio. While the standard deviation is a common indicator of risk used by academics, it can be difficult for individuals to appreciate its significance. Using a simulation it is possible to convert this risk measure into potential ending dollar values for the portfolio. The higher the risk level, the larger the potential future fluctuations in the portfolio's dollar value. The IDM can determine whether the ranges of potential ending dollar values are acceptable. If the potential ranges of portfolio values are deemed unacceptable, then the target return of the portfolio must be reduced in increments until an acceptable level of risk is found. It is also possible that the initial target return results in potential portfolio value fluctuations that are below the level of tolerance. In this case, the target return can be increased, resulting in higher portfolio risk. This process - increasing or decreasing the target return - is repeated until the risk tolerance of the portfolio is established.

D. Strategic Asset Allocation. After the target rate of return and corresponding risk level have been determined, the portfolio's strategic asset allocation can be established. If the strategic asset allocation and the upper and lower limits around the allocation for portfolio rebalancing guidelines are created prior to establishing the Efficient Frontier, a Constrained Efficient Portfolio will be created, with limited asset allocation opportunities for the IDM.
iterative data flow question

From: "Jeremy Wright" <Jeremy.Wright@microfocus.com>
Newsgroups: comp.compilers
Date: 28 Apr 2006 23:51:14 -0400
Organization: Compilers Central
Keywords: analysis, question
Posted-Date: 28 Apr 2006 23:51:14 EDT

When solving forward flow problems using iterative data-flow analysis, the best order for processing the blocks is reverse post order (RPO) on the control flow graph (CFG). When solving backwards flow problems the best order is RPO on the reverse CFG. In "Iterative Data Flow Analysis, Revisited" the authors note (in section 5.2, if you are interested) that RPO on the reverse CFG is distinct from PO on the forward CFG.

Now - given that there are usually several different post order traversals possible for a graph, any given PO of the CFG may be different from a given RPO of the reverse CFG, but I have yet to come up with an example where a forward PO does not correspond to some reverse-graph RPO (for graphs with a single entry and exit).

More generally, if PO() and RPO() return the set of all possible [reverse] post orders of a graph, and rev() returns the graph with the control flow edges reversed, is there a graph G such that PO(G) != RPO(rev(G))?

Jeremy Wright
Compiler Team Leader
Micro Focus
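For what it's worth, the question can be probed by brute force on small graphs: enumerate every DFS post-order of G (one per choice of successor-visit order at each node) and every RPO of rev(G), then compare the sets. A Python sketch — the diamond CFG below is only an illustration, and for it the two sets come out equal, so it is not a counterexample:

```python
def all_postorders(succ, entry):
    """Enumerate every DFS post-order reachable from `entry`, one per
    choice of successor-visit order at each node."""
    results = set()

    def explore(children, visited):
        # Yield (order, visited) pairs for exploring `children` in every order.
        if not children:
            yield (), visited
            return
        for i, c in enumerate(children):
            rest = children[:i] + children[i + 1:]
            if c in visited:
                # Already on the DFS tree: visiting it now is a no-op.
                yield from explore(rest, visited)
            else:
                for sub, vis in node_orders(c, visited):
                    for tail, vis2 in explore(rest, vis):
                        yield sub + tail, vis2

    def node_orders(node, visited):
        visited = visited | {node}
        for order, vis in explore(tuple(succ.get(node, ())), visited):
            yield order + (node,), vis

    for order, _ in node_orders(entry, frozenset()):
        results.add(order)
    return results

def reverse(succ):
    rev = {u: () for u in succ}
    for u, vs in succ.items():
        for v in vs:
            rev[v] = rev.get(v, ()) + (u,)
    return rev

# A diamond CFG: entry A, exit D.
cfg = {'A': ('B', 'C'), 'B': ('D',), 'C': ('D',), 'D': ()}

po = all_postorders(cfg, 'A')
rpo_of_rev = {tuple(reversed(o)) for o in all_postorders(reverse(cfg), 'D')}
print(po == rpo_of_rev)   # True for this graph
```

A counterexample, if one exists, would be a single-entry/single-exit graph for which the final comparison prints False; the enumeration is exponential, so this only works for a handful of nodes.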
by William O. Douglas
Doubleday & Company, N.Y. $4.50

The Caltech community got a preview of this book when Justice Douglas visited the campus in February, on the YMCA's Leaders of America program. Now, here is the complete account of Douglas's trip to the Soviet Union in the summer of 1955, which pretty much took him from one end of the country to the other. As a report on Russia, and as a travel book, it's in a class by itself.

by Norbert Wiener
Doubleday & Co., N.Y. $5.00
Reviewed by F. Bohnenblust, Professor of Mathematics

Dr. Norbert Wiener is outstanding among contemporary mathematicians. He is, perhaps, most widely known for his book on Cybernetics, but his contributions to modern mathematics have opened many other new fields of investigation. I Am a Mathematician is the second volume of Wiener's autobiography - the first volume, Ex-Prodigy, having appeared in 1953. In Ex-Prodigy the problems of his childhood and the emotional conflicts with a brilliant, dominant father create the necessary background for a sympathetic understanding of the character and ambitions of the author. Unfortunately, the drama of human relations is less forceful in I Am a Mathematician. This volume begins at the moment Dr. Wiener enters his professional world as a young mathematician seeking a proper field for his abilities and struggling for recognition. It is progressively more episodic in character; it deals less with the elations of first discoveries and his emotional difficulties, and more with a play-by-play account of the events of his life and of his achievements through his years of maturity to his present-day position as a leader in the field of Analysis. Dr. Wiener repeatedly attempts to describe the significance of his work, which is known to professionals and will remain obscure to the layman. As a result of his work and of his extensive travels in America, Europe, China and India, Dr. Wiener has met most leading mathematicians.
The sketches of their personalities and of his experiences, often deft, amusing, or penetrating, add greatly to the interest of the book.

edited by James R. Newman
Simon & Schuster $4.95

A collection of 12 essays in which many scientists and social scientists attempt to explain their particular fields to the layman. Impressive as they are, the discussions, in general, seem to require more than a layman's interest or understanding of the reader. Among the authors represented are Bertrand Russell (Science and Human Life), Hermann Bondi (Astronomy and Cosmology), Edward U. Condon (Physics), John Bead (Chemistry), Julian Huxley (Evolution and Genetics), Clyde Kluckhohn (Anthropology) and Erich Fromm (Psycho-analysis).
Fisher Information and the Cramer-Rao Bound

The ability to estimate a specific set of parameters, without regard to an unknown set of other parameters that influence the measured data (nuisance parameters), is described by the Fisher Information matrix and its inverse, the Cramer-Rao bound. Until recently, analytic solutions to the inverse of the Fisher Information matrix have been intractable for all but the simplest of problems. Scharf and McWhorter [1] have recently shown how to compute this inverse analytically for general problems. Through this general inverse they have shown that the ability to estimate the desired parameters of the data is related to the system sensitivity to these parameters that is orthogonal to the system sensitivity related to the nuisance parameters. A summary of this result follows. Later sections apply this theory to particular spatially incoherent optical systems.

Assume that a deterministic model of a particular spatially incoherent optical, or spatially incoherent remote-sensing, system has been found. This model should include all parameters affecting the deterministic part of the measured signal. Fisher Information is then a measure of the information content of the measured signal relative to a particular parameter. The Cramer-Rao bound is a lower bound on the error variance of the best estimator of this parameter with the given system.

Let the unknown system parameters of a given system be denoted by the length-p vector θ, where the noiseless measurement is some vector function of these parameters, say s(θ) [4][3][2]. This bound can describe both biased and unbiased estimators. This work will consider only unbiased estimators.
The variance of any unbiased estimator of one component of θ, say θ̂_k, is bounded below by the corresponding diagonal element of the inverse Fisher Information matrix,

    var(θ̂_k) ≥ [J⁻¹(θ)]_kk,

where J(θ) is the Fisher Information matrix

    J(θ) = E{ [∂ ln p(x; θ)/∂θ] [∂ ln p(x; θ)/∂θ]ᵀ }.    (3)

For a measurement x = s(θ) + n corrupted by zero-mean white Gaussian noise of variance σ², (3) reduces [1] to

    J(θ) = (1/σ²) Gᵀ(θ) G(θ).    (4)

The matrix G(θ) = ∂s(θ)/∂θ collects the sensitivities of the noiseless measurement to each of the parameters. With the projection matrix

    P⊥_k = I − G_k (G_kᵀ G_k)⁻¹ G_kᵀ,    (5)

where G_k denotes G(θ) with its k-th column g_k removed, it can be shown that the inverse of the Fisher Information matrix of (3) is given by [1]

    [J⁻¹(θ)]_kk = σ² / ‖P⊥_k g_k(θ)‖².    (6)

Here P⊥_k is a projection matrix projecting onto the space orthogonal to the space spanned by the matrix G_k. Notice that (6) is a general geometric formulation of the Cramer-Rao bound for a given general information processing system. The influence of the nuisance parameters on the estimation of the desired parameters is clearly stated: only the component of the sensitivity g_k that is orthogonal to the sensitivities to the remaining parameters tightens the bound.

Consider the desired parameters as a subset of θ, with the remaining components of θ treated as nuisance parameters.

Ed Dowski
Wed Nov 1 12:38:26 MST 1995
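The geometric form of the bound is easy to verify numerically for a linear Gaussian model — a sketch with an arbitrary, made-up sensitivity matrix:

```python
import numpy as np

# Sensitivity matrix G: columns are the system's sensitivities to each
# parameter (values are arbitrary, for illustration only).
G = np.array([[1.0, 0.5, 0.2],
              [0.3, 1.0, 0.4],
              [0.2, 0.1, 1.0],
              [0.5, 0.3, 0.2]])
sigma2 = 0.25  # noise variance

J = G.T @ G / sigma2              # Fisher Information for the linear model
crb = np.diag(np.linalg.inv(J))   # Cramer-Rao bounds for all parameters

def crb_geometric(G, sigma2, k):
    """Bound on parameter k via projection orthogonal to the other columns."""
    gk = G[:, k]
    Gn = np.delete(G, k, axis=1)          # "nuisance" sensitivities
    P = Gn @ np.linalg.inv(Gn.T @ Gn) @ Gn.T
    resid = gk - P @ gk                   # component orthogonal to their span
    return sigma2 / (resid @ resid)

for k in range(G.shape[1]):
    assert np.isclose(crb[k], crb_geometric(G, sigma2, k))
```

The two computations agree for any full-column-rank G, which is exactly the geometric statement: only the part of a parameter's sensitivity not explainable by the nuisance sensitivities contributes information.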
Loudspeaker formula - Page 2 - diyAudio

There is a fairly easy way to get an intuitive feel for the 6dB point source freefield rule. For each doubling of distance the sound intensity in watts per square meter drops to one quarter, so the level changes by 10 * log(0.25) = -6 dB (-6.0206...).

The actual formula for sound level is:

dB = 10 * log(Intensity/1e-12)

such that 1 watt per square meter is 120dB. If you wish to use pressure rather than intensity you may use:

dB = 20 * log(P/0.00002), where P is sound pressure in Pascals.

0.00002 is the sound pressure in Pascals defined to be the 0dB level.

Bonus question: What happens to SPL when the distance goes to zero, assuming dB = 90 at 1 meter?

Our species needs, and deserves, a citizenry with minds wide awake and a basic understanding of how the world works. --Carl Sagan
Armaments, universal debt, and planned obsolescence--those are the three pillars of Western prosperity. —Aldous Huxley
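The distance rule in the post fits in one line of code — a sketch, with the 90 dB at 1 meter reference taken from the bonus question:

```python
import math

def spl_point_source(distance_m, ref_db=90.0, ref_distance_m=1.0):
    """Free-field point-source SPL: about -6.02 dB per doubling of distance."""
    return ref_db - 20.0 * math.log10(distance_m / ref_distance_m)

print(spl_point_source(1.0))    # 90.0
print(spl_point_source(2.0))    # ~83.98  (one doubling: -6.02 dB)
print(spl_point_source(0.001))  # 150.0
```

As the distance goes to zero the formula diverges to +infinity, which is just the ideal point-source model breaking down: a real driver has finite size, and the far-field inverse-square law stops applying close to the source.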
Symmetries and level-appropriate teaching

This fall I'm going to be teaching honors introductory mechanics to incoming undergraduates - basically the class that would-be physics majors take. Typically when we first teach students mechanics, we start from the point of view of forces and Newton's laws, which certainly parallels the historical development of the subject and allows students to build some physical intuition. Then, in a later class, we point out that the force-based approach to deriving the equations of motion is not really the modern way physicists think about things. In the more advanced course, students are introduced to Lagrangians and Hamiltonians - basically the Action Principle, in which equations of motion are found via the methods of variational calculus. The Hamiltonian mechanics approach (with action-angle variables) was the path pursued when developing quantum mechanics, and the Lagrangian approach generalizes very elegantly to field theories. Indeed, one can make the very pretty argument that the Action Principle method does such a good job giving the classical equations of motion because it's what results when you start from the path integral formulation of quantum mechanics and take the classical limit.

A major insight presented in the upper division course is Noether's Theorem. In a nutshell, the idea is that symmetries of the action (which is a time integral of the Lagrangian) imply conservation laws. The most famous examples are:

(1) Time-translation invariance (the idea that the laws of physics governing the Lagrangian do not change if we shift all of our time parameters by some amount) implies energy conservation.

(2) Spatial translation invariance (the laws of physics do not change if we shift our apparatus two feet to the left) implies conservation of momentum.

(3) Rotational invariance (the laws of physics are isotropic in direction) implies conservation of angular momentum.
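Example (2) has a Newtonian counterpart that is easy to check numerically: if the interaction depends only on the separation between particles (a translation-invariant potential), the pair forces come out equal and opposite and total momentum is conserved. A quick sketch — the spring constant, masses, and initial conditions are arbitrary:

```python
# Two particles in 1D interacting through V(x1 - x2) = 0.5*k*(x1 - x2)**2.
# Because V depends only on the separation, F1 = -dV/dx1 = -F2 exactly,
# so each integration step leaves the total momentum unchanged.
k = 3.0
m1, m2 = 1.0, 2.5
x1, x2 = 0.0, 1.3
v1, v2 = 0.4, -0.1
dt = 1e-3

p0 = m1 * v1 + m2 * v2
for _ in range(10000):
    f = -k * (x1 - x2)        # force on particle 1; particle 2 feels -f
    v1 += f / m1 * dt
    v2 += -f / m2 * dt
    x1 += v1 * dt
    x2 += v2 * dt

p = m1 * v1 + m2 * v2
assert abs(p - p0) < 1e-9     # conserved to floating-point roundoff
```

Shifting both x1 and x2 by the same constant changes nothing in the dynamics, which is the translation invariance doing the work.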
These classical physics results are deep and profound, and they have elegant connections to operators in quantum mechanics. So, here's a question for you physics education gurus out there. Does anyone know a way of showing (2) or (3) above from a Newton's law direction, as opposed to Noether's theorem and Lagrangians? I plan to point out the connection between symmetry and conservation laws in passing regardless, but I was wondering if anyone out there had come up with a clever argument about this. I could comb back issues of AJP, but asking my readers may be easier.

16 comments:

So, what is Newton's second law? It's just F = m*a = m*(d(velocity)/dt). And for essentially all cases of interest in classical physics (except perhaps friction, but that's another story, and as far as I know friction isn't even really easy to deal with as regards Lagrangians and Hamiltonians...) we may write the force as the negative gradient of a potential (conservative forces), F = -grad V.

So, what do we mean by spatial-positional independence of the laws of physics? I am under the impression that it implies that the gradient of V would vanish; otherwise there's a spatial dependence in the potential energy and the laws of physics are not invariant under a spatial translation. Thus, grad V = 0, and thus F = -grad V = 0 = d(momentum)/dt, so momentum is a constant in time.

In Newtonian terms, torque = d(angular momentum)/dt = r cross Force = (again, for all cases of interest) -r cross (grad V). I suppose now we figure that if r cross (grad V) is not zero, that would imply that there is a component of grad V that is perpendicular to the displacement from the origin. Intuitively it seems easy to realize (from pictures and diagrams) that this would mean that the physics is different at different angles relative to your origin.
Thus, just as with the linear momentum - translation symmetry case, in order for the physics to be angle independent, you HAVE to have (grad V) parallel to the displacement relative to your origin, meaning r cross F has to be zero, meaning d(angular momentum)/dt has to be zero.

In fact, now that I think about it, it is pretty easy to reason that r cross (grad V) = 0 if and only if the physics is isotropic. Because when you say r cross (grad V) = 0, it is equivalent to saying that there is no gradient in any direction perpendicular to the outward radial vector, as I have discussed in the previous post. So what does this mean? Any spatial dependence of the potential, and thus the physics, has to be in the outward radial direction -> it's isotropic.

Paul said...
Awww, Hannon's not teaching 111? It was that class that made me (partially) switch over from Chemistry to Physics. Is he going to be teaching any other classes this year? In any case, best of luck with that class. I'm sure you'll do well.

Sorry to bug you again... but I can simplify everything I talked about with a concise summary.

Linear Momentum: Assuming translation invariance is equivalent to assuming V is not a function of position. Thus, Force = d(momentum)/dt = -grad V = 0, so linear momentum is conserved.

Angular Momentum: Assuming rotational (isotropic) invariance is equivalent to assuming the potential is only a function of radial distance, V = V(r), and not angle. By simple calculus, this means grad V has only a radial component parallel to the radial displacement r. Thus, Torque = r x Force = -r x grad(V) = 0 = d(angular momentum)/dt, so angular momentum is clearly conserved.
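The torque argument above can also be checked numerically. For a central potential, every force kick is radial and every position update is a straight-line drift, and both operations preserve L = x*vy - y*vx exactly, so a kick-drift integrator conserves angular momentum to roundoff even while energy drifts slightly. A sketch with an attractive inverse-square force and arbitrary initial conditions:

```python
# One particle (unit mass) in 2D under a central force a = -x/r^3.
x, y = 1.0, 0.0
vx, vy = 0.0, 0.9
dt = 1e-3

L0 = x * vy - y * vx          # angular momentum
for _ in range(20000):
    r3 = (x * x + y * y) ** 1.5
    vx += -x / r3 * dt        # radial kick: r cross (c*r) = 0, L unchanged
    vy += -y / r3 * dt
    x += vx * dt              # straight-line drift: (r + h*v) cross v = r cross v
    y += vy * dt

L = x * vy - y * vx
assert abs(L - L0) < 1e-9     # conserved to floating-point roundoff
```

Replacing the radial force with anything angle-dependent (so V is no longer a function of r alone) breaks the conservation immediately, which is the isotropy argument in numerical form.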
If it has none, then that's an immediate sign that momentum and/or angular momentum is conserved by Noether's theorem, respectively. You don't even have to do a calculation, just look at the form of the potential and your good to go. - Tahir Joe Renes said... Looking over my old notes from ug days, I see that we did not specifically discuss the connection between translation invariance and conservation of momentum. The argument we used treats all forces as being due to some other object(s), and conservation of momentum follows immediately from the 2nd and 3rd laws and assumes of course that these laws hold regardless of the location of the objects. Similarly, I think one can say that rotation invariance means that the force on one object due to another must be along the line joining them, which results in conservation of angular momentum since the lever arm and force point in the same direction. These arguments do seem more casual about the connection between symmetry and conservation than Noether's theorem, but perhaps that's because we're using Newton's laws in the first place. That is, the causality is backwards: momentum conservation from translation symmetry is not something that should come out of Newton's laws, rather it is something that goes in to coming up with them in the first place. Thus, when we use the argument above and just start from Newton's laws, we're showing that this connection was inherent in Newton's laws all along. Vincent said... Why does translation invariance imply momentum conservation? I ask because as I understand it translation invariance is true in general, but momentum conservation is only true in the classical limit of c=inf, and I don't see a very simple way to say that because we can move our experiment arbitrarily and have it act out the same, we must therefore have the momentum conserved. On a side note, isn't momentum conservation just an axiom of newtons laws, with angular momentum following as consequence? 
CarlBrannen said...
Hmmm. I show my algebra-based associate degree introductory physics students that conservation of momentum immediately follows from F=ma and "for every action there's an opposite and equal reaction"; one replaces a by the change dv/dt (without calculus). I wonder if that is related.

Doug Natelson said...
Tahir - Thanks for your remarks. I'd already been thinking about conservative force fields (gradients of scalar potentials) as an example. However, that doesn't really get at the deeper issue. In some sense, the deep question is, how does translational invariance imply Newton's 3rd law (which is really critical to momentum conservation for multiparticle systems)? The forces involved in interparticle interactions in general don't have to be described as the gradients of scalar potentials....

Paul, yes, after more than a dozen years, Jim is going to teach 301 instead of 111 in the fall. He'll still do 112 in the spring, though.

Vincent, momentum is always conserved, even in the relativistic (c != infinity) case, as long as momentum is properly defined. Your response makes my point though - the connection between translation invariance and momentum conservation isn't obvious.

Joe, I think your last paragraph is insightful - I need to think more about this.

Hmm... I guess I see how you'd like something a bit more intuitive and physical. Forgive me, I've been thinking about it from another point of view, and please allow me to present it in a way that I hope better fits what you are looking for.

So when we say a system of particles in mechanics is translationally invariant, we are saying: if we had a big box that hosted the entire system, and we shifted the box and the entire system's coordinates over by some arbitrary displacement, and then looked at it from this new position, the physics would be the same.
Essentially, we could freeze time, put the system in the box, shift it, turn time back on, and we wouldn't notice a thing had changed if we were in the box. For a truly translationally invariant system, this should be the case no matter what the box, as long as it enclosed everything in the system in question.

Now, imagine if the physics were to indeed change were we to move that box and everything in it to another position. That would imply that there is no translational invariance. It would also imply that there has to be some external variable at play that is affecting the box and the system as a whole. Otherwise how could the physics change if we moved our box? There has to be SOMETHING causing the change. And saying that there has to be something there is equivalent to saying, in Newtonian terms, that there's got to be a NET EXTERNAL FORCE on the box and the system. That net external force is the external variable acting on the box and the whole system, and it is the only way to change the physics if you shift the box by some displacement.

So if we are saying that the system's physics is translationally invariant, that's saying there can't be a net external force on the system, so the net force on the system is ENTIRELY DUE TO INTERNAL FORCES BETWEEN THE PARTICLES. Because as is known, the net force on an arbitrary system of particles is the sum of any external forces on the system as a whole, and all internal forces between all particles acting together. So the above argument shows that translational invariance implies there is no external force and the net force is entirely internal, in a nutshell.

Momentum conservation, by the 2nd law, is equivalent to saying the net force is zero, meaning all internal forces cancel; in other words, for every action there has to be an equal and opposite reaction. Thus, as I see it, Newton's third law and momentum conservation from translational symmetry are essentially two sides of the same coin, so to speak.
This coin is a postulate that Newton came up with, and if I understand correctly it is not derivable from Newton's 2nd law itself. Sorry for my wordiness.

Joe Renes said...
I was led to my final comment by reading a bit about the history of Newton's laws in Julian Barbour's book "The Discovery of Dynamics". Reading a bit more carefully now, it seems that Newton came upon the idea of the third law from studying collisions, and his notion of the third law (which wasn't then yet precisely formulated) is very much bound up with the conservation of momentum. Here's a quote from Newton's Waste Book, written sometime between 1664 and 1666, regarding two colliding objects p and r (page 508 of Barbour):

"119. If r presse p towards w then p presseth r towards v [w and v are opposite directions of course]. Tis evident without explication.
120. A body must move that way which it is pressed.
121. If 2 bodys p and r meet the one the other, the resistance in both is the same for soe much as p presseth upon r so much r presseth on p. And therefore they must suffer an equall mutation in their motion [momentum]."

Of course, none of that has to do with momentum conservation from translational invariance, but I think Tahir's argument works nicely to show that that statement is essentially the same as the third law. If I understood the reasoning correctly, we can get momentum conservation from translational invariance via the 2nd and 3rd laws, or we can get the 3rd law via the 2nd law and the statement that momentum conservation is implied by translation invariance.

There is one possible objection I thought of: why shouldn't we allow the possibility of a constant force acting on all particles? The system is translationally invariant, inasmuch as the force is the same everywhere, but of course momentum is not conserved. Momentum minus force times time is conserved though.
This situation is a bit subtle in analytic mechanics as well, because now the Lagrangian isn't translationally invariant, even though it "should" be. Actually, it is, once you remember the caveat that the Lagrangian is only defined up to gauge terms which are total time derivatives of a function of position and time. If we use the function Ft, then the symmetry is restored (the gauge term can depend on the translation) and Noether's theorem leads to the correct conservation law. Not sure where I'm going with this!

Joe's counterexample of a constant force is an interesting one. On the surface, it appears that indeed such a system should be translationally invariant. I am not sure if I am correct with this reasoning, but I wonder if we could argue that in reality the system is not truly translationally invariant if subject to a constant force. I guess I am imagining the example of a constant force due to a constant electric field created by an ideal infinite parallel-plate capacitor. Then if your system of particles, with charge, say, were sandwiched in the middle of that ideal capacitor, it would indeed be feeling a constant force. However, if you translate the system beyond the edge of the parallel-plate capacitor, then the net electric field and force is zero, so in that sense the translational invariance is broken. I guess (and here, I confess, I am kind of waving my hands and am not sure) you could imagine expanding that parallel-plate capacitor's walls out to plus and minus infinity so that the uniform electric field becomes ever more present "everywhere", but there would kind of always still be a translation beyond an edge that breaks the physical invariance...

Joe Renes said...
@Tahir: I agree! In principle the constant force scenario has the limitations you mention, since ultimately we want to talk about the degrees of freedom which are responsible for the force in the first place.
If we assume that we can do this, then we're back to the case of no external force and can proceed as before. This seemed like an interesting point not because I expected it to wreck the arguments made so far, but because it's a weird case of translational invariance that doesn't quite fit the usual conception, especially in Lagrangian mechanics. Possibly a good homework question, or even more cruelly, a question for an oral exam!

Yes, indeed Joe, such a thing would indeed be quite fun to orally examine a student on. I can just imagine professors torturing graduate students preparing for their oral qualifying candidacy exam...hehehehe. Ironic that I find such torture amusing since I myself am taking said exam in a few months.

So the picture is really coming together now. In Lagrangian/Hamiltonian mechanics, Noether's theorem leads to the conservation of p - Ft. We can reconcile this with simple Newtonian intuition by, as you said, figuring that when we talk about a "translationally invariant system", our system HAS to include the external degrees of freedom responsible for the force. Thus, the total momentum of the system, which is the relevant conserved quantity, is not just the p of the particles, but also the -Ft, which is the additional momentum of the additional force-creating degrees of freedom in the system. I think we should co-write an article for the American Journal of Physics, Joe. What do you think? ;-)

QuasiNewton said...
It's the generating momentum that takes you from configuration A to B that are related by the corresponding symmetry operations. Without explicit time dependence, the change in energy only depends on the change in generating momentum. If there is no change in energy, there is no change in momentum either. (This is a mere rephrasing of the "cyclic coordinates" idea of Lagrangian mechanics, but there might be no need to introduce the Euler-Lagrange equation, if one simply explains that e.g.
an angular momentum rotates a system and that all resulting changes in the system can be characterized by the angular momentum).

Doug Natelson said...
Thanks to all of you for a vigorous discussion! I'm traveling right now, or I would post more.

Joe Renes said...
@Tahir -- send me an email and let's discuss! =)
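The gauge-term argument sketched in the comments (a Lagrangian invariant only up to a total time derivative, giving the Noether charge p - Ft) can be written out in a few lines. This is a standard one-particle, 1-D textbook calculation; the symbols m, F, and ε here are illustrative, not anything specific from the thread:

```latex
\begin{align*}
  L &= \tfrac{1}{2} m \dot{x}^{2} + F x
    && \text{1-D particle in a constant force } F \\
  \delta L &= \epsilon F = \frac{d}{dt}\bigl(\epsilon F t\bigr)
    && \text{under } x \to x + \epsilon \text{: a total derivative, i.e. a gauge term} \\
  Q &= \frac{\partial L}{\partial \dot{x}} \cdot 1 \;-\; F t \;=\; p - F t
    && \text{Noether charge, with gauge term } \Lambda = \epsilon F t \\
  \frac{dQ}{dt} &= \dot{p} - F = m \ddot{x} - F = 0
    && \text{by the equation of motion } m\ddot{x} = F
\end{align*}
```

So the "symmetry up to a gauge term" really does hand back exactly the conserved quantity the Newtonian argument predicts.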
• Magnitude[vector] returns the length of vector.
• vector may be a simple vector or a Mech vector object with head Vector, Line, or Plane (3D).
• Magnitude is also an option for Force and Moment (3D). Magnitude->Relative causes the magnitude of the applied load to be multiplied by the length of the load vector.
• Magnitude is also an option for Extrude (3D). Magnitude->Relative causes the extrusion distance to be multiplied by the length of the extrusion vector.
• Magnitude->Absolute causes the magnitude of the applied load or the extrusion distance to be equal to the specified magnitude.
• Magnitude[bnum, lpnt] is interpreted as Magnitude[Vector[bnum, lpnt]].
• Other Magnitude[args, ... ] instances are interpreted as Magnitude[Line[args]].
• The default setting is Magnitude->Absolute.
• See also: Direction, Distance, Unit.

Further Examples

Load the Modeler2D package. Direction returns the direction vector of any Mech geometry object that has the property of having a direction. Magnitude returns the length of the direction vector of a Mech vector object, taking advantage of some simplifications. Similarly, Unit takes advantage of the fact that the magnitude of the vector is not a function of the coordinate system that the vector is expressed in. If the vector is already expressed in global coordinates, then such simplifications are not possible.
Seismic-Hazard Maps for the Conterminous United States, 2008
Scientific Investigations Map 3195
Prepared in cooperation with the California Geological Survey
By Mark D. Petersen,^1 Arthur D. Frankel,^1 Stephen C. Harmsen,^1 Charles S. Mueller,^1 Kathleen M. Haller,^1 Russel L. Wheeler,^1 Robert L. Wesson,^1 Yuehua Zeng,^1 Oliver S. Boyd,^1 David M. Perkins,^1 Nicolas Luco,^1 Edward H. Field,^1 Christopher J. Wills,^2 and Kenneth S. Rukstales^1

Probabilistic seismic-hazard maps were prepared for the conterminous United States portraying peak horizontal acceleration and horizontal spectral response acceleration for 0.2- and 1.0-second periods with probabilities of exceedance of 10 percent in 50 years and 2 percent in 50 years. All of the maps were prepared by combining the hazard derived from spatially smoothed historic seismicity with the hazard from fault-specific sources. The acceleration values contoured are the random horizontal component. The reference site condition is firm rock, defined as having an average shear-wave velocity of 760 m/s in the top 30 meters, corresponding to the boundary between NEHRP (National Earthquake Hazards Reduction Program) site classes B and C.

This data set represents the results of calculations of hazard curves for a grid of points with a spacing of 0.05 degrees in latitude and longitude. The grid of points was contoured to produce the final representation of the seismic hazard. These maps are intended to summarize the available quantitative information about seismic ground-motion hazard for the conterminous United States from geologic and geophysical sources.

First posted December 27, 2011

Part of this report is presented in Portable Document Format (PDF); the latest version of Adobe Reader or similar software is required to view it. Download the latest version of Adobe Reader, free of charge.

^1 U.S. Geological Survey
^2 California Geological Survey, Sacramento, Calif.
Suggested citation: Petersen, M.D., Frankel, A.D., Harmsen, S.C., Mueller, C.S., Haller, K.M., Wheeler, R.L., Wesson, R.L., Zeng, Yuehua, Boyd, O.S., Perkins, D.M., Luco, Nicolas, Field, E.H., Wills, C.J., and Rukstales, K.S., 2011, Seismic-Hazard Maps for the Conterminous United States, 2008: U.S. Geological Survey Scientific Investigations Map 3195, 6 sheets, scale 1:7,000,000.

SIM3195_sheet1.pdf--map of peak horizontal acceleration with 10% probability of exceedance in 50 years
SIM3195_sheet2.pdf--map of peak horizontal acceleration with 2% probability of exceedance in 50 years
SIM3195_sheet3.pdf--map of horizontal spectral response acceleration for 0.2-second period with 10% probability of exceedance in 50 years
SIM3195_sheet4.pdf--map of horizontal spectral response acceleration for 0.2-second period with 2% probability of exceedance in 50 years
SIM3195_sheet5.pdf--map of horizontal spectral response acceleration for 1.0-second period with 10% probability of exceedance in 50 years
SIM3195_sheet6.pdf--map of horizontal spectral response acceleration for 1.0-second period with 2% probability of exceedance in 50 years
covariance problem

#1 (December 13th 2009, 10:53 AM)
Let X1, X2, and X3 be uncorrelated random variables, each with mean u and variance sigma^2. Find, in terms of u and sigma^2, Cov(X1+X2, X2+X3).

I got this far:
Cov(X1+X2, X2+X3) = E[(X1+X2)(X2+X3)] - E(X1+X2)E(X2+X3)
But I don't know how to continue this problem.

#2 (December 13th 2009, 11:42 AM, MHF Contributor)
Now expand, use linearity of the expectation, and the assumptions...

#3 (December 13th 2009, 06:25 PM)
I just obtain the 4 covariances...
$Cov(X_1+X_2, X_2+X_3) = Cov(X_1,X_2)+Cov(X_1,X_3)+Cov(X_2,X_2)+Cov(X_2,X_3)$
$= 0+0+V(X_2)+0=\sigma^2$
It's just like $(a+b)(c+d)=ac+ad+bc+bd$.
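The bilinearity expansion above is easy to sanity-check numerically. A quick Monte Carlo sketch follows; the mean, variance, and sample size are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 5.0, 2.0            # so sigma^2 = 4
n = 400_000

# Three independent (hence uncorrelated) variables, each with mean mu
# and variance sigma^2.
X1, X2, X3 = rng.normal(mu, sigma, size=(3, n))

# Sample covariance of X1+X2 with X2+X3; by bilinearity this should equal
# Cov(X1,X2) + Cov(X1,X3) + Var(X2) + Cov(X2,X3) = 0 + 0 + sigma^2 + 0.
c = np.cov(X1 + X2, X2 + X3)[0, 1]
print(c)   # close to sigma^2 = 4
```

Note the answer does not depend on the mean u at all, only on sigma^2, exactly as the algebra predicts.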
10-725 Optimization
Spring 2008

Class lectures: Mondays and Wednesdays 10:30-11:50am in NSH 1305
Recitations: Thursdays 5:00-6:20, Wean Hall 5409

Essentially, every problem in computer science and engineering can be formulated as the optimization of some function under some set of constraints. This universal reduction automatically suggests that such optimization tasks are intractable. Fortunately, most real world problems have special structure, such as convexity, locality, decomposability or submodularity. These properties allow us to formulate optimization problems that can often be solved efficiently. This course is designed to give a graduate-level student a thorough grounding in the formulation of optimization problems that exploit such structure and in efficient solution methods for these problems. The course focuses mainly on the formulation and solution of convex and combinatorial optimization problems. These general concepts will also be illustrated through applications in machine learning, AI, computer vision and robotics.

Students entering the class should have a pre-existing working knowledge of algorithms, though the class has been designed to allow students with a strong numerate background to catch up and fully participate. Though not required, having taken 10-701 or an equivalent machine learning class will be helpful, since we will use applications in machine learning and AI to demonstrate the concepts we cover in class.

Announcement Emails
• Class announcements will be broadcast using a group email list:
• Please subscribe to the 10725-announce list page.
• For changes (incl. additions or removal) to your membership in the course list, please make changes directly via the list administration page.

Discussion Group
• We have a discussion group where you can post questions, discuss issues, and interact with fellow students.
Please join the group and check in often:

• Optional textbook: Introduction to Linear Optimization, Dimitris Bertsimas and John N. Tsitsiklis.
• Optional textbook: Combinatorial Optimization: Algorithms and Complexity, Christos Papadimitriou and Kenneth Steiglitz.
• Optional textbook: Combinatorial Optimization, Alexander Schrijver.
• Optional textbook: Nonlinear Programming, Dimitri Bertsekas.
• Optional textbook: Approximation Algorithms, Vijay Vazirani

• Homeworks (5 assignments, 50%)
• Final project (30%)
• Final exam (20%) - Out: Monday, May 5; Due: Friday, May 9 by Noon

• We don't know for sure yet whether we will be able to allow auditors. If you are considering auditing, you should attend the first class. In any case, students wishing to audit must register to audit the class. To satisfy the auditing requirement, you must either:
  □ Do *two* homeworks, and get at least 75% of the points in each; or
  □ Take the final, and get at least 50% of the points; or
  □ Do a class project and do *one* homework, and get at least 75% of the points in the homework
    ☆ Like any class project, it must address a topic related to machine learning and you must have started the project while taking this class (can't be something you did last semester). You will need to submit a project proposal with everyone else, and present a poster with everyone. You don't need to submit a milestone or final paper. You must get at least 80% on the poster presentation part of the project.
  □ Please send us an email saying that you will be auditing the class and what you plan to do.
• If you are not a student and want to sit in the class, please get authorization from the instructors.

Homework policy

Important Note: As we often reuse problem set questions from previous years, or problems covered by papers and webpages, we expect the students not to copy, refer to, or look at the solutions in preparing their answers.
Since this is a graduate class, we expect students to want to learn and not google for answers. The purpose of problem sets in this class is to help you think about the material, not just give us the right answers. Therefore, please restrict attention to the books mentioned on the webpage when solving problems on the problem set. If you do happen to use other material, it must be acknowledged clearly with a citation on the submitted solution.

Collaboration policy

Homeworks will be done individually: each student must hand in their own answers. In addition, each student must write their own code in the programming part of the assignment. It is acceptable, however, for students to collaborate in figuring out answers and helping each other solve the problems. We will be assuming that, as participants in a graduate course, you will be taking the responsibility to make sure you personally understand the solution to any work arising from such collaboration. In preparing your own writeup, you should not refer to any written materials from a joint study session. You also must indicate on each homework with whom you collaborated. The final project may be completed individually or in teams of two students.

Late homework policy

• Homeworks are due at the beginning of class, unless otherwise specified.
• You will be allowed 3 total late days without penalty for the entire semester. For instance, you may be late by 1 day on three different homeworks or late by 3 days on one homework. Each late day corresponds to 24 hours or part thereof. Once those days are used, you will be penalized according to the policy below:
  □ Homework is worth full credit at the beginning of class on the due date.
  □ It is worth half credit for the next 48 hours.
  □ It is worth zero credit after that.
• You must turn in all of the 5 homeworks, even if for zero credit, in order to pass the course.
• Turn in all late homework assignments to Michelle (Wean Hall 4619) ().
Homework regrades policy

If you feel that we have made an error in grading your homework, please turn in your homework with a written explanation to Monica, and we will consider your request. Please note that regrading of a homework may cause your grade to go up or down.

Final project

• Project proposal due date TBD.
• Graded milestone due date TBD (20% of project grade)
• Poster session, May 1st 3-6pm in the NSH Atrium (20% of project grade)
• Paper due date TBD (via electronic submission to the instructors list) (60% of project grade)

For the project milestone, roughly half of the project work should be completed. A short, graded write-up will be required, and we will provide feedback.

Note to people outside CMU

Feel free to use the slides and materials available online here. If you use our slides, an appropriate attribution is requested. Please email the instructors with any corrections or improvements.
Boylston Precalculus Tutor

Find a Boylston Precalculus Tutor

...I will then analyze the test results and develop an individualized study plan for the student. I taught for four years at The Willow Hill School, Sudbury, MA, where I worked exclusively with students with learning disabilities. The majority of these students were diagnosed with ADD or ADHD. In t...
31 Subjects: including precalculus, reading, chemistry, English

...I have tutored many students preparing for the GRE's. I took the GRE exam myself and scored 800 on the math section and 740 verbal. I work with students in the three areas that the GRE test: quantitative, verbal and writing.
29 Subjects: including precalculus, calculus, geometry, GRE

...I enjoy working with students who are motivated but need a little help to understand the subject at hand. I'm very good at explaining hard concepts or problems using easy to understand and every day examples. I'm patient with my students and experienced in helping them improve their grades in s...
11 Subjects: including precalculus, calculus, geometry, algebra 1

...In addition to undergraduate level linear algebra, I studied linear algebra extensively in the context of quantum mechanics in graduate school. I continue to use undergraduate level linear algebra in my physics research. I use MATLAB routinely in my research.
16 Subjects: including precalculus, calculus, physics, geometry

...I have experience teaching in 1st, 3rd, 4th, 5th, and 7th grade classrooms. I have tutored students in math from 1st grade through college level, including standardized testing. I have my teaching certification in Elementary Education (1-6) as well as in Middle School Math (5-8), and I have passed the license test for high school math.
13 Subjects: including precalculus, geometry, algebra 1, SAT math
Patent US4084255 - Positional, rotational and scale invariant optical correlation method and apparatus

The present invention relates generally to optical pattern recognition systems, and, more particularly, to optical correlation apparatus and methods utilizing transformations that are shift, scale and rotationally invariant.

In the correlation of 2-D information, the signal-to-noise ratio of the correlation peak decreases significantly when there are scale and rotational differences in the data being compared. For example, in one case of a 35 mm transparency of an aerial image with about 5 to 10 lines/mm resolution, this ratio decreased from 30 dB to 3 dB with a 2 percent scale change, and by a similar amount with a 3.5° rotation.

Several methods have been advanced for overcoming the signal losses associated with the scale, shift and rotational discrepancies encountered in optical comparison systems. One proposed solution involves the storage of a plurality of multiplexed holographic spatial filters of the object at various scale changes and rotational angles. Although theoretically feasible, this approach suffers from a severe loss in diffraction efficiency, which is proportional to the square of the number of stored filters. In addition, a precise synthesis system is required to fabricate the filter bank, and a high storage density recording medium is needed.

A second proposed solution involves positioning the input behind the transform lens. As the input is moved along the optic axis, the transform is scaled. Although useful in laboratory situations, this method is only appropriate for comparatively small scale changes, i.e., 20 percent or less. Also, since this method involves mechanical movement of components, it cannot be employed in those applications where the optical processor must possess a real time capability.

Mechanical rotation of the input can, of course, be performed to compensate for orientation errors in the data being compared.
However, the undesirable consequences of having to intervene in the optical system are again present.

In applicants' co-pending application, Ser. No. 707,977, filed July 23, 1976, there are disclosed correlation methods and apparatus which use Mellin transforms that are scale and shift invariant to compensate for scale differences in the data being compared. The systems therein disclosed, however, do not compensate for orientation errors in this data.

It is, accordingly, an object of the present invention to provide a transformation which is invariant to shift, scale and orientational changes in the input.

Another object of the present invention is to provide an optical correlation method and apparatus for use with 2-D data having shift, scale and rotational differences.

Another object of the present invention is to provide a method of cross-correlating two functions which are scaled and rotated versions of one another where the correlation peak has the same signal-to-noise ratio as the autocorrelation peak.

Another object of the present invention is to provide an electro-optic correlator whose performance is not degraded by scale and orientational differences in the data being compared and which provides information indicative of the magnitudes of these differences.

Other objects, advantages and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the accompanying drawings wherein:

FIG. 1 is a block diagram illustrating a positional, rotational and scale invariant transformation system;
FIG. 2 is a block diagram illustrating the real time implementation of the transformation of FIG. 1;
FIG. 3 shows the sequence of operations carried out in the cross-correlation method of the present invention;
FIG. 4 shows a correlation configuration for practicing the method of FIG. 3; and
FIG. 5 shows the correlation peaks appearing in the output plane of the correlator of FIG. 4.
The present invention provides a solution for the shift, scale and rotational differences between the input and reference data by utilizing a transformation which is itself invariant to shift, scale and orientational changes in the input. As shown in FIG. 1, the first step in the synthesis of such a transformation is to form the magnitude of the Fourier transform $|F(\omega_x,\omega_y)|$ of the input function $f(x,y)$. This eliminates the effects of any shifts in the input and centers the resultant light distribution on the optical axis of the system. Any rotation of $f(x,y)$ rotates $|F(\omega_x,\omega_y)|$ by the same angle. However, a scale change in $f(x,y)$ by a factor $a$ scales $|F(\omega_x,\omega_y)|$ by $1/a$.

The effects of rotation and scale changes in the light distribution resulting from the Fourier transform of $f(x,y)$ can be separated by performing a polar transformation on $|F(\omega_x,\omega_y)|$ from $(\omega_x,\omega_y)$ coordinates to $(r,\theta)$ coordinates. Since $\theta = \tan^{-1}(\omega_y/\omega_x)$ and $r = (\omega_x^2+\omega_y^2)^{1/2}$, a scale change in $|F|$ by $a$ does not affect the $\theta$ coordinate and scales only the $r$ coordinate. Consequently, a 2-D scaling of the input function is reduced to a scaling in only one dimension, the $r$ coordinate, in the transformed function $F(r,\theta)$.

If a 1-D Mellin transform in $r$ is now performed on $F(r,\theta)$, a completely scale invariant transformation results. This is due to the scale invariant property of the Mellin transform. The 1-D Mellin transform of $F(r,\theta)$ in $r$ is given by

$$M(\omega_\rho,\theta) = \int_0^\infty F(r,\theta)\, r^{-j\omega_\rho - 1}\, dr, \qquad (1)$$

where $\rho = \ln r$. The Mellin transform of the scaled function $F''(r,\theta) = F(r/a,\theta)$, corresponding to a scale change of the input by $a$, is then

$$M'(\omega_\rho,\theta) = a^{-j\omega_\rho}\, M(\omega_\rho,\theta), \qquad (2)$$

from which the magnitudes of the two transforms are seen to be identical.

One arrangement for optically implementing the Mellin transform is disclosed in applicants' co-pending application, above-identified, and there it is shown that

$$M(\omega_\rho,\theta) = \int_{-\infty}^{\infty} F(e^{\rho},\theta)\, e^{-j\omega_\rho \rho}\, d\rho, \qquad (3)$$

where $\rho = \ln r$.
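The scale invariance asserted here (the magnitudes of the Mellin transforms of a function and its scaled version are identical) is easy to check numerically: sample the function on a uniform grid in ρ = ln r and take a 1-D FFT, which is precisely the "logarithmic scaling followed by a Fourier transform" recipe. Below is a sketch with a made-up Gaussian test function; the grid and the scale factor are arbitrary illustrative choices, not parameters from the patent:

```python
import numpy as np

# Uniform grid in rho = ln r; on this grid the Mellin transform in r
# becomes an ordinary Fourier transform in rho.
rho = np.linspace(-8.0, 8.0, 1024)

def mellin_magnitude(scale):
    # g(rho) = F(e^rho / scale) for the test function F(r) = exp(-(ln r - 1)^2).
    # Scaling r by `scale` is just a shift in rho, which |FFT| cannot see.
    g = np.exp(-((rho - np.log(scale)) - 1.0) ** 2)
    return np.abs(np.fft.fft(g))

m1 = mellin_magnitude(1.0)   # original function
m2 = mellin_magnitude(2.5)   # same function with r scaled by a = 2.5

print(np.max(np.abs(m1 - m2)))   # effectively zero
```

The test function decays to nothing well inside the ρ window, so the wrap-around error of treating the shift as circular is negligible and the two magnitude spectra agree to numerical precision.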
From equation (3), it can be seen that the realization of the required optical Mellin transform simply requires a logarithmic scaling of the $r$ coordinate followed by a 1-D Fourier transform in $\rho$. This follows from equation (3) since $M(\omega_\rho,\theta)$ is the Fourier transform of $F(e^{\rho},\theta)$.

The rotation of the input function $f(x,y)$ by an angle $\theta_0$ will not affect the $r$ coordinate in the $(r,\theta)$ plane. If, for example, the input $F(\omega_x,\omega_y)$ is partitioned into two sections $F_1(\omega_x,\omega_y)$ and $F_2(\omega_x,\omega_y)$, where $F_2$ is a segment of $F$ that subtends an angle $\theta_0$, the effect of a rotation by $\theta_0$ is an upward shift in $F_1(r,\theta)$ by $\theta_0$ and a downward shift in $F_2(r,\theta)$ by $2\pi - \theta_0$. Thus, while the polar transformation has converted a rotation in the input to a shift in the transform space, the shift is not the same for all parts of the function.

These shifts in $F(r,\theta)$ space due to a rotation in the input can be converted to phase factors by performing a 1-D Fourier transform in $\theta$ on $F(r,\theta)$. The final Fourier transform shown in FIG. 1 is thus a 2-D transform: the Fourier transform in $\rho$ is accomplished to effect scale invariance by the Mellin transform, and the Fourier transform in $\theta$ is used to convert the shifts due to $\theta_0$ to phase terms. The resultant function is thus a Mellin transform in $r$, and hence it is denoted by $M$ in FIG. 1. If the complete transformation of $f(x,y)$ is represented by

$$M(\omega_\rho,\omega_\theta) = M_1(\omega_\rho,\omega_\theta) + M_2(\omega_\rho,\omega_\theta), \qquad (4)$$

the transformation of the function $f'(x,y)$, which is scaled by $a$ and rotated by $\theta_0$, is given by

$$M'(\omega_\rho,\omega_\theta) = M_1(\omega_\rho,\omega_\theta)\,\exp[-j(\omega_\rho \ln a + \omega_\theta \theta_0)] + M_2(\omega_\rho,\omega_\theta)\,\exp\{-j[\omega_\rho \ln a - \omega_\theta (2\pi - \theta_0)]\}. \qquad (5)$$
The positional, rotational and scale invariant (PRSI) correlation is based on the form of equations (4) and (5). If the product $M^{*}M'$ is formed, we obtain

$$M^{*}M' = M^{*}M_1\,\exp[-j(\omega_\rho \ln a + \omega_\theta \theta_0)] + M^{*}M_2\,\exp\{-j[\omega_\rho \ln a - \omega_\theta (2\pi - \theta_0)]\}. \qquad (6)$$

The Fourier transform of equation (6) is

$$f_1 \ast f \ast \delta(\rho' - \ln a) \ast \delta(\theta' - \theta_0) + f \ast f_2 \ast \delta(\rho' - \ln a) \ast \delta(\theta' + 2\pi - \theta_0). \qquad (7)$$

The $\delta$ functions in equation (7) identify the locations of the correlation peaks, one at $\rho' = \ln a$, $\theta' = \theta_0$; the other at $\rho' = \ln a$, $\theta' = \theta_0 - 2\pi$. Consequently, the $\rho'$ coordinate of the peaks is proportional to the scale change $a$ and the $\theta'$ coordinate is proportional to the rotational angle $\theta_0$.

The Fourier transform of equation (6) thus consists of two terms: (a) the cross-correlation $F_1(e^{\rho},\theta) \ast F(e^{\rho},\theta)$ located, as indicated above, at $\rho' = \ln a$ and $\theta' = \theta_0$; (b) the cross-correlation $F_2(e^{\rho},\theta) \ast F(e^{\rho},\theta)$ located at $\rho' = \ln a$ and $\theta' = \theta_0 - 2\pi$, where the coordinates of this output Fourier transform plane are $(\rho',\theta')$. If the intensities of these two cross-correlation peaks are summed, the result is the autocorrelation of $F(e^{\rho},\theta)$. Therefore, the cross-correlation of two functions that are scaled and rotated versions of one another can be obtained. Most important, the amplitude of this cross-correlation will be equal to the amplitude of the autocorrelation function itself.

Referring now to FIG. 2, which illustrates one electro-optical arrangement for implementing the positional, rotational and shift invariant transformation, the input $f(x,y)$, which may be recorded on a suitable transparency 20 or available in the form of an appropriate transmittance pattern on the target of an electron-beam-addressed spatial light modulator of the type described in the article "Dielectric and Optical Properties of Electron-Beam-Addressed KD$_2$PO$_4$" by David Casasent and William Keicher, which appeared in the December 1974 issue of the Journal of the Optical Society of America, Volume 64, Number 12, is here illuminated with a coherent light beam from a suitable laser (not shown) and Fourier transformed by a spherical lens 21. A TV camera 22 is positioned in the back focal plane of this lens and arranged such that the magnitude of the Fourier transform $|F(\omega_x,\omega_y)|$ constitutes the input image to this camera. As is well known, camera 22 has internal control circuits which generate the horizontal and vertical sweep voltages needed for the electron beam scanning, and these waveforms are extracted at a pair of output terminals as signals $\omega_x$ and $\omega_y$. The video signal developed by scanning the input image is also available at a third output terminal.

Horizontal and vertical sweep voltages $\omega_x$ and $\omega_y$ are subjected to signal processing in the appropriate circuits 23 and 24 to yield the quantities $\tfrac{1}{2}\ln(\omega_x^2 + \omega_y^2)$ and $\tan^{-1}(\omega_y/\omega_x)$, respectively. It will be recalled that the results of this signal processing, which may be performed in an analog or digital manner, are the polar coordinate transformation of the magnitude of the Fourier transform of the input function and its subsequent logarithmic scaling in $r$. The function $F(e^{\rho},\theta)$ is formed on the target of an EALM tube of the type hereinbefore referred to.
In this regard, the video signal from camera 22 modulates the beam current of this tube while the voltages from circuits 23 and 24 control the deflection of the electron beam. Instead of utilizing an electron-beam-addressed spatial light modulator, an optically addressed device may be used, wherein the video signal modulates the intensity of the laser beam while deflection system 25 controls its scanning motion. It should also be mentioned that the transformation can be accomplished by means of computer generated holograms. The function $M(\omega_\rho,\omega_\theta)$ is obtained by Fourier transforming $F(e^{\rho},\theta)$, and this may be accomplished by illuminating the target of the EALM tube with a coherent light beam and performing a 2-D Fourier transform of the image pattern.

FIG. 3 shows the sequence of steps involved in correlating two functions $f_1(x,y)$ and $f_2(x,y)$ that differ in position, scale and rotation. It should be mentioned that this method may be implemented by optical or digital means. Thus, all of the operations hereinafter set forth may be performed with a digital computer. However, the following description covers the optical process since it has greater utility in real time optical pattern recognition systems.

The first step of the method is to form the magnitude of the Fourier transform of both functions, $|F_1(\omega_x,\omega_y)|$ and $|F_2(\omega_x,\omega_y)|$. This may be readily accomplished, as is well known, with a suitable lens and an intensity recorder with $\gamma = 1$. Next, a polar coordinate conversion of these magnitudes is performed to produce $F_1(r,\theta)$ and $F_2(r,\theta)$. The $r$ coordinate of these functions is now logarithmically scaled to form $F_1(e^{\rho},\theta)$ and $F_2(e^{\rho},\theta)$. A second Fourier transform is carried out to produce the Mellin transform of $F(r,\theta)$ in $r$ and the Fourier transform in $\theta$, the resultant functions being $M_1(\omega_\rho,\omega_\theta)$ and $M_2(\omega_\rho,\omega_\theta)$.
The conjugate Mellin transform of F(r,θ), which is M[1]*(ω[ρ],ω[θ]), is formed and, for example, recorded as a suitable transparency. This can be readily accomplished by conventional holographic spatial filter synthesis methods which involve Fourier transforming F[1](e^ρ,θ) and recording the light distribution pattern produced when a plane wave interferes with this transformation. The correlation operation involves locating the function F[2](e^ρ,θ) at the input plane of a conventional frequency plane correlator and positioning the conjugate Mellin transform recording M[1]*(ω[ρ],ω[θ]) at the frequency plane. The light distribution pattern leaving the frequency plane when the input plane is illuminated with a coherent light beam has as one of its terms M[1]*M[2], and this product when Fourier transformed completes the cross-correlation process. The correlation of the two input functions in this method appears as two cross-correlation peaks, and the sum of their intensities is equal to the autocorrelation peak. Thus, the correlation is performed without loss in the signal-to-noise ratio. As mentioned hereinbefore, the coordinates of these cross-correlation peaks, as shown in FIG. 5, provide an indication of the scale difference, "a", and the amount of rotation, θ[0], between the two functions. FIG. 4 shows a frequency plane correlator for forming the conjugate Mellin transform M[1]*(ω[ρ],ω[θ]) and for performing a cross-correlation operation utilizing a recording of this transform. In applicants' co-pending application, identified hereinbefore, there is disclosed a procedure for producing a hologram corresponding to this conjugate Mellin transform, and, as noted therein, the process involves producing at the input plane P[0] an image corresponding to the function F[1](e^ρ,θ). This image may be present on the target of an EALM tube as an appropriate transmittance pattern. Alternatively, it may be available as a suitable transparency.
In any event, the input function is illuminated with a coherent light beam from a laser source, not shown, and Fourier transformed by lens L[1]. Its transform M[1](ω[ρ],ω[θ]) is interfered with a plane reference wave which is incident at an angle Ψ, and the resultant light distribution pattern is recorded. One of the four terms recorded at plane P[1] will be proportional to M[1]*(ω[ρ],ω[θ]). In carrying out the correlation, the reference beam is blocked out of the system. The hologram corresponding to the conjugate Mellin transform M[1]*(ω[ρ],ω[θ]) is positioned in the back focal plane of lens L[1] at plane P[1]. The input image present at plane P[0] now corresponds to the function F[2](e^ρ,θ), which again may be the transmittance pattern on an EALM tube or a suitable transparency. When the coherent light beam illuminates the input plane P[0], the light distribution incident on plane P[1] is M[2](ω[ρ],ω[θ]). One term in the distribution leaving plane P[1] will, therefore, be M[2]M[1]*, and the Fourier transform of this product is accomplished by lens L[2]. In the output plane P[2], as shown in FIG. 5, two cross-correlation peaks occur. Two photodetectors spaced by 2π may be utilized to detect these peaks and, as indicated hereinbefore, the sum of their amplitudes will be equal to the autocorrelation peak produced when the two images being compared have the same position, scale and orientation. In the correlation method depicted in FIG. 3, the conjugate Mellin transform M[1]*(ω[ρ],ω[θ]) was produced and utilized in the frequency plane of the correlator of FIG. 4. However, it should be understood that the method can also be practiced by utilizing the conjugate Mellin transform M[2]*(ω[ρ],ω[θ]) at P[1], forming the product M[1]M[2]*, and Fourier transforming it to complete the cross-correlation process.
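The heart of the method, namely that a scale change becomes a shift after log sampling and that the shift can be read off from the position of the correlation peak, can be demonstrated in one dimension. This is only a numerical illustration of the principle with parameters of my own choosing, not the two-dimensional optical correlator of FIG. 4.

```python
import numpy as np

# Uniform grid in rho = ln x: scaling x -> a*x becomes a shift in rho.
n = 512
rho = np.linspace(-4.0, 4.0, n)
drho = rho[1] - rho[0]

def f(x):
    # a smooth test pattern centered near ln x = 0.5 (illustrative choice)
    return np.exp(-0.5 * ((np.log(x) - 0.5) / 0.3) ** 2)

a = np.exp(25 * drho)           # scale factor chosen to be an exact grid shift
g1 = f(np.exp(rho))             # f sampled on the log grid: g1(rho) = f(e^rho)
g2 = f(a * np.exp(rho))         # scaled copy: g2(rho) = g1(rho + ln a)

# Cross-correlate via FFT (the digital analogue of forming M1* M2 and
# Fourier transforming); the peak position recovers ln a.
c = np.fft.ifft(np.fft.fft(g1) * np.conj(np.fft.fft(g2)))
shift = int(np.argmax(np.abs(c)))
print(shift, round(shift * drho / np.log(a), 3))   # 25 1.0
```

The peak lands 25 samples from the origin, and 25·Δρ equals ln a exactly, so the unknown scale factor is recovered from the correlation peak alone.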
"The mind is not a vessel to be filled but a fire to be kindled." Plutarch.
Frank Potter is my name. Physics is my profession. And I thoroughly enjoy struggling with Nature's challenges to us humans who are trying to understand the most fundamental aspects of the natural world. The 4-D sphere in the image above plays a key role in my research into the nature of leptons and quarks, the fundamental particles of Nature. Below, I tell you about some of the physics research I am doing.
Leptons and Quarks
I spent much of the last thirty years working on a project which I created for myself: to understand what underlying mechanisms are responsible for the leptons and the quarks that comprise all things around us. I think that I have successfully answered the challenge by connecting the quarks to the symmetries of the regular polytopes in a real 4-D space and the leptons to the symmetries of the regular polyhedrons in the 3-D subspace. This approach uses the Standard Model and enlists the finite rotational subgroups of its Lie group.
*** In 2012 the acid test of the ideas may be achievable because the LHC collider at CERN will be able to produce many times more collisions per second than were available before at Fermilab, and I would expect dozens of FCNC b' quark decays into b + gamma and b + gluon to make their appearance. In 2010 and 2011 there were some indications of two jets being produced with a total energy around 120 - 150 GeV, which may be b' quarks, or it may be the Higgs particle. No definitive statements can be made yet. As of 2013, still no b' quark production, although there might be a remote possibility that a b' bound state exists at around 125 GeV to 135 GeV, hidden under the boson bump in that energy region.
Some important predictions of this modification of the Standard Model (SM) are:
1. Leptons and quarks are related through the finite rotational symmetry groups in the 3-D and 4-D real spaces that are part of the 2-D complex space called the unitary plane.
The unitary plane is the only space mentioned explicitly in the SM, and the reduction of its Lie Group SU(3)C x SU(2)L x U(1)Y to the finite rotational groups is contrary to expectations.
2. 4 quark families and 3 lepton families are predicted, with the particles no longer considered to be point particles. Hence there is reduced concern over anomaly cancellations, which would require equal numbers of families. As a result of 3-D states for the lepton families, I have derived the neutrino PMNS mixing matrix from first principles, available online at Progress in Physics: Geometrical Derivation of the Lepton PMNS Matrix Values (2013).
3. A b' quark at 80 GeV and a t' quark at 2600 GeV should exist, both members of the 4th quark family. This b' quark at 80 GeV is the acid test for my modification, and I expect it to show up at the colliders soon. One problem is their longer-than-normal lifetime because of the FCNC type of decay, so most may escape the detector.
4. Neutrinos and antineutrinos are distinct particle states. At least one neutrino has a mass > 0 eV in order to solve the solar neutrino problem. By taking linear superposition states, neutrinos can have a non-zero mass.
5. No Higgs boson is required. The actual symmetry groups are finite rotational groups, so one does not need to introduce the artificial Higgs mechanism, because the symmetry breaks to these discrete groups, which are also the source of electroweak symmetry breaking. I.e., the Standard Model gauge group is an excellent approximation to these finite rotational groups for the different families.
6. Quark states are defined in a 4-D real space while lepton states are defined in the 3-D subspace. This spatial demarcation distinguishes leptons from quarks and may be the origin of baryon number. Quark confinement follows also because they are 4-D states that cannot exist in 3-D space.
7. All mass ratios arise from the numerical invariants of the symmetry groups of the polytopes and polyhedrons.
The invariants are related to the absolute invariant J of elliptic functions, so any Hamiltonian must incorporate the symmetries of the appropriate elliptic functions for each finite rotational group. Linear superposition of the two degenerate states in each finite rotational group leads to the two physical states in each family.
8. Left-handedness and parity violation are dictated by the mathematical properties of the general rotation in the unitary plane whenever a W or Z boson acts. There is no choice here, and all left-right symmetric approaches are eliminated. Also, two particle states per family are required, eliminating approaches with one additional particle state.
9. The three color charges for quarks represent the three pairs of rotation planes for a true 4-D rotation. If correct, lepton states must exist in the 3-D subspace as proposed, so they do not have color charge. One can also define all meson and baryon states in the 3-D subspace formed mathematically by intersecting 4-D entities.
10. No more leptons or quarks will appear beyond the four quark families and three lepton families predicted here. No more finite rotational groups are available in the 3-D and 4-D subspaces of the unitary plane. In fact, leptons, quarks and the interaction bosons may be the only fundamental particles!
11. These fundamental particles and all fundamental physics rules are dictated by the Fischer-Greiss Monster Group. The details are outlined in my 2011 paper titled "Our Mathematical Universe: I. How the Monster Group Dictates All of Physics" in the online journal Progress in Physics: Our Mathematical Universe: I. How the Monster Group Dictates All of Physics (2011)
12. The particles of the Standard Model and their interactions are encoded by information theory. There are 72 fundamental particle states - 30 lepton and quark states (6 leptons and 8 x 3 colors of quarks = 30) plus 30 antiparticle states plus 12 interaction bosons - for a grand total of 72 particles.
There are exactly 72 available Golay-24 code words. For more information, please read my original paper, "Geometrical Basis for the Standard Model," International Journal of Theoretical Physics, Vol. 33 (1994), pp. 279-305, which is available at Geometrical Basis for the Standard Model (1994).
Connection to Superstrings in a discrete spacetime:
A recent (2005) follow-up paper Leptons and Quarks in a Discrete Spacetime as well as a published article (2006) Unification of Interactions in Discrete Spacetime are now available (in PDF format) that show how to combine the finite subgroups of the Standard Model from the original paper above with finite subgroups of the Lorentz group SO(3,1) to form Weyl E8 x Weyl E8, a finite subgroup of SO(9,1) in discrete 10-D spacetime, unifying all the fundamental interactions. Its relationship to a discrete PSL(2,O) and to the continuous group E8 x E8 of superstrings and M-theory is discussed. Consequently, spacetime is discrete, and there can be only one universe and one set of physical constants because the result is unique. The discovery of the b' quark decaying to a b quark and an energetic photon at the LHC is the definitive experiment.
A short 2-minute Quicktime summary (2007) of several of the key concepts is available to whet your appetite at Let's Fix String Theory!. The video provides the "big picture" of how the unification of the fundamental interactions in discrete spacetime occurs in a unique way, leaving "no choice in the creation of the world".
Presentation slides in pdf format for my talk at the DISCRETE '08 Conference (2008) in Valencia, Spain, with numerous images to help in conceptual understanding of the advantages of having finite rotation groups for the leptons and quarks, plus the unification in 4-D discrete space and 10-D discrete spacetime, are available at Finite Rotation Group approach.
The details of connections to the Monster group and to coding theory are outlined in my 2011 paper titled "Our Mathematical Universe: I.
How the Monster Group Dictates All of Physics" in the online journal Progress in Physics: Our Mathematical Universe: I. How the Monster Group Dictates All of Physics (2011).
Further progress has been achieved in 2013 toward verifying my geometrical approach because I have derived the neutrino PMNS mixing matrix from first principles, available online at Progress in Physics: Geometrical Derivation of the Lepton PMNS Matrix Values (2013).
Quantum Celestial Mechanics (QCM)
With general relativity physicist Howard Preston (deceased 2011), in 2003 we posted a paper at the arXiv which has a new approach to understanding the states of gravitationally bound systems such as solar systems, galaxies, and the Universe. By knowing only the total mass and the total angular momentum of the system, all its quantization states are known. Predictions include angular momentum quantization states for orbiting bodies of mass M and angular momentum L obeying L/M = m L_T/M_T, where m is an integer, M_T is the total mass of the gravitationally bound system, and L_T is the total angular momentum of the system.
Several of our recent papers are online:
Kepler-47 Circumbinary Planets obey Angular Momentum Quantization per Unit Mass predicted by Quantum Celestial Mechanics (QCM) (2014), in which the 3 known planets orbiting a binary star system are analyzed and shown to fit the QCM constraint;
Multi-planet Exosystems All Obey Orbital Angular Momentum Quantization per Unit Mass predicted by Quantum Celestial Mechanics (QCM) (2013), in which 15 multiple planet systems are analyzed and shown to fit the constraint;
Pluto Moons exhibit Orbital Angular Momentum Quantization per Mass (2012), which shows that the known 5 moons of Pluto obey the QCM predictions;
Galaxy S-Stars exhibit Orbital Angular Momentum Quantization per Unit Mass (2012), which shows that the innermost 27 well-measured S-stars of our Galaxy (Milky Way) obey QCM predictions even though they have seemingly random orbital planes.
Kepler-16 Circumbinary System Validates Quantum Celestial Mechanics (2012) shows how this angular momentum quantization condition produces m = 10 to within 1% for the only system found so far for which the pertinent values are known to within 1%. The Kepler-34 and Kepler-35 systems have 4% uncertainties in values and they produce m = 9 and m = 4, respectively. The acid test for QCM would appear to be finding another planet orbiting Kepler-16 and calculating its quantization integer.
QCM has a wave equation which, in the Schwarzschild metric, predicts quantization states that agree extremely well with the actual states of the Solar System, the Galaxy (without requiring dark matter!), and clusters of galaxies. With the interior metric, QCM predicts that the redshifts of supernovae SNe1a are actually gravitational redshifts and are not due to space expansion. In other words, we have discovered a repulsive gravitational effect that helps keep planets in quantized orbits, the stars revolving around the galactic center in quantization states, and the Universe in a static equilibrium.
More details can be perused in these papers:
Exploring Large-scale Gravitational Quantization without h-bar in Planetary Systems, Galaxies, and the Universe (2003)
Gravitational Lensing by Galaxy Quantization States (2004)
Quantization State of Baryonic Mass in Clusters of Galaxies (2007)
Cosmological Redshift Interpreted as Gravitational Redshift (2007)
Other physics research:
• I am continuing to think about the origin of time and the reason for the obvious particle-antiparticle asymmetry in the universe. My present conjecture is that time is important for leptons and hadrons, which are 3-D entities in my geometric approach above, and that quarks do not experience time because they are 4-D entities – the 4th dimension being needed for time. The singular direction of time is probably associated with the general transformation in the complex 2-D space in which we all live!
• Professor Joseph Weber (deceased) had some great ideas about neutrino detection techniques that continue to fascinate me. If neutrino total cross-sections can be increased by his factor of about 10^24, wow!
• Weber bars used for gravitational wave detectors certainly have an interesting history. With two independent nearly identical Weber bars placed far apart, Weber claims that they responded in almost identical fashion to the Supernova 1987A event. If his ideas about their quantum mechanical responses are correct, then these bars detected gravitational waves. Perhaps we'll know more in the near future.
• I also find that the 1994 x-ray laser claims by K. DasGupta for a Ni K-alpha laser at 1.658 Angstroms should be checked out thoroughly. Something amazing is happening!
What me worry? If you would like to discuss any of these ideas with me and your age is between 8 and 80, my e-mail address is given below.
Frank Potter (drpotter@lycos.com)
Copyright © 1994-2012, All Rights Reserved, Frank Potter
On triple crowns in mathematics and AMS badges
As some of you figured out from the previous post, my recent paper (joint with Martin Kassabov) was accepted to the Annals of Mathematics. This being one of my childhood dreams (well, a version of it), I was elated for a few days. Then I thought – normal children don’t dream about this kind of stuff. In fact, we as a mathematical community have only community awards (as in prizes, medals, etc.) and have very few “personal achievement” benchmarks. But, of course, they are crucial for the “follow your dreams” approach to life (popularized famously in the Last Lecture). How can we make it work in mathematics?
I propose we invent some new “badges/statistics” which can be “awarded” by AMS automatically, based on the list of publications, and noted in the MathSciNet Author’s Profile. The awardees can then proudly mention them on the department websites, they can be included in Wikipedia entries of these mathematicians, etc. Such statistics are crucial everywhere in sports, and most are individual achievements. Some were even invented to showcase a particular athlete. So I thought – we can also do this. Here is my list of proposed awards. Ok, it’s not very serious… Enjoy!
Triple Crown in Mathematics
A paper in each of Annals of Mathematics, Inventiones, and Journal of AMS. What, you are saying that “triple crown” is about horse racing? Not true. There are triple crowns in everything, from bridge to golf, from hiking to motor racing. Let’s add this one to the list.
Other Journal awards
Some (hopefully) amusing variations on the Triple Crown. They are all meant to be great achievements, something to brag about.
Marathon – 300 papers
Ultramarathon – 900 papers
Iron Man – 5 triple crown awards
Big Ten – 10 papers in journals where “University” is part of the title
Americana – 5 papers in journals whose title may only include US cities (e.g. Houston), states (e.g. Illinois, Michigan, New York), or other parts of American geography (such as Rocky Mountains, Pacific Ocean)
Foreign lands – 5 papers in journals named after non-US cities (e.g. Bordeaux, Glasgow, Monte Carlo, Moscow), and five papers in journals named after foreign countries.
Around the world – 5 papers in journals whose titles have different continents (Antarctica Journal of Mathematics does not count, but Australasian Journal of Combinatorics can count for either)
What’s in a word – 5 papers in single-word journals (e.g. Astérisque, Complexity, Configurations, Constraints, Entropy, Integers, Nonlinearity, Order, Positivity, Symmetry).
Decathlon – papers in 10 different journals beginning with “Journal of”.
Annals track – papers in 5 different journals beginning with “Annals of”.
I-heart-mathematicians – 5 papers in journals with names of mathematicians (e.g. Bernoulli, Fourier, Lie, Fibonacci, Ramanujan)
Publication badges
Now, imagine AMS awarded badges the same way MathOverflow does, i.e. in bulk and for both minor and major contributions. People would just collect them in large numbers, and perhaps spark controversies. But what would they look like?
Here is my take:
enthusiast (bronze) – published at least 1 paper a year, for 10 years (can be awarded every year when applicable)
fanatic (silver) – published at least 10 papers a year, for 20 years
obsessed (gold) – published at least 20 papers a year, for 30 years
nice paper (bronze) – paper has at least 2 citations
good paper (silver) – paper has at least 20 citations
great paper (gold) – paper has at least 200 citations
famous paper (platinum) – paper has at least 2000 citations
necromancer (silver) – cited a paper which has not been cited for 25 years
asleep at the wheel (silver) – published an erratum to own paper 10 years later
destroyer (silver) – disproved somebody’s published result by an explicit counterexample
peer pressure (silver) – retracted own paper, purchased and burned all copies, sent cease and desist letters to all websites which illegally host it
scholar (bronze) – at least one citation
supporter (bronze) – cited at least one paper
writer (bronze) – first paper
reviewer (bronze) – first MathSciNet review
self-learner (bronze) – solved own open problem in a later paper
self-citer (bronze) – first citation of own paper
self-fan (silver) – cited 5 own papers at least 5 times each
narcissist (gold) – cited 15 own papers at least 15 times each
enlightened rookie (silver) – first paper was cited at least 20 times
dry spell (bronze) – no papers for the past 3 years, but over 100 citations to older papers over the same period
remission (silver) – first published paper after a dry spell
soliloquy (bronze) – no citation other than self-citations for the past 5 years
drum shape whisperer (silver) – published two new objects with exactly the same eigenvalues
neo-copernicus (silver) – found a coordinate system to die for
gaussian ingenuity (gold) – found eight proofs of the same law or theorem
fermatist (silver) – published paper has a proof sketched on the margins
pythagorist (gold) – penned an unpublished and publicly unavailable preprint with over 1000 citations
homologist (platinum) – has a (co)homology named after them
dualist (platinum) – has a reciprocity or duality named after them
ghost-writer (silver) – published with a person who has been dead for 10 years
prince of nerdom (silver) – wrote a paper joint with a computer
king of nerdom (gold) – had a computer write a joint paper
sequentialist (gold) – authored a sequel of five papers with the same title
prepositionist (gold) – ten papers which begin with a preposition “on”, “about”, “toward”, or “regarding” (prepositions at the end of the title are not counted, but sneered at).
luddite (bronze) – paper originally written NOT in TeX or LaTeX.
theorist (silver) – the implied constant in O(.) notation in the main result is greater than 10^80.
conditionalist (silver) – main result is conditional on some known conjecture (not awarded in Crypto and Theory CS until the hierarchy of complexity classes is established)
ackermannist (gold) – main result used a function which grows faster than any finite tower of 2's.
What about you? Do you have any suggestions? :)
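And one more suggestion, in code: since the whole point is that these can be awarded automatically, here is a toy sketch of the per-paper citation badges (the function name and structure are mine, purely for fun):

```python
def paper_badges(citations):
    """Award the citation badges proposed above for a single paper."""
    thresholds = [("famous paper", 2000), ("great paper", 200),
                  ("good paper", 20), ("nice paper", 2)]
    # every threshold the paper clears earns the corresponding badge
    return [name for name, cutoff in thresholds if citations >= cutoff]

print(paper_badges(250))   # ['great paper', 'good paper', 'nice paper']
print(paper_badges(1))     # []
```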
Meter Stick Torque Beam, 1J40.20
Topic and Concept:
A meter stick, suspended at the center, is used as a torque balance. Different combinations of weights hung at different distances can keep the beam balanced, demonstrating how torque works.
Equipment | Location | ID Number
Torque Beam | ME, Bay C1, Shelf T |
Weights and Weight Hangers | ME, Bay BA, Shelf #5 |
3/4" Rod and clamps | Rod and tackle cabinet near main lecture halls |
Important Setup Notes:
Setup and Procedure:
1. Place apparatus on lecture bench with numbered side facing audience.
2. Behind the apparatus, mount a 3/4" rod to the bench with a table clamp.
3. To this vertical rod, attach another rod horizontally so that it sits above the apparatus, low enough so that the central loop of the beam can slip onto the rod (this is how we will suspend the beam).
4. From each of the outer loops, hang a weight hanger, and load up each with some amount of weight.
5. Choose the relative positions on the beam of the weights so that the beam will balance.
6. The moment of truth: unhook the chain from the stage left leg of the apparatus. Push the legs out to free the beam. If the previous two steps were completed appropriately, the beam will remain balanced. See photos below for some examples of weight - position combinations.
Cautions, Warnings, or Safety Concerns:
There are four torques acting on the beam: that due to gravity (weight of the beam), the tension of the suspension string, and each of the two weights. If we set our origin at the 50 cm mark, the torque due to gravity and the tension go to zero. For the torques to balance, giving us zero angular acceleration (keeping the beam balanced), we must have r[1]·W[1] = r[2]·W[2] -> r[1]·m[1] = r[2]·m[2] where the r's denote the distance from the center of the beam of each weight hanger and the m's denote the total mass hung on each weight hanger (since W = mg, the common factor g divides out, leaving the mass form).
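The balance condition can be checked with a quick computation; the following is a minimal sketch with illustrative numbers, not measured values from the actual demo:

```python
def balance_distance(m1, r1, m2):
    """Distance from center at which mass m2 balances mass m1 hung at r1.

    From r1*m1 = r2*m2 (the two torques about the suspension point cancel)."""
    return m1 * r1 / m2

# e.g. 200 g at 15 cm on one side balances 100 g at 30 cm on the other
r2 = balance_distance(200.0, 15.0, 100.0)
print(r2)                           # 30.0
print(200.0 * 15.0 == 100.0 * r2)   # torques match: True
```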
When n is divided by 12, the remainder is 6. What is the remainder when n is divided by 6?

According to first statement: n = 12q + 6. According to second statement: n = 6p + r. We are to find r..

I am not very sure on how you generated the statements? Please explain.

See, there is a formula: \[Dividend = Divisor \times Quotient + Remainder\]

If I say 4 divides 12 leaving quotient as 3 and 0: Then we can represent it as: \[12 = 4 \times 3 + 0\]

and *remainder as 0..

Ok then how is the Q(x) p and q

I have assumed them as p and q in those two cases because we don't know them..

@waterineyes what you have given is true but there are two eqns and 3 unknowns... am I wrong?

Yes we have 3 unknowns and this time I am remembering @mukushla because he is expert in finding 3 unknowns out of 1 equation given only.. Ha ha ha ha...

n = 12q + 6 = 6(2q + 1), so n is a multiple of 6 ...:) when a multiple of 6 is divided by 6... is there a remainder? :)

when divided by 6... remainder should be 0 !!!!!

tamtoan got the correct way..
i have options so we can put them and check?

18, 30, 42, 54 all leave remainder 6 when divided by 12.. So, you can easily see that they all are fully divided by 6, leaving remainder as 0..

Thanks to all........ I never think of bunchy equations but only facts... My bad.... i thought how it is possible to solve it... my bad,... but thanks I got it

^ options?

Now you guys have pretty confused me lol

@waterineyes Now tell me the math behind this :(

I think @hba will be able to comprehend the idea behind it when you again check the first response of waterineyes

Are you still not able to find the correct choice??

followed by tamtoan's first response

Well the answer is 0 but i could not understand the math behind it :(

Till here you got: \[n = 12q + 6\]

Now can you factor out 6 from that??

how to do that?

By using distributive property??
\[ab + ac = a(b+c)\]

Yep. Now if I say that \(n = 3\times 4\) then it means that 3 and 4 are the factors of n.. Okay??

so here the factors are 6 and 2q+1

Good going..

And you must remember that factors divide the number fully leaving NO REMAINDER BEHIND.. If \(n = 3 \times 4\) then 3 and 4 both will divide n fully leaving remainder as 0.. Getting, or more explanation you want??

See, here \(3 \times 4 = 12\) So here n = 12.. Now divide 12 by 3 first: Quotient - 4, Remainder - 0. Now divide 12 by 4: Quotient - 3, Remainder - 0. What did you get as remainder??

It means that factors of any number will divide it fully leaving No Remainder.. It is obvious hba..

More??

Still not getting, tell me @hba

So, 6/n=0 and 6/2q+1 = 0

what next?

You have to find remainder no??

That you have found I think... Why do you want to go next in this??

Sorry, remainder of n/6 will be 0.. And not 6/n..

Net got crashed??
Sorry, some issue with my internet connection. So the answer is 0 @waterineyes

Yes it is...
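The whole argument in the thread can be checked in a couple of lines (a sketch, not part of the original discussion):

```python
# Numbers that leave remainder 6 when divided by 12 have the form
# n = 12q + 6 = 6(2q + 1), so 6 divides them exactly.

def remainder_when_divided_by_6(q):
    n = 12 * q + 6      # n = 6 * (2q + 1)
    return n % 6        # always 0, since 6 is a factor of n

candidates = [12 * q + 6 for q in range(1, 5)]   # 18, 30, 42, 54
```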
{"url":"http://openstudy.com/updates/50a5f2d2e4b0329300a92643","timestamp":"2014-04-20T00:37:52Z","content_type":null,"content_length":"151183","record_id":"<urn:uuid:8b418a54-a651-4270-beb0-c39bbca31b4b>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
Pacoima Algebra 2 Tutor

...Brooke is also an SAT tutor who helped me increase my reading comprehension score from the low 600s to the high 700s. Also, she somehow makes the SAT fun." Student, La Canada, CA "Brooke is by far the most dedicated, enthusiastic, and devoted teacher I have ever had! She actually makes SAT tutoring sessions fun and engaging.
49 Subjects: including algebra 2, reading, writing, English

...I am certified in GMAT prep, as well as GRE, MCAT, and LSAT. I have scored above the 95th percentile on every standardized test I have taken. Standardized tests do not measure education or
63 Subjects: including algebra 2, chemistry, English, ASVAB

...I have a degree in Accounting from California State University Northridge. I have been working for a tax agency for the past 16 years. As an employee of a tax agency it is a conflict of interest for me to prepare any tax returns.
5 Subjects: including algebra 2, accounting, algebra 1, prealgebra

...My life's passion is helping others to learn and grow. I have expertise in many academic areas and a contagious passion for learning. I have tutoring experience with children (ages 5 and up) and
44 Subjects: including algebra 2, English, reading, writing

...My strong points are patience and building confidence. I can tailor my lessons around your child's homework or upcoming tests and stay synchronized. Your child's skills will be improved in a few sessions.
14 Subjects: including algebra 2, reading, Spanish, ESL/ESOL
{"url":"http://www.purplemath.com/pacoima_ca_algebra_2_tutors.php","timestamp":"2014-04-19T06:59:51Z","content_type":null,"content_length":"23602","record_id":"<urn:uuid:ae3f6652-cac7-40ec-a0c8-f3e42f64f4b5>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
Suggestions About Alcohol Content In Absinthe

There has been much controversy regarding the thujone levels in Absinthe and whether Absinthe can really make you hallucinate, but what about its alcohol content: how much alcohol is in Absinthe?

All commercial alcoholic drinks are labeled to show their alcohol content, which makes it easy for people to decide on a drink and how much of it to consume.

To measure alcohol content you will need a hydrometer; you can get one with a thermometer attached. The hydrometer floats upright in the liquid. When placed in plain water, the water line reaches the level marked 1; in sugared water the reading rises, and as the alcohol ferments the float sinks lower. Two readings are taken: one after the sugar is added but before the yeast, and a final reading after fermentation.

The alcohol content is then calculated by the formula: (Original gravity - Final gravity) x 131 = Alcohol by volume (ABV).

The EU and the United States label drinks with alcohol by volume, while some other countries use proof; proof is approximately two times the alcohol by volume.

Different brands of Absinthe have different alcohol contents.
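The hydrometer formula above can be written as a small helper (function names are illustrative, not from the article):

```python
def abv_percent(original_gravity, final_gravity):
    """Approximate alcohol by volume from two hydrometer readings:
    (Original gravity - Final gravity) x 131."""
    return (original_gravity - final_gravity) * 131

def proof_us(abv):
    """US proof is roughly twice the alcohol by volume."""
    return 2 * abv
```

For example, readings of 1.050 before fermentation and 1.010 after give roughly 5.24% ABV.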
Look at the following statistics:

Lucid Absinthe 62% abv (124 proof)
La Clandestine Absinthe 53% abv (106 proof)
Sebor 55% abv
Pere Kermanns 60% abv
Pernod Absinthe 68% abv
Mari Mayans Collectors 70 70% abv
La Fee XS Absinthe Suisse 53% abv (106 proof)
La Fee XS Absinthe Francaise 68% abv (136 proof)
La Fee Bohemian 70% abv (140 proof)
La Fee Parisian 68% abv
Carte D'Or Kubler 53 53% abv
Doubs Mystique Carte D'Or 65% abv
Roquette 1797 75% abv
Jade PF 1901 68% abv
Jade Edouard 72% abv (144 proof)
Jade Verte Suisse 65% abv (130 proof)
Jade Nouvelle Orleans 68% abv (136 proof)

As you can see, Absinthe can range from 53% abv to 75% abv, quite a difference. Now, let's compare those levels to other alcoholic drinks:

Absolut Blue Vodka 40% abv (80 proof)
Jose Cuervo Gold Tequila 38% abv
Beer 4-5% abv
Table Wine 9-12% abv (18-24 proof)
Johnnie Walker Black Label Scotch Whisky 40% abv (80 proof)
Everclear 95% abv (190 proof)

No other alcoholic drink seems to come close to Absinthe! The alcohol content can differ in homemade Absinthes made by mixing an Absinthe essence from AbsintheKit.com: with Absolut vodka the result is 40% abv, and with Everclear it is 95% abv.

Absinthe was banned in the 1900s because of claims that thujone, the chemical in wormwood, was like THC in the drug cannabis and that it was psychoactive and caused psychedelic effects. We now know that claims that Absinthe is a hallucinogen are completely false, but we do need to remember that any liquor can be harmful to our health if we consume too much.
People in Memphis seem to have realized this: in a 2006 poll, more than 6 out of 10 Memphis citizens had not drunk even a sip of alcohol in at least a month. You should definitely know the alcohol content of Absinthe, and of any other alcohol you consume.
{"url":"http://absinthe-liquor.com/absinthe/suggestions-about-alcohol-content-in-absinthe/","timestamp":"2014-04-19T06:53:18Z","content_type":null,"content_length":"26915","record_id":"<urn:uuid:2799ec88-1965-46a2-8de5-9ed51f20e996>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Prepare for the GMAT

Edited by Ravi, Ben Rubenstein, Flickety, RabidBox and 21 others

The Graduate Management Admission Test (GMAT) is a vital part of the application process to most accredited graduate business schools. The test, which has total scores ranging from 200 to 800, has four main components: a quantitative multiple-choice section, a verbal multiple-choice section, Integrated Reasoning, and one thirty-minute essay.

1. Allow time to get your scores back and finish your applications before the deadlines of your targeted business schools.
2. Determine the average score of admitted applicants to the schools you wish to apply for.
□ Most schools in the United States ask for higher than a 600 on the GMAT. Some of the most prestigious schools require a score of 700 or higher.
3. Start studying months ahead. Set a time every day, if possible, to prepare for the GMAT.
4. Map out a plan for the coming weeks or months, depending on whether you are planning to self-study or take a course.
5. Decide how much time you will need to either attend your course or to study, or both.
6. Take a practice test or diagnostic test to get a sense of your current score range.
7. Decide your strategy for preparing for the test. Do you want to take a course or prepare on your own?
8. Do research on the various options available to you.
□ Consider the Official Guide (OG) provided by MBA.com. (The latest is the OG 13th edition as of 2013.)
9. Take practice tests throughout your preparation to test how much your skills have improved.
10. Decide by what date you need or want to take the GMAT.
11. Wait to register until you are confident that you have progressed well in your preparation.

• Teach others: organize a group and do serious study. It's less boring that way.
• Read a lot and extensively so that your reading speed will help you in the GMAT reading comprehension section.
• Start your preparation well in advance.
• Master the easiest and most basic lessons first so that you will become more confident.
□ Pay more attention to your weak spots.
• Review, familiarize yourself with, and possibly memorize some of the commonly used geometric formulas (unless you are very mathematical, consider tutoring, since a college senior may not have had geometry since about the 10th grade):
□ In a 30-60-90 triangle, if the short side is named a, the hypotenuse (long side) is 2a, and the medium side is a√3.
□ In the isosceles right triangle the angles are 45, 45, 90, and if the two equal sides are named a, the third side (hypotenuse) is a√2.
□ The area of a triangle is A = (1/2)(b * h), where the height is perpendicular to the base from the vertex opposite the base. The height may need to be measured inside or outside the triangle, depending on the type of triangle and which side is the base.
□ The area of either a parallelogram or a rectangle is A = b * h. The height may be inside or outside the parallelogram.
□ The area of a square is A = s^2, where s is the length of a side.
□ The area of a circle is A = πr^2.
□ The circumference of a circle is C = 2πr OR C = πd.
□ A circle graph is based on either arcs or central angles, which are percentages of 360°, where the full 360° means 100%. 180° means 50%. 90° means 25%. Calculation examples:
☆ 17% is how many degrees? 17% of 360° = 0.17 * 360° = 61.2°, or approximately 61°.
☆ 30° represents what percentage? Using the fraction of 30° out of 360°, it is 30°/360° * 100% = 8.333..., so 30° represents approximately 8.3 percent.
• When dealing with algebra questions where the answer is a formula, pick numbers and plug them into the answers to find the correct answer faster than working through the algebra.
• Try to score the highest marks in the Quantitative section.
• Take notes on all the points for future reference.
• If you can afford it, use a certified tutor to help you prepare. Or, also contingent upon cost, enroll in a GMAT preparation course.
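The circle-graph conversions and the 30-60-90 relationship from the tips above can be sketched as small helper functions (the names are illustrative, just for practice):

```python
import math

def percent_to_degrees(pct):
    """A circle-graph slice: pct% of 360 degrees."""
    return pct / 100 * 360

def degrees_to_percent(deg):
    """The percentage represented by a central angle of deg degrees."""
    return deg / 360 * 100

def sides_30_60_90(short):
    """Sides of a 30-60-90 triangle given the short side a:
    (a, a*sqrt(3), 2a)."""
    return short, short * math.sqrt(3), 2 * short
```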
{"url":"http://www.wikihow.com/Prepare-for-the-GMAT","timestamp":"2014-04-18T23:19:09Z","content_type":null,"content_length":"69895","record_id":"<urn:uuid:24683883-580a-4973-96b1-6cf79d24ad04>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
Even-hole-free graphs

Da Silva, Murilo Vicente Gonçalves (2008) Even-hole-free graphs. PhD thesis, University of Leeds.

In this thesis we consider the class of simple graphs defined by excluding even holes (i.e. chordless cycles of even length). These graphs are known as even-hole-free graphs. We first prove that every even-hole-free graph has a node whose neighborhood is triangulated. This implies that in an even-hole-free graph, with n nodes and m edges, there are at most n+2m maximal cliques. It also yields the fastest known algorithm for computing a maximum clique in an even-hole-free graph. Afterwards we prove the main result of this thesis: a decomposition theorem for even-hole-free graphs that uses star cutsets and 2-joins. This is a significant strengthening of the only other previously known decomposition of even-hole-free graphs, by Conforti, Cornuéjols, Kapoor and Vušković, that uses 2-joins and star, double star and triple star cutsets. It is also analogous to the decomposition of Berge (i.e. perfect) graphs with skew cutsets, 2-joins and their complements, by Chudnovsky, Robertson, Seymour and Thomas. In a graph that does not contain a 4-hole, a skew cutset reduces to a star cutset, and a 2-join in the complement implies a star cutset, so in a way it was expected that even-hole-free graphs can be decomposed with just star cutsets and 2-joins. A consequence of this decomposition theorem is an O(n^19) recognition algorithm for even-hole-free graphs. The recognition of even-hole-free graphs was first shown to be polynomial by Conforti, Cornuéjols, Kapoor and Vušković. They obtained an algorithm of complexity about O(n^40) by first preprocessing the input graph using a certain "cleaning" procedure, and then constructing a decomposition based recognition algorithm. The cleaning procedure was also the key to constructing a polynomial time recognition algorithm for Berge graphs.
At that time it was observed by Chudnovsky and Seymour that once the cleaning is performed, one does not need a decomposition based algorithm; one can instead just look for the "bad structure" directly. Using this idea, as opposed to the decomposition based approach, one gets significantly faster recognition algorithms for Berge graphs and balanced 0,±1 matrices. However, this approach yields an O(n^31) recognition algorithm for even-hole-free graphs. So this is the first example of a decomposition based algorithm being significantly faster than the Chudnovsky/Seymour style algorithm.
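For intuition about the defining property, an even hole can be found by brute force on tiny graphs. This naive exponential check is only an illustration of the definition, not any of the polynomial recognition algorithms discussed above:

```python
from itertools import combinations

def has_even_hole(vertices, edges):
    """Return True if the graph contains an even hole, i.e. a chordless
    cycle of even length >= 4, by testing every even-size vertex subset.
    Exponential time: only usable on very small graphs."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n = len(vertices)
    for size in range(4, n + 1, 2):              # even candidate lengths
        for subset in combinations(vertices, size):
            s = set(subset)
            # The induced subgraph is a hole iff it is 2-regular and connected.
            if any(len(adj[v] & s) != 2 for v in subset):
                continue
            seen, stack = {subset[0]}, [subset[0]]
            while stack:
                for w in adj[stack.pop()] & s:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            if len(seen) == size:
                return True
    return False
```

For example, the 4-cycle has an even hole, while the 5-cycle and the complete graph K4 (where every 4-cycle has a chord) do not.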
{"url":"http://etheses.whiterose.ac.uk/1357/","timestamp":"2014-04-20T00:39:40Z","content_type":null,"content_length":"24105","record_id":"<urn:uuid:d9d56cf8-1174-4605-84ef-d0b470579231>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
This text provides an introduction to noncommutative geometry and some of its applications. It can be used either as a textbook for a graduate course or for self-study. It will be useful for graduate students and researchers in mathematics and theoretical physics and all those who are interested in gaining an understanding of the subject. One feature of this book is the wealth of examples and exercises that help the reader to navigate through the subject. While background material is provided in the text and in several appendices, some familiarity with basic notions of functional analysis, algebraic topology, differential geometry and homological algebra at a first-year graduate level is helpful.

Developed by Alain Connes since the late 1970s, noncommutative geometry has found many applications to long-standing conjectures in topology and geometry and has recently made headway in theoretical physics and number theory. The book starts with a detailed description of some of the most pertinent algebra-geometry correspondences by casting geometric notions in algebraic terms, then proceeds in the second chapter to the idea of a noncommutative space and how it is constructed. The last two chapters deal with homological tools: cyclic cohomology and Connes-Chern characters in \(K\)-theory and \(K\)-homology, culminating in one commutative diagram expressing the equality of the topological and analytic index in a noncommutative setting. Applications to integrality of noncommutative topological invariants are given as well.

Two new sections have been added to the second edition: the first new section concerns the Gauss-Bonnet theorem and the definition and computation of the scalar curvature of the curved noncommutative two-torus, and the second new section is a brief introduction to Hopf cyclic cohomology. The bibliography has been extended and some new examples are presented.
A publication of the European Mathematical Society (EMS). Distributed within the Americas by the American Mathematical Society. Graduate students and research mathematicians interested in mathematics and theoretical physics. • Examples of algebra-geometry correspondences • Noncommutative quotients • Cyclic cohomology • Connes-Chern character • Appendices • Bibliography • Index
{"url":"http://ams.org/bookstore?fn=20&arg1=tb-an&ikey=EMSSERLEC-10-R","timestamp":"2014-04-18T20:17:05Z","content_type":null,"content_length":"15875","record_id":"<urn:uuid:f9154a53-e1df-4029-8e7d-1421dec1ed1c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimisation with an orthogonal matrix
July 1st 2009, 06:55 AM #1

I have an optimisation problem as follows:

min ||AP - B||^2_F subject to P'P = I

where A, B, P are real n*n matrices, ||.||_F is the Frobenius norm, A and B are symmetric positive semi-definite matrices, and P is an unknown orthogonal matrix. By introducing a Lagrange multiplier, differentiating and equating to zero, I obtain a result P = (A'A)^-1 B'A, but P isn't orthogonal, and so I've gone wrong somewhere. If someone could point out how to obtain a solution, and where I have gone wrong, I would be very grateful! Thanks in advance!
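A standard way to solve this constrained problem (it is the orthogonal Procrustes problem; this is not taken from the thread itself) is via the singular value decomposition: with A'B = USV', the minimiser is P = UV'. A sketch with NumPy:

```python
import numpy as np

def nearest_orthogonal_map(A, B):
    """Minimise ||A P - B||_F over orthogonal P (orthogonal Procrustes).
    Expanding the norm reduces the problem to maximising tr(P' A'B),
    which is achieved by P = U V' where A'B = U S V' is an SVD."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# Small sanity check: if B = A R for an orthogonal R, the minimiser is R.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
A = np.eye(3)
P = nearest_orthogonal_map(A, A @ R)
```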
{"url":"http://mathhelpforum.com/advanced-math-topics/94149-optimisation-orthogonal-matrix.html","timestamp":"2014-04-19T05:35:59Z","content_type":null,"content_length":"29669","record_id":"<urn:uuid:5ead1821-cadb-40b2-989c-47ab536b2072>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help
November 15th 2012, 08:21 AM #1

I'm to solve for all non-negative solutions of this equation. I don't quite know how to write it, but it's ^(4)log x * ^(8)log x = ^(16)log x. I don't know how to solve this.

Re: log

I'm not sure what the "^4" means but I will guess that it is the base of the logarithm. I am used to that being written as a subscript, after. If that is correct then the equation is $log_4(x) (log_8(x))= log_{16}(x)$. Now, "$y= log_a(x)$" is the same as "$x= a^y$". Do you notice that all of those bases are powers of 2? $4= 2^2$, $8= 2^3$ and $16= 2^4$. So $y= log_4(x)$ is the same as $x= 4^y= (2^2)^y= 2^{2y}$ and so $log_2(x)= 2y$, thus $y= log_{4}(x)= \frac{log_2(x)}{2}$. $y= log_8(x)$, so $x= 8^y= (2^3)^y= 2^{3y}$, thus $3y= log_2(x)$ and $y= log_8(x)= \frac{log_2(x)}{3}$. Similarly, $log_{16}(x)= \frac{log_2(x)}{4}$, so the equation can be written $\frac{log_2(x)}{2}\cdot\frac{log_2(x)}{3}= \frac{log_2(x)}{4}$. If we let $y= log_2(x)$ that is the same as $\frac{y}{2}\cdot\frac{y}{3}= \frac{y^2}{6}= \frac{y}{4}$. Can you solve that?

Re: log

Do you mean: $\log_4(x)\cdot\log_8(x)=\log_{16}(x)$ ? If so, try the identity: $\log_{b^n}(a)= \frac{\log_b(a)}{n}$
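The last step can be finished and checked numerically: $\frac{y^2}{6} = \frac{y}{4}$ with $y = \log_2(x)$ gives $y = 0$ or $y = \frac{3}{2}$, i.e. $x = 1$ or $x = 2^{3/2}$. A quick sketch (not part of the original thread):

```python
import math

def log_b(x, base):
    return math.log(x) / math.log(base)

def satisfies(x):
    """Does x solve log_4(x) * log_8(x) = log_16(x)?"""
    return abs(log_b(x, 4) * log_b(x, 8) - log_b(x, 16)) < 1e-9

# y^2/6 = y/4 with y = log_2(x) gives 2y^2 = 3y, so y = 0 or y = 3/2,
# i.e. x = 1 or x = 2**1.5 (= 2*sqrt(2))
```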
{"url":"http://mathhelpforum.com/pre-calculus/207668-log.html","timestamp":"2014-04-21T07:47:09Z","content_type":null,"content_length":"45021","record_id":"<urn:uuid:a54d850d-5f0f-4104-b2a6-e1600bf4d9aa>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - Question about phase constant

We have the wave equation as follows with a non-zero phase constant:

y(x,t) = ym * sin(k(x - PHI/k) - wt)
y(x,t) = ym * sin(kx - w(t + PHI/w))

I don't understand where the PHI/k or PHI/w came from. I understand how we derived the wave equation, but I don't understand this part.
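Both forms above are the same algebraic rearrangement: expanding either one gives sin(kx - wt - PHI), so PHI/k is just the phase constant written as a spatial shift and PHI/w is the same constant written as a time shift. A quick numerical check (a sketch, not from the original post):

```python
import math

def form1(ym, k, w, phi, x, t):
    return ym * math.sin(k * (x - phi / k) - w * t)

def form2(ym, k, w, phi, x, t):
    return ym * math.sin(k * x - w * (t + phi / w))

def expanded(ym, k, w, phi, x, t):
    # Multiplying out either form gives sin(kx - wt - phi).
    return ym * math.sin(k * x - w * t - phi)
```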
{"url":"http://www.physicsforums.com/showpost.php?p=4238691&postcount=1","timestamp":"2014-04-18T10:51:35Z","content_type":null,"content_length":"8679","record_id":"<urn:uuid:75bb0fd5-95cc-4bfd-9470-e3f70c1f9b87>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
The Lorenz attractor exists

There are three choices here:

1. The Lorenz attractor exists
This is a revised version of my Ph.D. thesis. It also has all code and initial data used for the proof.
2. The Lorenz attractor exists
This is a short version (6 pages) of my Ph.D. thesis, which appeared in C. R. Acad. Sci. Paris, t. 328, Série I, p. 1197-1202, 1999.
3. A Rigorous ODE solver and Smale's 14th Problem
This is the final and most extended version of my thesis. It contains a completely new core program, which will work for a general ODE in any dimension.

There are also a few articles about my thesis:

1. Brandt, S. Schmetterlingseffekt bewiesen, meOme, Jan. 25, 2001
2. Lasson, M. Med kaos i sinnet, Le Coq, No 4, 1998
3. Lotsson, A. Räkna med kaos det bästa vi kan göra, Computer Sweden, No 70, 1998
4. Pacifico, M. J. The Lorenz attractor exists, Featured Review, MathSciNet, 2001b:37051, 2001
5. Sollerman, J. Svensk löste matematisk utmaning, Dagens Nyheter, Aug. 15, 1998
6. Stewart, I. The Lorenz attractor exists, Nature, vol 406, No 6799, 948-949, 2000
7. Tillemans, A. Chaos mathematisch bewiesen!, Weltraum Forschung, Sept. 4, 2000
8. Viana, M. What's New on Lorenz' Strange Attractors?, Math. Intel., vol 22, No 3, 6-19, 2000
9. Weisstein, E. Smale's 14th Problem Solved, Mathworld Headline News, February 13, 2002
10. Lozi, R. La preuve d'un certain chaos (1, 2), La Recherche, No 337, December 2000
{"url":"http://www2.math.uu.se/~warwick/main/thesis.html","timestamp":"2014-04-19T12:22:31Z","content_type":null,"content_length":"4781","record_id":"<urn:uuid:f1eef913-d897-49e7-a7b8-a11c8120d762>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
Exponential Functions

Resources for Exponential Functions that have been developed for the Algebra for All project are provided below. Use the comments area at the bottom of the page to discuss these resources or share additional online resources related to Exponential Functions.

Student Access: You do not need to go to the AfA Social Network to access these activities. Students and teachers can access these resources directly by going straight to the URLs provided. Simply copy and paste the links given below and provide those links to your students. Click here for more detailed instructions.

Description: In this activity students explore exponential relationships through two concrete examples. Through paper-folding and rolling dice, students make connections between abstract exponential equations and concrete situations that are modeled by exponential equations.

Exponential Dice Roller Applet: In this applet students create a model of exponential decay by rolling (up to) 98 dice and taking out dice of different values. By varying which dice are kept and how many dice are used, students can connect parts of exponential equations with a concrete model.

Teacher Video: M&Ms Activity: Introduction
Teacher Video: M&Ms Activity: Wrap-up
Teacher Video: Paper Folding: Wrap-up

Description: In this activity students learn about the parts of exponential functions by graphing groups of graphs in different stations and identifying similarities and differences of the graphs.

Exponential Stations Graphing Applets: Each of these applets provides an opportunity to graph a set of four exponential functions, giving students a chance to compare and contrast the features of each graph and learn about the different parts of an exponential equation.
Station 1: http://media.mivu.org/mvu_pd/a4a/resources/applets/exponential_station1.html
Station 2: http://media.mivu.org/mvu_pd/a4a/resources/applets/exponential_station2.html
Station 3: http://media.mivu.org/mvu_pd/a4a/resources/applets/exponential_station3.html
Station 4: http://media.mivu.org/mvu_pd/a4a/resources/applets/exponential_station4.html
Station 5: http://media.mivu.org/mvu_pd/a4a/resources/applets/exponential_station5.html
Station 6: http://media.mivu.org/mvu_pd/a4a/resources/applets/exponential_station6.html

Teacher Video: Exponential Stations: Introduction

Description: In this activity students learn the rules of exponents through experimentation and exploration, using either a calculator program or a computer applet.

Exponent Rules Applet: In this applet students have the opportunity to experiment with and practice several exponent rules.

Teacher Video: Exponent Rules: Introduction

Please provide comments on the resources provided on this page, or links to similar resources, using the comments feature below.
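The dice-roller model described above (start with up to 98 dice and each round remove every die showing a chosen face) can be sketched as a quick simulation. The applet's exact rules are an assumption here; on average the remaining count follows N(t) = 98 * (5/6)^t:

```python
import random

def simulate_decay(n_dice=98, remove_face=6, rounds=10, seed=1):
    """Roll the dice each round and remove every die showing remove_face.
    Returns the count of remaining dice after each round."""
    rng = random.Random(seed)
    counts = [n_dice]
    dice = n_dice
    for _ in range(rounds):
        dice = sum(1 for _ in range(dice) if rng.randint(1, 6) != remove_face)
        counts.append(dice)
    return counts

def model(n_dice, t, keep_prob=5 / 6):
    """Expected remaining dice after t rounds: exponential decay."""
    return n_dice * keep_prob ** t
```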
{"url":"http://a4a.learnport.org/page/exponential-functions","timestamp":"2014-04-18T20:58:03Z","content_type":null,"content_length":"36638","record_id":"<urn:uuid:5ffa77df-acf9-4062-bd0e-974b5103f35f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: introduction Hi Mike, hope you have a great time here. I have discovered a truly marvellous signature, which this margin is too narrow to contain. -Fermat Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes Young man, in mathematics you don't understand things. You just get used to them. - Neumann
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=269192","timestamp":"2014-04-17T12:42:29Z","content_type":null,"content_length":"22338","record_id":"<urn:uuid:1790720a-8f8e-4aa4-b1de-8d90a5cce01a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
Modular representations with unequal characteristic - reference request

Let $G$ be a finite group, and let $K$ be a finite field whose characteristic does not divide $|G|$. I am interested in the theory of finitely generated modules over $K[G]$. Of course many problems are not present here because $K[G]$ is semisimple and all modules are projective. My case is partly covered by Section 15.5 of Serre's book "Linear Representations of Finite Groups". However, Serre likes to assume that $K$ is "sufficiently large", meaning that it has a primitive $m$'th root of unity, where $m$ is the least common multiple of the orders of the elements of $G$. I do not want to assume this, so some Galois theory of finite extensions of $K$ will come into play. I do not think that anything desperately complicated happens, but it would be convenient if I could refer to the literature rather than having to write it out myself. Is there a good source for this?

In particular, I would like to be able to control the dimensions over $K$ of the simple $K[G]$-modules. As pointed out in Alex Bartel's answer, these need not divide the order of $G$. I am willing to assume that $G$ is a $p$-group for some prime $p\neq\text{char}(K)$.

[UPDATED AGAIN]: OK, here is a sharper question. Put $m=|K|$ (which is a power of a prime different from $p$) and let $t$ be the order of $m$ in $(\mathbb{Z}/p)^\times$. Let $L$ be a finite extension of $K$, let $G$ be a finite abelian $p$-group, and let $\rho:G\to L^\times$ be a homomorphism that does not factor through the unit group of any proper subfield containing $K$. Then $\rho$ makes $L$ into an irreducible $K$-linear representation of $G$, and every irreducible arises in this way. If I've got this straight, we see that the possible degrees of nontrivial irreducible $K$-linear representations of abelian $p$-groups are the numbers $tp^k$ for $k\geq 0$.
I ask: if we let $G$ be a nonabelian $p$-group, does the set of possible degrees get any bigger?

Maybe Geoff's answer to mathoverflow.net/questions/91132/… could help you? – Someone May 16 '12 at 11:32

@Someone: thanks, that's a useful pointer. – Neil Strickland May 16 '12 at 12:27

Could you clarify what you mean by "control the dimension"? The dimensions of the simple modules over $K$ are multiples of the dimensions of the simple modules over $\bar{K}$, but I am not even sure what "controlling the dimensions" over $\bar{K}$ would entail. – Alex B. May 16 '12 at 15:04

@Alex: I have updated the question again. – Neil Strickland May 16 '12 at 16:25

2 Answers

Your last statement is not true in general. Let $G=C_3$ and take your favourite finite field that does not contain the cube roots of unity, e.g. $K=\mathbb{F}_5$. Then the two non-trivial one-dimensional representations over $\bar{K}$ are not defined over $K$, but their sum is, since it's the regular representation minus the trivial.

In general, $K[G]$-modules for $K$ finite of characteristic co-prime to $|G|$ behave pretty much like modules over characteristic zero fields (the simple $K[G]$-modules are just sums over Galois orbits of the absolutely simple ones), the major simplification being that there are no Schur indices. I am not sure that there is much more to say about this. The second volume of Curtis and Reiner contains a whole chapter on rationality questions, i.e. fields that are not "sufficiently large" in the sense of Serre. Most of it is for characteristic zero, but as I say, a lot of it carries over.

Edit Re updated question: if $G$ is a finite $p$-group and $K$ is a finite field of characteristic different from $p$, then it is indeed true that any irreducible representation of $G$ over $K$ has dimension $tp^k$, for some $k$ and some $t\;|\;(p-1)$.
Indeed, the absolutely irreducible representations have dimension a power of $p$, and the field of definition of any absolutely irreducible representation is $K$ adjoin $p^r$-th roots of unity for some $r$, so the Galois orbit of a representation has size dividing $(p-1)p^{r-1}$ for some $r$.

The question you raise doesn't come up often enough to be dealt with explicitly in textbooks and such, I think. One way to extract an answer (possibly overkill) is to look more closely at the cde-triangle in the formulation of Serre or Curtis & Reiner. Changing your notation a bit, take $p$ not dividing $|G|$ and then form a triple $(K,A,k)$ with $A$ a complete d.v.r. (such as $\mathbb{Z}_p$) having $K$ as fraction field and $k$ as the finite residue field of characteristic $p$. Without assuming that these fields are "large enough", one knows that the decomposition homomorphism $d: \mathrm{R}_K(G) \rightarrow \mathrm{R}_k(G)$ is surjective: see my older question here.

EDIT: Looking back at what I wrote next, it seems too superficial. Maybe a more careful comparison of the behavior under field extensions is really needed. Back to the drawing board.

ADDED: Maybe I've missed something, but I think what Serre does in his Section 15.5 avoids any use of the assumption that the fields are large enough. So this should dispose of the original question asked, while equating dimensions of correlated simple modules over $K$ and $k$. (Serre tends to be careful about specifying where it matters that fields are large enough.) Even though simple modules may decompose further over field extensions, working with any fixed $p$-modular system seems to yield trivial Cartan and decomposition matrices. (In this situation, the fact that $d$ is surjective follows from the assumption that $p$ doesn't divide the group order.)

In any case Alex has addressed the modified questions well.
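The degree claim above is easy to experiment with numerically: the $t$ in $tp^k$ is the multiplicative order of $|K|$ in $(\mathbb{Z}/p)^\times$. A small sketch (not from the answers; for instance the order of 5 mod 3 is 2, matching the 2-dimensional simple $\mathbb{F}_5[C_3]$-module from the first answer):

```python
def multiplicative_order(m, p):
    """Order t of m in (Z/p)^*, assuming p is prime and p does not divide m."""
    t, x = 1, m % p
    while x != 1:
        x = (x * m) % p
        t += 1
    return t

def possible_degrees(q, p, max_k=3):
    """Degrees t * p**k of nontrivial irreducibles of an abelian p-group
    over the field with q elements (q coprime to p), per the discussion above."""
    t = multiplicative_order(q, p)
    return [t * p ** k for k in range(max_k + 1)]
```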
{"url":"http://mathoverflow.net/questions/97105/modular-representations-with-unequal-characteristic-reference-request?sort=newest","timestamp":"2014-04-20T13:18:59Z","content_type":null,"content_length":"63534","record_id":"<urn:uuid:1be2a1ea-c830-4de4-ae2e-e324096f82ad>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
The Z-Test is implemented as follows:

Two Sample Test

The assumptions/conditions needed when testing the difference between two percentages (proportions) using Maritz Stats include:

1. Measurement scale is at least nominal
2. Random samples
3. Weighting is not used
4. The random variables are binomially distributed, meaning:
   - Both random variables are defined in terms of the number of occurrences (e.g., number of successes, number of "Yes" responses, number of top-box responses)
   - There are a fixed number of trials (e.g., the number to be sampled is not determined by the number of successes)
   - The result of each trial can be classified into one of two categories (e.g., success or failure, yes or no, top box or non-top box)
   - The probability of success remains constant for each trial (e.g., the probability of randomly selecting a "success" does not change throughout the sample)
   - Each trial of the experiment is independent of the other trials (e.g., the response by one respondent does not affect, and is not affected by, the response by another respondent)
5. The sample sizes, taking into account the percents (proportions), are sufficiently large (i.e., each sample size times the corresponding percent of success is greater than 500 and each sample size times the corresponding percent of non-success is greater than 500; or, in terms of proportions, each sample size times the probability of success is greater than 5 and each sample size times the probability of non-success is greater than 5: n[1]p[1] > 5, n[1](1 - p[1]) > 5, n[2]p[2] > 5, and n[2](1 - p[2]) > 5)
6. The hypothesis test is to determine if the difference between two population percents (proportions) is either equal to, less than, or greater than zero (as opposed to a non-zero constant)

Variables input:
n[1]   Number of valid responses for group 1
n[2]   Number of valid responses for group 2
y[1]   Number of responses to a particular selection for group 1
y[2]   Number of responses to a particular selection for group 2
p[1]   Percent of responses to a particular selection for group 1
p[2]   Percent of responses to a particular selection for group 2
N[1]   Population for group 1
N[2]   Population for group 2
n[12]  Number of overlapping respondents
r[12]  Correlation between group 1 and 2

If a finite population correction factor is used, compute for each group. Compute the pooled variance estimate. Compute the standard error. Compute the value of z. Compute the result of the test. [The formulas for these steps were rendered as images in the original and are not recovered here.]

The critical value is read from a table of z values based on the selected level of confidence. The computed z is compared to the critical value to determine if the difference is significant.

One Sample Test

The assumptions/conditions needed when testing one proportion against an expected value using Maritz Stats include:

1. Measurement scale is at least nominal
2. Random sample
3. Weighting is not used
4. The random variable is binomially distributed, meaning:
   - The random variable is defined in terms of the number of occurrences (e.g., number of successes, number of "Yes" responses, number of top-box responses)
   - There are a fixed number of trials (e.g., the number to be sampled is not determined by the number of successes)
   - The result of each trial can be classified into one of two categories (e.g., success or failure, yes or no, top box or non-top box)
   - The probability of success remains constant for each trial (e.g., the probability of randomly selecting a "success" does not change throughout the sample)
   - Each trial of the experiment is independent of the other trials (e.g., the response by one respondent does not affect, and is not affected by, the response by another respondent)
5. The sample size, taking into account the percent (proportion), is sufficiently large (i.e., the sample size times the percent of success is greater than 500 and the sample size times the percent of non-success is greater than 500; or, in terms of proportions, np > 5 and n(1 - p) > 5)

Variables input:
p[s]  Sample proportion
p[0]  Hypothesized proportion
n     Sample size
N     Population

Compute the value of z. Compute the result of the test. [The formulas were rendered as images in the original and are not recovered here.]

The critical value is read from a table of z values based on the selected level of confidence. The computed z is compared to the critical value to determine if the difference is significant.
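The computational steps above can be sketched in code. The following is an illustrative implementation of the standard pooled two-proportion z statistic and the one-sample proportion z statistic; the function names are mine, and it does not reproduce Maritz Stats' exact handling of the finite population correction, weighting, or overlapping samples (n[12], r[12]).

```python
import math

def two_sample_z(y1, n1, y2, n2):
    """Pooled two-proportion z statistic for H0: p1 - p2 = 0."""
    p1, p2 = y1 / n1, y2 / n2
    p_pool = (y1 + y2) / (n1 + n2)            # pooled proportion estimate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def one_sample_z(p_s, p_0, n):
    """One-sample proportion z statistic against a hypothesized p_0."""
    se = math.sqrt(p_0 * (1 - p_0) / n)
    return (p_s - p_0) / se

# The computed z is compared to the critical value for the chosen
# confidence level, e.g. 1.96 for a two-sided test at 95% confidence.
z = two_sample_z(60, 100, 45, 100)
print(round(z, 3), abs(z) > 1.96)   # 2.124 True
```

A finite population correction, where used, conventionally multiplies each variance term by (N - n)/(N - 1); the exact form applied by Maritz Stats is not shown in the source.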
x≡y (mod a) => (x,a)=(y,a)

January 30th 2010, 04:55 AM #1

Looking at the following one would think it is simple, and maybe it is, BUT to me it is impossible and has been so for two days. So, show that if x≡y (mod a) then (x,a)=(y,a)

Dear matzerath,

Suppose, $d_1=(x,a)$ and $d_2=(y,a)$

$x\equiv{y\ (\mathrm{mod}\ a)}\Rightarrow{a}\mid{x-y}$

Therefore, $d_1\mid{a}$ and $a\mid{x-y}\Rightarrow{d_1\mid{x-y}}$

$d_1\mid{x-y}$ and $d_1\mid{x}\Rightarrow{d_1\mid{y}}$

$d_1$ is a common divisor of y and a. Therefore, $d_1\leq{d_2}$ --------(1)

Similarly, $d_2$ is a common divisor of x and a. Therefore, $d_2\leq{d_1}$ --------(2)

From (1) and (2), $d_1=d_2$

Hope this helps.

There's a short way to prove this.

$x\equiv y \pmod{a} \Leftrightarrow x=y+ka\ \ (k\in \mathbb{Z})$

Thus, using the fact that $\gcd(m,n)=\gcd(m+rn,n)$ for all $r\in \mathbb{Z}$, it follows that $\gcd(x,a)=\gcd(y+ka,a)=\gcd(y,a)$.

how would one prove the last thing? I've seen it before but I've never seen the proof... and by the last thing I mean $\gcd(y+ka,a)=\gcd(y,a)$

Yeah, and thanks to both of you!

Dear matzerath,

Suppose, $(y+ka,a) = d_1$ and $(y,a) = d_2$ for $k\in{Z}$

Since $(y+ka,a) = d_1$, we have $d_1\mid{y+ka}$ and $d_1\mid{a}$. Therefore, $d_1\mid{y}$ and $d_1\mid{a}$, so $d_1\mid{d_2}$ --------(1)

Also, since $d_2=(y,a)$, we have $d_2\mid{y}$ and $d_2\mid{a}$. Therefore, $d_2\mid{y+ka}$ and $d_2\mid{a}$, so $d_2\mid{d_1}$ --------(2)

By (1) and (2); $d_1=d_2\Rightarrow{(y+ka,a)=(y,a)}$

Hope this helps.

next time I'll think before asking a question... but thank you!
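The invariance $\gcd(y+ka,a)=\gcd(y,a)$ discussed in the thread is easy to sanity-check numerically; a small illustrative script (my own addition, not from the thread):

```python
from math import gcd
from itertools import product

# Exhaustively check gcd(y + k*a, a) == gcd(y, a) over a small range.
for y, a, k in product(range(1, 30), range(1, 30), range(6)):
    assert gcd(y + k * a, a) == gcd(y, a)
print("verified")
```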
Introduction to Determinants

For any square matrix of order 2, we have found a necessary and sufficient condition for invertibility. Indeed, consider the matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$. The matrix A is invertible if and only if $ad - bc \neq 0$, and this number $ad - bc$ is called the determinant of A. It is clear from this that we would like to have a similar result for bigger matrices (meaning higher orders). So is there a similar notion of determinant for any square matrix, which determines whether a square matrix is invertible or not? In order to generalize such a notion to higher orders, we will need to study the determinant and see what kind of properties it satisfies. We will write $\det(A)$, or $|A|$, for the determinant of A.

Properties of the Determinant

1. Any matrix A and its transpose have the same determinant, meaning $\det(A) = \det(A^T)$. This is interesting since it implies that whenever we use rows, a similar behavior will result if we use columns. In particular we will see how row elementary operations are helpful in finding the determinant. Therefore, we have similar conclusions for elementary column operations.

2. The determinant of a triangular matrix is the product of the entries on the diagonal.

3. If we interchange two rows, the determinant of the new matrix is the opposite of the old one.

4. If we multiply one row by a constant, the determinant of the new matrix is the determinant of the old one multiplied by the constant. In particular, if all the entries in one row are zero, then the determinant is zero.

5. If we add one row to another one multiplied by a constant, the determinant of the new matrix is the same as the old one. Note that whenever you want to replace a row by something (through elementary operations), do not multiply the row itself by a constant. Otherwise, you will easily make errors (due to Property 4).

6. We have $\det(AB) = \det(A)\det(B)$. In particular, if A is invertible (which happens if and only if $\det(A) \neq 0$), then $\det(A^{-1}) = 1/\det(A)$. If A and B are similar, then $\det(A) = \det(B)$.

Let us look at an example, to see how these properties work. Example.
Evaluate the following determinant. [The matrix and the intermediate results were rendered as images in the original and are not recovered here.]

Let us transform this matrix into a triangular one through elementary operations. We will keep the first row and add to the second one the first multiplied by a suitable constant. Using Property 2, we get the determinant as the product of the diagonal entries. Therefore, we have the value of the determinant, which one may check easily.

The determinant of matrices of higher order will be dealt with on the next page.

Author: M.A. Khamsi
Copyright © 1999-2014 MathMedics, LLC. All rights reserved.
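Properties 2, 3, and 5 together give the standard algorithm for computing a determinant by row reduction to triangular form. A small illustrative implementation (my own, not from the page):

```python
def det(matrix):
    """Determinant via Gaussian elimination to triangular form.

    Uses Property 3 (a row swap flips the sign), Property 5 (adding a
    multiple of one row to another leaves the determinant unchanged),
    and Property 2 (triangular determinant = product of the diagonal).
    """
    a = [row[:] for row in matrix]   # work on a copy
    n = len(a)
    sign = 1.0
    for col in range(n):
        # find a pivot; a column with no pivot forces determinant 0
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return 0.0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign                       # Property 3
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]  # Property 5
    prod = sign
    for i in range(n):
        prod *= a[i][i]                        # Property 2
    return prod

print(det([[1, 2], [3, 4]]))   # -2.0
```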
Macroscopic models of superconductivity Chapman, S. J. (1991) Macroscopic models of superconductivity. PhD thesis, University of Oxford. After giving a description of the basic physical phenomena to be modelled, we begin by formulating a sharp-interface free-boundary model for the destruction of superconductivity by an applied magnetic field, under isothermal and anisothermal conditions, which takes the form of a vectorial Stefan model similar to the classical scalar Stefan model of solid/liquid phase transitions and identical in certain two-dimensional situations. This model is found sometimes to have instabilities similar to those of the classical Stefan model. We then describe the Ginzburg-Landau theory of superconductivity, in which the sharp interface is `smoothed out' by the introduction of an order parameter, representing the number density of superconducting electrons. By performing a formal asymptotic analysis of this model as various parameters in it tend to zero we find that the leading order solution does indeed satisfy the vectorial Stefan model. However, at the next order we find the emergence of terms analogous to those of `surface tension' and `kinetic undercooling' in the scalar Stefan model. Moreover, the `surface energy' of a normal/superconducting interface is found to take both positive and negative values, defining Type I and Type II superconductors respectively. We discuss the response of superconductors to external influences by considering the nucleation of superconductivity with decreasing magnetic field and with decreasing temperature respectively, and find there to be a pitchfork bifurcation to a superconducting state which is subcritical for Type I superconductors and supercritical for Type II superconductors. We also examine the effects of boundaries on the nucleation field, and describe in more detail the nature of the superconducting solution in Type II superconductors - the so-called `mixed state'. 
Finally, we present some open questions concerning both the modelling and analysis of superconductors.
[Numpy-discussion] adding booleans
Robert Kern robert.kern@gmail....
Sat Jun 8 06:54:56 CDT 2013

On Sat, Jun 8, 2013 at 12:40 PM, <josef.pktd@gmail.com> wrote:
> Question about namespace
> why is there bool and bool_ ?
>
>>>> np.bool(True) + np.bool(True)
> 2
>>>> np.bool_(True) + np.bool_(True)
> True
>>>> type(np.bool(True))
> <type 'bool'>
>>>> type(np.bool_(True))
> <type 'numpy.bool_'>

I didn't pay attention to the trailing underline in Pauli's original
example `np.bool is __builtin__.bool`. It's a backwards-compatibility
alias for an old version of numpy that used the name `np.bool` for the
scalar type now called `np.bool_` (similarly for `np.float`, `np.int`,
and `np.complex`). Since `from numpy import *` caused name collisions,
we moved the scalar types to their current underscored versions.
However, since people had started to use `dtype=np.bool`, we just
aliased the builtin type objects to keep that code working.

I would love to be able to remove those since they are an unending
source of confusion. However, there is no good way to issue deprecation
warnings when you use them, so that means that upgrades would cause
sudden, unwarned breakage, which we like to avoid.

Robert Kern
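The practical upshot, illustrated here with a small example of my own (the `np.bool` alias discussed above was eventually removed, in NumPy 1.24): scalar `np.bool_` addition behaves as a logical operation, while summing a boolean array upcasts to integers and counts True entries.

```python
import numpy as np

# Scalar bool_ addition stays boolean (logical OR-like behavior):
print(np.bool_(True) + np.bool_(True))        # True

# Summing a boolean array counts the True entries instead:
flags = np.array([True, True, False, True])
print(flags.sum())                            # 3
print(flags.astype(int) + flags.astype(int))  # [2 2 0 2]
```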
Erwinna Math Tutor Find an Erwinna Math Tutor ...Thank you for the opportunity to help your child.I am certified as an Elementary School Teacher and Teacher of Students with Disabilities. I am also Highly Qualified in Middle School Mathematics. I am familiar and have worked with many math programs including Holt, Everyday Math, McDougal Littell, Singapore, Digi-Blocks, Pinpoint, and Number Worlds. 13 Subjects: including prealgebra, English, elementary (k-6th), phonics I am a recently graduated college student who studied Early Childhood/Elementary Education. I have been tutoring since my Junior year in high school. I am very fun, energetic, and patient and enjoy learning almost as much as I enjoy making learning fun and understandable. 14 Subjects: including algebra 1, algebra 2, reading, prealgebra ...I work one-on-one with my students to focus on their weaknesses by utilizing their strengths. Algebra 1 has become a very important subject in Pennsylvania with the advent of the Keystone Exams, and it is a requirement if a student will progress successfully throughout High School. Trigonometry is one of those subjects that takes a student out of their comfort zones. 12 Subjects: including prealgebra, algebra 1, algebra 2, calculus ...I also took calculus, statistics, and business calculus in college. I have had extensive experience tutoring in all of these subjects. My senior year of college, I researched and wrote an honors thesis. 36 Subjects: including geometry, phonics, soccer, astronomy ...I have spent most of the last 20 years in maintenance area finding ways to improve the operation of the equipment, save money and reduce operating costs.I enjoy math and enjoy helping others understand math I have helped all my kids (5 of them) from elementary to college math I have a degree in... 20 Subjects: including geometry, trigonometry, algebra 2, algebra 1
Re: [ccp4bb]: origin definition

*** For details on how to be removed from this list visit the ***
*** CCP4 home page http://www.ccp4.ac.uk ***

Bernard Santarsiero wrote:
> I have a set of phases from an MIR run in SOLVE and a potential molecular
> replacement solution of about half the structure in space group P3. They
> have different origins in Z since it's either defined by the set of heavy
> atom positions in SOLVE, or the set of coords in the MR search model. I
> want to compare the SOLVE phased map and the MR sfall/2Fo-Fc map. I see
> at least two possibilities:
> 1. mask a map region around the model of the MR solution and move it
> around the phased map; the highest CC should indicate the best overlap of
> the two maps. I think I can use RAVE to do this.
> 2. calculate a series of phases and list the 00l's with phases. Each set
> is from a set of coords that have been translated in Z. At some point,
> these phases should match those from the SOLVE solution.
> As for point #2, I can't seem to get sfall or sigmaa to output the
> Hendrickson-Lattman coefficients, which would be useful. Suggestions?
> Bernie Santarsiero

This is a problem for the phased translation function. It is an option for
FFT; you must have both the SF and the MIR phases in one file - easiest
achieved by running SFALL with the LABO ALLIN option, which will append
FC/PHIC to the MIR output. Run sigmaa if you like, though it probably isn't
needed, then set for fft labi F1=F PHI=PHIMIR W1=FOMmir PHI2=PHIC
W2=Fom+sigmaa, and the big peak will give you the origin shift. I would
have to check the documentation to see which way it went..
Or even easier, run a difference Fourier or an anomalous difference Fourier
using the PHIC FOM_sigmaa, and that should give you the heavy atom sites
relative to the MR solution.

You dont
dimension of compact support cohomology

Let $X$ be a smooth complex algebraic variety and let $\overline{X}$ be a compactification by a divisor $D$ with normal crossings. Then there is a non-canonical isomorphism $$(1) \quad \quad \quad \quad H^k(U, \mathbb{C})=\bigoplus_{p+q=k} H^q(\bar{X}, \Omega^p_{\bar{X}}(\log D))$$ where $\Omega^\bullet_{\bar{X}}(\log D)$ is the complex of logarithmic differentials. It is filtered by weight and you see that the weight $m$ Hodge numbers of $H^k(U, \mathbb{C})$ are $$ \dim H^q(\bar{X}, \mathrm{Gr}^W_m \Omega^p_{\bar{X}}(\log D)) $$ for $p+q=k$. I wonder how this extends to cohomology with compact supports. I guess in that case one should look at $$ H^q(\bar{X}, \Omega^p_{\bar{X}}(\log D)(-D)) $$ Is there still a decomposition like (1)? If so, how to prove it? How does one read compactly supported Hodge numbers from the picture?

ag.algebraic-geometry hodge-theory

1 Answer

Easiest is maybe to just use Poincaré duality, in the form of the statement that $H^k_c(U) \otimes H^{2d-k}(U) \to H^{2d}_c(U) \cong \mathbf Q(-d)$ (where $d = \dim_\mathbf{C} U$) is a perfect pairing, compatible with the mixed Hodge structures. Then $\dim \mathfrak{gr}_F^p\mathfrak{gr}^W_m H^k_c(U)= \dim \mathfrak{gr}_F^{d-p}\mathfrak{gr}^W_{2d-m}H^{2d-k}(U)$, in other words, the Hodge numbers of $H^k_c$ and $H^{2d-k}$ are related by the transformation $(p,q) \leftrightarrow (d-p,d-q)$.

Thanks Dan! Could you also explain how tensor by $\mathcal{O}_X(D)$ enters in the story? – arrabal Jul 1 '13 at 11:18

Because $H^q(\bar X, \Omega^q(\log D)(-D))$ and $H^{n-q}(\bar X, \Omega^{n-q}(\log D))$ are Serre dual, and this is compatible with Poincaré duality. – Donu Arapura Jul 1 '13 at 16:35

Thanks Donu! Could you explain a bit more (or give some reference) why these are Serre dual? – arrabal Jul 1 '13 at 20:46
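A sketch of the Serre duality the last comments refer to (added here for completeness; this is the standard logarithmic duality argument, not part of the original thread):

```latex
% Why H^q(\bar X, \Omega^p(\log D)(-D)) and H^{n-q}(\bar X, \Omega^{n-p}(\log D))
% are Serre dual.  Serre duality on the smooth projective \bar X of
% dimension n gives
\[
  H^q(\bar X, \mathcal F)^\vee \;\cong\;
  H^{n-q}\bigl(\bar X, \mathcal F^\vee \otimes \omega_{\bar X}\bigr).
\]
% Take F = \Omega^p(\log D)(-D).  Since
% \omega_{\bar X}(D) \cong \Omega^n_{\bar X}(\log D) and the wedge pairing
\[
  \Omega^p_{\bar X}(\log D) \otimes \Omega^{n-p}_{\bar X}(\log D)
  \longrightarrow \Omega^n_{\bar X}(\log D)
\]
% is perfect, one gets
\[
  \mathcal F^\vee \otimes \omega_{\bar X}
  \;\cong\; \Omega^p_{\bar X}(\log D)^\vee \otimes \Omega^n_{\bar X}(\log D)
  \;\cong\; \Omega^{n-p}_{\bar X}(\log D),
\]
% hence
\[
  H^q\bigl(\bar X, \Omega^p_{\bar X}(\log D)(-D)\bigr)^\vee
  \;\cong\;
  H^{n-q}\bigl(\bar X, \Omega^{n-p}_{\bar X}(\log D)\bigr).
\]
```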
ACL2 Version 3.2(r) (April, 2007) Notes Major Section: RELEASE-NOTES Changed the default distributed books directory for ACL2(r) from books/ to books/nonstd/. See include-book, in particular the discussion of ``Distributed Books Directory''. Added directory books/arithmetic-3/ and its subdirectories to books/nonstd/. (But a chunk of theorems from arithmetic-3/extra/ext.lisp are ``commented out'' using #-:non-standard-analysis because they depend on books/rtl/rel7/, which is not yet in books/nonstd/; feel free to volunteer to remedy this!) Incorporated changes from Ruben Gamboa to some (linear and non-linear) arithmetic routines in the theorem prover, to comprehend the reals rather than only the rationals. Please also see note-3-2 for changes to Version 3.2 of ACL2.
undefined entries
Graeme Forbes graeme.forbes at colorado.edu
Fri Dec 8 15:46:45 MST 2006

How about this? You have n properties P1 to Pn making for n rows in the
table. As I understand it, the number of entries on each row can vary.
Define a nonsense string that couldn't be the value of any of the
properties. Put it in wherever needed so that all the rows have the same
number of entries as the longest row(s). Now generate an uber-list of
lists, where each member list contains exactly one value from each row.
Then define a contraction routine that erases the nonsense string wherever
it occurs in this list of lists. You now have a list of lists of variable
length, and provided you didn't omit any "walk" down the table, all
possible combinations of values for P1...Pn should be there.

Any good?
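The padding-then-contracting scheme described above maps naturally onto a cross-product. A small illustrative implementation (the property values, the sentinel string, and the function name are mine):

```python
from itertools import product

PAD = "<<NONSENSE>>"   # sentinel that can't be a real property value

def all_combinations(rows):
    """One value per row (property); rows may have different lengths."""
    width = max(len(r) for r in rows)
    padded = [r + [PAD] * (width - len(r)) for r in rows]   # equalize rows
    # uber-list of lists: one value from each padded row per "walk"
    seen, out = set(), []
    for combo in product(*padded):
        # contraction step: erase the sentinel; drop duplicates it creates
        cleaned = tuple(v for v in combo if v != PAD)
        if cleaned not in seen:
            seen.add(cleaned)
            out.append(list(cleaned))
    return out

rows = [["red", "blue"], ["S", "M", "L"]]
# 9 lists: all colour x size pairs plus the variable-length lists the
# contraction produces where the shorter row contributed only padding.
print(all_combinations(rows))
```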
Mathematical Model of Pipeline Abandonment and Recovery in Deepwater

Journal of Applied Mathematics, Volume 2014 (2014), Article ID 298281, 7 pages

Research Article

^1College of Mechanical and Transportation Engineering, China University of Petroleum, Beijing 102249, China
^2Offshore Oil and Gas Research Center, China University of Petroleum, Beijing 102249, China

Received 28 September 2013; Accepted 10 December 2013; Published 29 January 2014

Academic Editor: M. Montaz Ali

Copyright © 2014 Xia-Guang Zeng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In offshore oil and gas engineering, pipeline abandonment and recovery is unavoidable, and its mechanical analysis is necessary and important. For this problem a third-order differential equation is used as the governing equation in this paper, rather than the traditional second-order one. The mathematical model of pipeline abandonment and recovery is a moving boundary value problem, which means that it is hard to determine the length of the suspended pipeline segment. A novel technique for handling the moving boundary condition is proposed, which can tackle the moving boundary condition without contact analysis. Based on a traditional numerical method, the problem is solved directly by the proposed technique. The results of the presented method are in good agreement with the results of the traditional finite element method coupled with contact analysis. Finally, an approximate formula for quick calculation of the suspended pipeline length is proposed based on Buckingham's Pi-theorem and mathematical fitting.

1. Introduction

Bad weather is frequent during the laying of offshore pipelines, so the pipeline abandonment and recovery operation is unavoidable.
In offshore oil and gas engineering, pipeline-laying engineers need to perform detailed mechanical analysis to determine the operation parameters and then make sure that the pipeline will not be overstressed during the operation. For this mechanical analysis, the mathematical model is of central importance. In the abandonment operation the A&R cable lowers a pipeline down to the seabed by a pull head, and in the recovery operation lifts it up to the sea level. During the process the pipeline's axial forces and bending moments must be kept within a reasonable range to prevent strength damage. The calculation of these quantities is very useful to guide the operation. So the mathematical model of pipeline abandonment and recovery should be established. The sketch of the pipeline abandonment and recovery operation is shown in Figure 1. In the processes a pipeline is lifted up from the seabed to the sea surface or put down to the seabed from the sea surface by joint A. The two physical processes are generally called one-point lifting and lowering. The processes are mutually inverse and can be described by the same mathematical model. Many papers have reported mathematical models of pipeline installation, which are closely related to this operation. Palmer et al. [1] investigated the stresses and configurations of the pipelines being laid from a lay barge over a stinger. They derived equations governing the configuration and solved them by different techniques. Meanwhile they suggested a nondimensionalized governing equation. Mattheij and Rienstra [2] studied the pipeline S-laying model based on a second-order nonlinear differential equation. In the work they explained some difficulties in approximating the numerical solutions. Zhu and Cheung [3] presented an analytical method for finding the elastic deflection of submerged pipelines laid with an adjustable stinger. They claimed that the method costs less computational time than the finite element method (FEM).
Guarracino and Mallardo [4] presented a refined analytical analysis of the pipeline S-lay problem. They used a singular perturbation technique and found a useful analytical solution which took into account the overall effects of the pipe cross-section ovalization. Timoshenko et al. [5] provided some analytical and numerical solutions for the pipeline deepwater S-lay which quantified the loading history effects. The analytical solution was fully developed for an arbitrary pipe material model and it agreed well with the numerical results. Lenci and Callegari [6] developed three simple analytical models for the J-lay problem. By the models the boundary layer phenomenon was correctly detected and the influence of the soil stiffness was studied. By means of extensive numerical studies, Kashani and Young [7] found that in the ultradeepwater pipeline laying problem the installation parameters were sensitive to pipe wall thickness. Gong et al. [8, 9] made a parameter sensitivity analysis of the S-lay technique for deepwater pipelines. The stiffened catenary theory was applied to establish the governing differential equations. They also presented a numerical iteration method for solving the pipeline configurations, and its validity was further verified by means of a comparison with results obtained from OFFPIPE. Wang et al. [10–12] did some analyses on both S-lay and J-lay problems. They proposed a novel numerical model which could take into account the influence of ocean currents and seabed stiffness. In the model the pipeline was divided into two parts and the continuity of the two parts was guaranteed at the touch down point (TDP). They also presented an analytical model for the pipeline J-lay behavior with a plastic seabed. Duan et al. [13] proposed an installation system for deepwater riser S-laying and carried out some laboratory-scale pipeline lifting experiments with this system.
Szczotka [14] studied the pipeline J-lay problem by a modified rigid finite element method (RFEM). A modification of the stiffness coefficients and the corresponding model was proposed. They claimed the model could take into account wave and sea current loads, hydrodynamic forces, and material nonlinearity. Yuan et al. [15] presented a novel numerical model for the pipeline S-lay problem. They claimed that the model could be used to investigate the overall configuration, internal forces, and strain of the pipelines. On the pipeline abandonment and recovery problem, which is very similar to the pipeline laying problem, Andreuzzi and Maier [16] and Datta [17] did the pioneering work. They presented an analytical and a graphical approach for the problem and adopted the finite difference method to analyze the pipeline configurations. Dai et al. [18] studied the configuration of pipelines by the spline collocation method and presented a graphical approach showing the relationship between the configuration and axial forces of the pipeline. Xing et al. [19] continued the research. They built a nonlinear equation system and modeled the pipeline lifting process as a moving boundary problem. By numerical methods the limit moments of some pipelines were obtained. In research on pipeline abandonment and recovery, most previous researchers seemed to investigate the problem by a second-order beam equation. However, in our previous research [20] we found that the boundary value problems with the second-order equation cannot tackle the beginning stage of pipeline lifting and the ending stage of pipeline lowering accurately. They produced configurations and bending moments at these two stages very different from those given by the third-order boundary value problem or by OrcaFlex. The pipeline in the abandonment and recovery operation undergoes a process from a large-angle deflection to a small-angle deflection or from a small-angle deflection to a large-angle deflection.
In the problem the boundary condition is moving, which means that it is hard to determine the length of the suspended pipeline segment. The finite element method coupled with contact elements can be used to analyze this problem. However, it can sometimes be hard to converge and is time-consuming. Obviously the simple catenary model [21] or stiffened catenary model [22] can never be used to simulate the whole process, so a new mathematical model should be established. In this paper a mathematical model and a new strategy to tackle the moving boundary without contact analysis are presented. On the other hand, an estimate of the suspended pipeline length is very important because it can speed up the calculation. So finally an approximate formula for the length is presented based on Buckingham's Pi-theorem and mathematical fitting.

2. Mathematical Model

On the problem the following simplifications are made based on offshore engineering experience [4, 6, 17, 19]: the marine environment is stable, the seabed can be regarded as a rigid plane, the lifting and lowering processes are slow, and the material of the pipelines is isotropic and always in the elastic state. As shown in Figure 2, the touch down point (TDP, the point where the suspended pipeline contacts the seabed) is located at the origin of the Cartesian coordinate system, where  is the resultant force at joint A,  and  are the horizontal force and the vertical force at the origin,  is the pipeline submerged weight per unit length,  is the angle between the pipeline axis and the horizon, and  is the angle between the direction of  and the horizon. [The force and angle symbols were rendered as images in the original and are not recovered here.]
There are usually two kinds of differential equations which are used to analyze this problem, a second-order one and a third-order one, and the third-order one is more suitable for the beginning stage of pipeline lifting and the ending stage of pipeline lowing [20]. So in this paper the third-order one has been used. Taking a short segment of the pipeline, as shown in Figure 3, it is easy to deduce the basic governing differential equations [23]. Resolving forces normal to the segment axis leads to According to , (1) leads to According to beam theory, there is the following equation, where is elastic modulus and is second moment of area of the pipe cross section: Substituting (3) into (2), then (2) becomes Resolving forces in the segment axis leads to In the pipeline abandonment and recovery problem, is assumed to be zero, so the governing differential equations for the pipeline lifting and lowering by one point are shown as follows: 2.2. Boundary Conditions According to the similar problems [10, 24], the following boundary conditions are chosen for this problem:at the origin: , , ,at the joint: , ,where is the bending moment of the suspended pipeline segment and is its length. According to (3), and are equivalent to and . To sum up, the whole mathematical model for the pipeline abandonment and recovery is the following boundary value problem: 3. Numerical Solutions 3.1. Numerical Solution Method It is hard to get the analytical solutions of the mathematical model presented above. So in this research the traditional numerical method, fourth-order accurate finite difference has been used to get the numerical solutions. 3.2. Tackling the Moving Boundary Notice that the boundary conditions of the model are moving; in another word, the parameter is usually an unknown before numerical solving. Solving this problem with moving boundary is challenging. The parameter must be given first then the problem can be solved in numerical methods. 
The variable substitution ŝ = s/ℓ maps the boundary to 0 and 1, turning (7) into (8). However, the unknown parameter ℓ then merely moves into the differential equations and still cannot be determined directly. Based on the balance of axial forces at the lifting joint, an extra equation is therefore added as a supplementary boundary condition. Using this condition, ℓ can be calculated in the following steps:

(1) Suppose an initial value of ℓ.
(2) Solve the boundary value problem (8) by the fourth-order accurate finite difference method or another numerical method.
(3) Extract the axial force at the joint of the pipeline from the results (with the pipeline divided into pieces) and compare it with the applied load. If the absolute value of their difference is very small, the ℓ chosen in the last step is approximately the suspended pipeline length and the work is finished. Otherwise, take the following step.
(4) Decrease or increase ℓ by a suitable increment, according to the sign of the difference, and repeat steps (2) and (3) until the difference changes sign.

Note: the value of the increment controls the precision of the computed ℓ. To improve the precision, repeat step (4) with a smaller increment; once the increment is smaller than the allowable error, the length parameter ℓ is determined.

4. Engineering Application

4.1. Calculation of the Pipeline's Physical Quantities

For engineering application, the pipeline's physical quantities during abandonment or recovery, such as its configuration and bending moments, must be calculated. After numerically solving (8), the angle θ, the tension force T, and the suspended pipeline length ℓ are all known, and the coordinates of the suspended pipeline then follow from (9). The bending moment of the pipeline follows from (10).

4.2. A Numerical Calculation Example

Using MATLAB, (8) is solved with the basic values shown in Table 1 as an example; for details of the numerical solution method, see Solving ODEs with MATLAB [25].

Consider the first case. Suppose the angle α is held constant at 80° while the load varies during the abandonment or recovery operation, as shown in Table 2. The pipeline configurations and the corresponding bending moments obtained for these loads are shown in Figures 4 and 5, respectively. Figure 4 shows that the proposed mathematical model can be used over a large range of water depths (deeper than 3500 m) and can simulate the whole lowering and lifting process. Figure 5 shows that the bending moment grows steadily as the pipeline is lifted, so the most dangerous situation usually occurs at the beginning of abandonment or at the end of recovery.

Consider the second case. Keeping the tension constant at 800 kN and varying the angle α as shown in Table 3, the pipeline configurations and bending moments are obtained as shown in Figures 6 and 7, respectively. These figures show that the angle has a strong effect on the pipeline configuration and bending moments: as the angle increases from 70° to 90°, the bending moment of the pipeline increases greatly. It is also clear that the bending moment is more sensitive to the angle than to the load.

4.3. Results Comparison

It is necessary to compare the results calculated by the presented model and method with traditional finite element analysis results. The software DRICAS was developed from the model and method presented above.
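The length-search procedure of Section 3.2 (suppose a length, solve the boundary value problem, compare the joint axial force with the applied load, adjust) is essentially a one-dimensional root find, and a bracketing bisection is a natural way to organize it. In this sketch, `axial_force_residual` is a made-up monotone placeholder standing in for a full finite-difference solve of (8); the 0.5 slope, the 800 load, and the bracket are all assumptions for illustration.

```python
def axial_force_residual(length, applied_load=800.0):
    """Computed axial force at the joint minus the applied load, for a
    trial suspended length.  In the real procedure this number comes from
    solving the boundary value problem (8); here a made-up monotone model
    stands in so the loop can run."""
    computed_force = 0.5 * length            # placeholder model (assumption)
    return computed_force - applied_load

def find_suspended_length(lo, hi, tol=1e-6):
    """Bisection version of steps (1)-(4) in Section 3.2: bracket the
    length at which the joint force matches the applied load, then shrink
    the bracket until it falls below the allowable error."""
    r_lo = axial_force_residual(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if axial_force_residual(mid) * r_lo > 0.0:
            lo, r_lo = mid, axial_force_residual(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

length = find_suspended_length(0.0, 10000.0)   # root of the placeholder: 1600
```

Shrinking the increment at each pass, as the paper's step (4) note describes, plays the same role as halving the bracket here: the final uncertainty in ℓ is the last increment used.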
Meanwhile OrcaFlex, a world-leading package for pipeline finite element analysis that handles the moving boundary condition through contact analysis [26], is also used. Comparisons of the configuration results and bending moments are shown in Figures 8 and 9, respectively. The results are in good agreement, which indicates that the model and the moving-boundary method established in this paper are correct and effective.

5. Simple Calculation Methods

5.1. Similarity Criteria for Model Experiments

Sometimes it is necessary to simulate pipeline recovery and abandonment by model experiments. According to dimensional analysis theory [27], the similarity criteria for such model experiments are obtained from the governing equations. To simulate the abandonment and recovery processes in the laboratory, the values of the dimensionless groups of the model must equal the corresponding values for the actual offshore pipeline operation.

5.2. Approximate Formula for the Suspended Pipeline Length

The numerical calculation procedure shows that the length of the suspended pipeline is a key parameter of this problem, so a simple approximate formula is very useful for speeding up the solution of the boundary value problem. The suspended pipeline length is related to the applied load, the lift angle, the submerged weight per unit length, and the bending stiffness of the pipe. By Buckingham's Pi-theorem [27], a dimensionless function relating these quantities is derived as (11). To determine (11) completely, the boundary value problem (8) was solved extensively over a wide parameter range, and the approximate formula (12) was then obtained by mathematical fitting. Once the dimensionless group is known, the suspended pipeline length follows directly.
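The paper's fitted formula (12) is not reproduced in this copy. As an illustration of the fitting step only, the sketch below generates synthetic dimensionless data from an assumed linear law and recovers the coefficients by least squares; the law, the parameter range, and the coefficient values are invented for the example.

```python
import numpy as np

# Hypothetical dimensionless law, invented for illustration only (the
# paper's fitted formula (12) is not reproduced in this copy):
a_true, b_true = 2.0, 0.5
pi_load = np.linspace(0.1, 5.0, 50)        # dimensionless load group
pi_len = a_true + b_true * pi_load         # dimensionless length group

# "Mathematical fitting" step: least-squares polynomial fit.  polyfit
# returns the highest-degree coefficient first, i.e. (slope, intercept).
b_fit, a_fit = np.polyfit(pi_load, pi_len, 1)
```

In the paper this step is run on the many numerical solutions of (8), and the recovered coefficients become the approximate formula; a higher-degree fit would be handled the same way with a larger degree argument.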
If the suspended pipeline length is known in advance, the steps presented in Section 3.2 can be shortened, reducing the computational time for solving the problem.

6. Conclusions

In offshore engineering, pipeline S-laying, J-laying, abandonment, and recovery operations can all be governed by (6), which is suitable for the deepwater situation; the differences between these processes lie mainly in the boundary conditions. Reasonable boundary conditions for the pipeline abandonment and recovery problem are that at the TDP the angle and the bending moment are zero and the tension equals the horizontal component of the applied force, while at the joint the bending moment is zero. The whole mathematical model for this problem is (7) or (8), a moving boundary value problem. The new direct method for tackling the moving boundary is effective and yields results as accurate as the traditional finite element method coupled with contact analysis. Similarity criteria for model experiments of pipeline abandonment and recovery are obtained, and the suspended pipeline length can be calculated first by the approximate formula (12), which speeds up the solution of the problem.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors have been supported by the National Basic Research Program of China (no. 2011CB013702) and the National Natural Science Foundation of China (no. 51379214).

References

1. A. C. Palmer, G. Hutchinson, and J. W. Ells, "Configuration of submarine pipelines during laying operations," ASME Journal of Engineering for Industry, vol. 96, no. 4, pp. 1112–1118, 1974.
2. R. M. M. Mattheij and S. W. Rienstra, "On an off-shore pipe laying problem," in Proceedings of the 2nd European Symposium on Mathematics in Industry (ESMI '88), H. Neunzert, Ed., pp. 37–55, Springer, 1988.
3. D. S. Zhu and Y. K. Cheung, "Optimization of buoyancy of an articulated stinger on submerged pipelines laid with a barge," The Ocean Engineering, vol. 24, no. 4, pp. 301–311, 1997.
4. F. Guarracino and V. Mallardo, "A refined analytical analysis of submerged pipelines in seabed laying," Applied Ocean Research, vol. 21, no. 6, pp. 281–293, 1999.
5. S. Timoshenko, S. Woinowsky-Krieger, and S. Woinowsky, Theory of Plates and Shells, McGraw-Hill, New York, NY, USA, 1959.
6. S. Lenci and M. Callegari, "Simple analytical models for the J-lay problem," Acta Mechanica, vol. 178, no. 1-2, pp. 23–39, 2005.
7. M. Kashani and R. Young, "Installation load consideration in ultra-deepwater pipeline sizing," Journal of Transportation Engineering, vol. 131, no. 8, pp. 632–639, 2005.
8. S. F. Gong, Y. He, J. Zhou et al., "Parameter sensitivity analysis of S-lay technique for deepwater submarine pipeline," The Ocean Engineering, vol. 4, p. 014, 2009.
9. S.-F. Gong, K. Chen, Y. Chen, W.-L. Jin, Z.-G. Li, and D.-Y. Zhao, "Configuration analysis of deepwater S-lay pipeline," China Ocean Engineering, vol. 25, no. 3, pp. 519–530, 2011.
10. L.-Z. Wang, F. Yuan, and Z. Guo, "Numerical analysis for pipeline installation by S-lay method," in Proceedings of the 29th ASME International Conference on Ocean, Offshore and Arctic Engineering (OMAE '10), pp. 591–599, ASME, Shanghai, China, June 2010.
11. L.-Z. Wang, F. Yuan, Z. Guo, and L.-L. Li, "Numerical analysis of pipeline in J-lay problem," Journal of Zhejiang University A, vol. 11, no. 11, pp. 908–920, 2010.
12. L.-Z. Wang, F. Yuan, Z. Guo, and L.-L. Li, "Analytical prediction of pipeline behaviors in J-lay on plastic seabed," Journal of Waterway, Port, Coastal and Ocean Engineering, vol. 138, no. 2, pp. 77–85, 2012.
13. M.-L. Duan, Y. Wang, S. Estefen, N. He, L.-N. Li, and B.-M. Chen, "An installation system of deepwater risers by an S-lay vessel," China Ocean Engineering, vol. 25, no. 1, pp. 139–148, 2011.
14. M. Szczotka, "A modification of the rigid finite element method and its application to the J-lay problem," Acta Mechanica, vol. 220, no. 1–4, pp. 183–198, 2011.
15. F. Yuan, Z. Guo, L. Li, and L. Wang, "Numerical model for pipeline laying during S-lay," Journal of Offshore Mechanics and Arctic Engineering, vol. 134, no. 2, Article ID 021703, 2011.
16. F. Andreuzzi and G. Maier, "Simplified analysis and design of abandonment and recovery of offshore pipelines," Ocean Management, vol. 7, no. 1–4, pp. 211–230, 1981.
17. T. K. Datta, "Abandonment and recovery solution of submarine pipelines," Applied Ocean Research, vol. 4, no. 4, pp. 247–252, 1982.
18. Y. J. Dai, J. Z. Song, and G. Feng, "A study on abandonment and recovery operation of submarine pipelines," The Ocean Engineering, vol. 18, pp. 75–78, 2000 (Chinese).
19. J. Z. Xing, C. T. Liu, and X. H. Zeng, "Nonlinear analysis of submarine pipelines during single point lifting," The Ocean Engineering, vol. 20, pp. 29–33, 2002 (Chinese).
20. X. G. Zeng, M. L. Duan, and J. H. Chen, "Research on several mathematical models of offshore pipe lifting or lowering by one point," The Ocean Engineering, vol. 31, pp. 32–37, 2013 (Chinese).
21. H. M. Irvine, Cable Structures, MIT Press, Cambridge, Mass, USA, 1981.
22. D. Dixon and D. Rutledge, "Stiffened catenary calculations in pipeline laying problem," ASME Journal of Engineering for Industry, vol. 90, no. 1, pp. 153–160, 1968.
23. C. P. Sparks, Fundamentals of Marine Riser Mechanics—Basic Principles and Simplified Analysis, PennWell, 2007.
24. G. A. Jensen, N. Säfström, T. D. Nguyen, and T. I. Fossen, "A nonlinear PDE formulation for offshore vessel pipeline installation," The Ocean Engineering, vol. 37, no. 4, pp. 365–377, 2010.
25. L. F. Shampine, I. Gladwell, and S. Thompson, Solving ODEs with MATLAB, Cambridge University Press, 2003.
26. OrcaFlex Manual, 2009, http://www.orcina.com/SoftwareProducts/OrcaFlex/Documentation.
27. A. A. Sonin, The Physical Basis of Dimensional Analysis, Department of Mechanical Engineering, MIT, Cambridge, Mass, USA, 2001.
Beaumont, TX Math Tutor
Find a Beaumont, TX Math Tutor

...I am patient, attentive and engage students in their studies. I have tutored many students of varying ages, all with great success. I find that many students struggle with math simply because it is very different from any other subject and they don't know quite how to study it.
30 Subjects: including geometry, linear algebra, ACT Math, reading

...I am typically available after 5:30 PM on weekdays, and I am free anytime on weekends. I graduated from the Georgia Institute of Technology in May 2013 with a degree in Chemical and Biomolecular Engineering. I received my degree with Honors and graduated with a 3.38. I have taken classes ranging ...
20 Subjects: including calculus, algebra 1, algebra 2, ACT Math

...References are available upon request. I made an A in Algebra 1 in high school, and have been using it ever since in all of my high school and college math classes. I also have been tutoring in Algebra 1 for a number of years. I made an A in Physics 1 in high school, and have been using it ever since in my high school and college physics classes.
5 Subjects: including algebra 1, geometry, prealgebra, physics

...For good results from the students, I always believe in evaluation, so sometimes I give homework or small quizzes. I always try to motivate my students in their studies, and always look for different techniques to improve their performance.
19 Subjects: including prealgebra, geometry, algebra 2, precalculus

In general, I start teaching basic materials, and depending on the capabilities of the students, I incorporate advanced topics so that the students gradually learn, retain them for a longer period, and apply them to the real world. I give homework and check/correct it the next day with explana...
4 Subjects: including algebra 1, chemistry, physics, German
example of square root property

Author: donybnoi (Registered: 12.04.2002)
Posted: Thursday 28th of Dec 20:40

Hi friends. I am badly in need of some help. My "example of square root property" homework has started to get on my nerves. The classes move so fast that I hardly ever get a chance to clarify my confusion. Is there any tool that can help me cope with this homework mania?

Author: oc_rana (Registered: 08.03.2007, From: egypt, alexandria)
Posted: Saturday 30th of Dec 14:12

Hi Dude, Algebrator, available at the website http://www.pocketmath.net/multiplying-fractions.html, can be of great assistance to you. I am a mathematics coach who gives private tuition to students, and I advocate Algebrator to my students since it helps them a lot when they sit down to work out their homework by themselves at home.

Author: LifiIcPoin (Registered: 01.10.2002, From: Way Way Behind)
Posted: Saturday 30th of Dec 15:22

Algebrator beyond doubt is a great piece of algebra software. I remember having difficulties with binomials, algebraic signs and exponent rules. Typing in the problem from the workbook and merely clicking Solve would give a step-by-step solution to the math problem. It has been of great help through several courses: Intermediate Algebra, Remedial Algebra and Algebra 2. I seriously recommend the program.

Author: Dolknankey (Registered: 24.10.2003, From: Where the trout streams flow and the air is nice)
Posted: Monday 01st of Jan 09:09

Function domain, dividing fractions and like denominators were a nightmare for me until I found Algebrator, which is truly the best algebra program that I have ever come across. I have used it through several algebra classes: Intermediate Algebra, Pre Algebra and Remedial Algebra. Simply typing in the algebra problem and clicking on Solve, Algebrator generates a step-by-step solution to the problem, and my algebra homework would be ready. I really recommend the program.

Author: p3980 (Registered: 23.07.2005)
Posted: Wednesday 03rd of Jan 08:50

Yeah! That's a great alternative to the high-priced private coaching and costly online coaching. The single-page formula list offered there has served me in every Basic Math internal that I have had in the past. Even if you are an intermediate in College Algebra, the Algebrator is very useful since it offers both easy and tough exercises and drills for practice.

Author: MichMoxon (Registered: 21.08.2001)
Posted: Thursday 04th of Jan 17:05

Here you go, kid: http://www.pocketmath.net/solving-quadratic-equations-using-the-quadratic-formula-1.html
How do you measure your performance?
July 30, 2005, 06:07 PM

I shoot at least once a week and I like to keep my targets so I can see how I'm performing. The problem I had was that it was very subjective... "Hmmm, that one looks good". But was it really? And it was hard to be objective about which gun was giving me the best performance. I have some that are "favorites" -- but do they really shoot the best for me? So I decided to try to find a "scientific" way to measure gun performance. Here's what I came up with:

1) Take the target and draw a rectangle (not a circle!) that encloses all the hits with the minimum area. No funky shapes... just a rectangle.
2) If there are one or two "flyers" that really ruin an otherwise good group, draw separate "With" and "Without" rectangles.
3) Note the distance shot and the number of rounds fired.

I take that info and I plug it into an Excel spreadsheet that calculates the following:

A) Area of the hit (for both the With and Without Flyers cases)
B) Average range shot (for all the data points for a particular handgun)
C) A "score" that is calculated by taking the area, dividing it by the number of rounds squared, and then dividing by the range
D) The average score for that handgun

My thought was that not all groups are created equal. I think you ought to get credit for the distance you shot and the number of rounds fired. I decided to square the number of rounds because I think it gets harder and harder to hold a tight group as the round count goes up. You could reverse it and square the distance if you thought that was more important than the number of bullet holes. What I end up with is a way to look at different guns and see if my "instinct" matches the facts. Turns out that some of the guns I thought were "bad" shooters are, in fact, better than some that are in my favorite category.

It only works when you shoot comparable courses of fire. You can't throw targets that were weak hand only into the results (unless you had a really good day!). I have a separate page in the spreadsheet for each gun, and it shows how many data points are in the sample. That way you can tell when you're comparing 30 targets for one gun against only 3 or 4 for another. It also displays the average distance for all those data points... so if you shoot one gun at 7 yds all the time and another at 15 you can take that into account when comparing results. In the future, I'll probably parse the data so you can see results for a particular range so you can compare apples to apples.

This is what I'm trying. I'm interested to hear what other systems (if any) people use to measure and track their performance. I'd like to end up with something that can help me recognize and diagnose problems in my shooting because I have some data to show my past performance. Any suggestions or ideas on other systems would be great.
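The scoring rule described above (rectangle area divided by the round count squared, then divided by the distance) is easy to put into a small function. The function names and record layout here are my own, not taken from the poster's spreadsheet:

```python
def group_score(width, height, rounds, distance):
    """Score from the post: rectangle area of the group, divided by the
    number of rounds squared, then divided by the distance shot.
    Smaller scores are better under this convention, since more rounds
    and longer range both shrink the score for the same group size."""
    return (width * height) / (rounds ** 2) / distance

def average_score(targets):
    """Average score over (width, height, rounds, distance) records for
    one gun, like the per-gun page of the spreadsheet."""
    return sum(group_score(*t) for t in targets) / len(targets)
```

Swapping which factor gets squared, as the post suggests, only changes the exponent in `group_score`; the per-gun averaging stays the same.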
Physics Forums - View Single Post - Word problem/Quadratic equations

Richay... try to identify the range of numbers that the solution definitely *isn't*. For example... say I let n+2 = 4... what is the product of 4 and -6?... what is the product of 2 and 3? (Since n+2 = 4, that is our third number... our first and second are n and n+1.) Secondly, can n be somewhere around +/- 10 million?... whereabouts does n lie?

Be aware that potentially the problem might be expressed in any of these ways:

-6n = (n+1)(n+2)
-6n = (n-1)(n-2)
-6n(n-2) = (n-1)(n)
-6n(n+2) = (n+1)(n)

Only two of these are any help; the other two are not... which two cannot help, and why? (By finding the range of numbers that your solution isn't, you should be able to solve it just by inspection, and then you should try to justify your choices by choosing and solving the correct equation.)
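Following the hint, each candidate form can be checked for integer n by brute force. This is a quick sketch (the search range is arbitrary); only the second form turns out to have nontrivial integer solutions:

```python
# Try every candidate equation over a modest integer range; the dict
# keys just echo the four forms listed in the post.
candidates = {
    "-6n = (n+1)(n+2)":  lambda n: -6 * n == (n + 1) * (n + 2),
    "-6n = (n-1)(n-2)":  lambda n: -6 * n == (n - 1) * (n - 2),
    "-6n(n-2) = (n-1)n": lambda n: -6 * n * (n - 2) == (n - 1) * n,
    "-6n(n+2) = (n+1)n": lambda n: -6 * n * (n + 2) == (n + 1) * n,
}

solutions = {form: [n for n in range(-50, 51) if holds(n)]
             for form, holds in candidates.items()}
```

The third and fourth forms only admit the trivial n = 0, and the first has no integer solution at all, which matches the hint that only some of the forms can help.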
Physics Forums - Law of Total Probability/Bayes' Theorem

Can somebody explain to me, using an example, what those 2 theorems actually are? Like, when I see a problem, how do I know what I'm gonna use? I know Total Probability is "unconditional probability", but I don't really get that.

Suppose that F1, F2, ..., Fn are events such that Fi ∩ Fj = ∅ whenever i ≠ j and F1 ∪ ... ∪ Fn = S. Then for any event E, P(E) = P(E | F1) P(F1) + ... + P(E | Fn) P(Fn).

Bayes' is for conditional probabilities, but apparently you calculate those conditional probabilities differently... For any events E and F, the conditional probabilities P(E | F) and P(F | E) are connected by the following formula: P(E | F) = P(F | E) P(E) / P(F). The other definition of conditional probability was P(E | F) = P(E ∩ F) / P(F). Can't figure out what the difference is, or when I use which one, etc.

kai_sikorski Feb23-12 08:44 PM
Re: Law of Total Probability/Bayes' Theorem

A lot of the time in probability problems it's easiest to break down the problem into mutually exclusive cases and deal with them separately. Like, what's the probability that the sum of two dice is at most 6?

P(X1 + X2 ≤ 6) = P(X1 + X2 ≤ 6 | X1 = 1) P(X1 = 1) + P(X1 + X2 ≤ 6 | X1 = 2) P(X1 = 2) + ... + P(X1 + X2 ≤ 6 | X1 = 6) P(X1 = 6)

So above, in the sum, you break down the cases based on the result of the first die.
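The dice breakdown can be checked by direct enumeration: compute the probability that the sum of two fair dice is at most 6 both directly over all 36 outcomes and via the law of total probability, conditioning on the first die. A sketch using exact fractions:

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

# Direct count of the event "sum is at most 6".
p_direct = Fraction(sum(1 for a, b in outcomes if a + b <= 6), len(outcomes))

# Law of total probability, conditioning on the first die X1 = k:
# P(sum <= 6) = sum over k of P(sum <= 6 | X1 = k) * P(X1 = k).
p_total = sum(
    Fraction(sum(1 for b in range(1, 7) if k + b <= 6), 6) * Fraction(1, 6)
    for k in range(1, 7)
)
```

Both routes give 15/36 = 5/12, which is the whole point of the law: the mutually exclusive cases F1, ..., Fn here are the six possible values of the first die.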
Erwinna Math Tutor
Find an Erwinna Math Tutor

...Thank you for the opportunity to help your child. I am certified as an Elementary School Teacher and Teacher of Students with Disabilities. I am also Highly Qualified in Middle School Mathematics. I am familiar with and have worked with many math programs including Holt, Everyday Math, McDougal Littell, Singapore, Digi-Blocks, Pinpoint, and Number Worlds.
13 Subjects: including prealgebra, English, elementary (k-6th), phonics

I am a recently graduated college student who studied Early Childhood/Elementary Education. I have been tutoring since my junior year in high school. I am very fun, energetic, and patient, and I enjoy learning almost as much as I enjoy making learning fun and understandable.
14 Subjects: including algebra 1, algebra 2, reading, prealgebra

...I work one-on-one with my students to focus on their weaknesses by utilizing their strengths. Algebra 1 has become a very important subject in Pennsylvania with the advent of the Keystone Exams, and it is a requirement if a student is to progress successfully through high school. Trigonometry is one of those subjects that takes students out of their comfort zones.
12 Subjects: including prealgebra, algebra 1, algebra 2, calculus

...I also took calculus, statistics, and business calculus in college. I have had extensive experience tutoring in all of these subjects. My senior year of college, I researched and wrote an honors thesis.
36 Subjects: including geometry, phonics, soccer, astronomy

...I have spent most of the last 20 years in the maintenance area, finding ways to improve the operation of equipment, save money, and reduce operating costs. I enjoy math and enjoy helping others understand it. I have helped all my kids (5 of them) with math, from elementary school through college. I have a degree in...
20 Subjects: including geometry, trigonometry, algebra 2, algebra 1
MathGroup Archive: January 2000

Re: Re: Verifying PrimeQ for n > 10^16
• To: mathgroup at smc.vnet.net
• Subject: [mg21552] Re: [mg21518] Re: Verifying PrimeQ for n >10^16
• From: Andrzej Kozlowski <andrzej at tuins.ac.jp>
• Date: Sat, 15 Jan 2000 02:03:57 -0500 (EST)
• Sender: owner-wri-mathgroup at wolfram.com

The reason why PrimeQ is limited to primes less than 10^16 has nothing to do with Mathematica, memory, or any computer-related factors. It's just mathematics. PrimeQ checks if a number is prime by applying three tests: the base-2 and base-3 strong pseudoprime tests (basically Fermat's little theorem) and the so-called Lucas test. Any number that fails any of these is definitely composite, but it is possible for a number to pass all the tests and not be prime. It's just that no such number has ever been found. In fact, few (if any) experts believe that these tests are sufficient, but a "false prime" may be so large that it may never be encountered.

> From: sniff <sniff at home.com>
> To: mathgroup at smc.vnet.net
> Organization: SBC Internet Services
> Date: Fri, 14 Jan 2000 02:43:34 -0500 (EST)
> Subject: [mg21552] [mg21518] Re: Verifying PrimeQ for n >10^16
>
> You could use the Lucas-Lehmer theorem to prove, for Mersenne primes larger
> than 10^16, whether PrimeQ works or not. If you can find one case where
> PrimeQ fails, you are done. Otherwise, ................
> You can find more about the LL theorem on the Web or in Knuth's book
> "The Art of Computer Programming". I am not sure why PrimeQ is limited to
> primes less than 10^16. Usually Mathematica does a good job as long as
> enough virtual memory space is available. One reason for this strange
> limitation could be the fact that very large primes that are _not_ Mersenne
> primes are held top secret. They are excellent seeds for encryption.
> GTO
>
> Ersek, Ted R wrote in message <85i09v$1o5 at smc.vnet.net>...
>> The documentation for Mathematica 4 indicates PrimeQ has been proven
>> correct for all n < 10^16, but there may be larger composite integers
>> that PrimeQ indicates are prime.
>> I have often wanted to write a program that would verify PrimeQ, starting
>> with 10^16, for each integer that PrimeQ says is prime.
>> The ProvablePrimeQ package goes about this in a way that is more rigorous
>> than we need. To prove that n is prime, ProvablePrimeQ proves that certain
>> numbers less than n are prime, and it goes on in a recursive manner until
>> it gets to 2, which is definitely prime.
>> If we are going to efficiently prove that n is prime, the program should
>> assume that PrimeQ is correct for all integers less than n. So far I
>> haven't been successful in writing such a program. This is mostly due to
>> the fact that I have very little background in number theory. I wonder if
>> anyone could provide the needed program.
>> Now to improve the rate of progress we could have a few hundred computers
>> work on it at the same time. One computer could verify PrimeQ for the
>> first 100,000 numbers after 10^16 that PrimeQ says are prime. Another
>> computer could verify PrimeQ for the next 100,000 numbers PrimeQ says are
>> prime, and so on. Each computer could churn away on it overnight when it
>> would otherwise do nothing. Does anyone have an estimate of how fast this
>> would work? Rather slow I bet, but what do we have to lose?
>> I understand there may be no case where PrimeQ is wrong. The frustrating
>> thing is that thanks to Kurt Godel's theorem it may be impossible to prove
>> that PrimeQ is always right (that's Godel with two dots over the "o").
>> --------------------
>> Regards,
>> Ted Ersek
>>
>> On 12-18-99 Mathematica tips and tricks at
>> http://www.dot.net.au/~elisha/ersek/Tricks.html
>> had a major update
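The two strong-pseudoprime tests described above are easy to sketch; this is an illustrative implementation, not Mathematica's actual code, and the Lucas test is omitted. It also shows why bases 2 and 3 alone are not enough: 1373653 = 829 × 1657 is composite yet passes both, exactly the kind of "false prime" the thread worries about, which the additional Lucas test would have to catch.

```python
def is_sprp(n, a):
    """Strong pseudoprime (Miller-Rabin) test of odd n > 3 to base a:
    write n - 1 = d * 2^s with d odd, then accept if a^d = 1 (mod n) or
    a^(d*2^r) = n - 1 (mod n) for some 0 <= r < s.  Primes always pass;
    an occasional composite passes too."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)                 # modular exponentiation
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def passes_base_2_and_3(n):
    """The first two of PrimeQ's three tests as described above; the
    Lucas test is omitted in this sketch."""
    return is_sprp(n, 2) and is_sprp(n, 3)
```

For example, 2047 = 23 × 89 is a strong pseudoprime to base 2 but fails base 3, so the pair of tests already rejects it; 1373653 is the smallest composite to survive both bases.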
Pre-Algebra - Math Learning Guides

Basic Algebra is where we finally put the algebra in pre-algebra. The concepts taught here will be used in every math class you will take from here on. We'll introduce you to some exciting stuff like drawing graphs and solving some complicated equations. Don't let the word "algebra" intimidate you. You actually have been using this type of math for years. In fact, many people find algebra to be one of the easier types of math to learn, since it is full of common-sense rules and ideas.

"As long as algebra is taught in school, there will be prayer in school." – Cokie Roberts

Basic algebra will help you become a better problem solver. You'll size up a problem, brainstorm creative options for solving it, and chart a clear path to its solution. Your mad problem-solving skills will serve you well throughout your entire life. You have some exciting things ahead of you: getting through school, building a career, and leaving your mark on the world.
{"url":"http://www.shmoop.com/basic-algebra/","timestamp":"2014-04-19T10:02:13Z","content_type":null,"content_length":"27253","record_id":"<urn:uuid:3da01485-1613-4f28-b4f4-37fa7f9000fd>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
Soquel ACT Tutor

...I've taken the course at Cabrillo College. It had both lecture and lab components as well as field trips to view plants and fungi in the wild. I passed the class taught by Nicole Crane with a high B.
31 Subjects: including ACT Math, chemistry, reading, calculus

...I am currently a full-time instructional assistant in the Math Learning Center at Cabrillo College, where I have been tutoring off and on for the past 10 years. I taught algebra 2 at Georgiana Bruce Kirby Preparatory School in Santa Cruz during the 2010-2011 school year. I have extensive experi...
30 Subjects: including ACT Math, Spanish, calculus, statistics

...My priority is to make sure your specific educational needs are being met. Whether it's reviewing the fundamentals or tackling topics that go beyond the scope of your class, I'm here to help.
CLASS SUBJECTS
Math - All levels
Physics - All levels
Chemistry - All levels
TEST PREPARATION
SAT I: M...
14 Subjects: including ACT Math, chemistry, calculus, physics

...Most recently, I have been tutoring UCSC students and faculty in English and writing. When tutoring, I tune into the needs of the student. I listen deeply for what is hard, confusing or distressing and then help clear up the block with new approaches or techniques.
25 Subjects: including ACT Math, English, reading, chemistry

...I'm proficient with math up to Calculus 11A and can tutor in PSAT/SAT and ACT prep as well as AP Chemistry, Biology, U.S. History and English. I've had roughly 4 years of tutoring experience, mostly in high school and college classes. I'm an easy going tutor that likes to take time explaining concepts as well as workarounds and shortcuts.
18 Subjects: including ACT Math, chemistry, English, biology
{"url":"http://www.purplemath.com/Soquel_ACT_tutors.php","timestamp":"2014-04-17T19:32:07Z","content_type":null,"content_length":"23440","record_id":"<urn:uuid:a53b70eb-0c08-4440-9128-6d7fd0c94aeb>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
Singular Linear Least Squares

Syntax
    linlsq(A, b, bound)

See Also
    nlsq, linlsqb

Returns a vector x that solves the problem

    minimize | A x - b | with respect to x

The matrix A is real, double-precision, or complex; b is a column vector with the same type and row dimension as A; and bound is a scalar with the same type as A such that absolute singular values of the matrix A that are less than bound are replaced by 0 before solving the minimization problem. If A is not complex, bound must have the same type as A. If A is complex, bound must be double-precision.

The returned vector, x, has the same type as A. If more than one solution exists, the solution of minimum norm is returned (i.e., the value of x that minimizes |x| over the set of solutions).

The operation "\" uses a QR factorization to solve similar problems. If the rank of A is less than both its row and column dimension, the "\" operator will generate an error if you compute A \ b. The linlsq function can solve such problems.

    A = {1., 1.}
    b = {1., 2.}
    eps = 1e-7
    linlsq(A, b, eps)
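To illustrate the behavior described above, here is a pure-Python sketch of the simplest case, where A is a single column and therefore has exactly one singular value, s = |A|. If s is at or below bound it is treated as zero, giving the minimum-norm answer x = 0; otherwise the ordinary least-squares solution is returned. This is an illustration of the documented semantics, not O-Matrix code, and the function name is made up:

```python
import math

# Pure-Python sketch of linlsq for a single-column A (hypothetical name).
def linlsq_1col(A, b, bound):
    s = math.sqrt(sum(a * a for a in A))   # the one singular value of a column
    if s <= bound:
        return [0.0]                       # s treated as zero: minimum-norm x
    x = sum(a * bi for a, bi in zip(A, b)) / (s * s)   # least-squares solution
    return [x]

# The manual's example: A = {1., 1.}, b = {1., 2.}, bound = 1e-7
x = linlsq_1col([1.0, 1.0], [1.0, 2.0], 1e-7)   # least-squares x = 1.5
```

With a nearly zero column, e.g. A = [1e-9, 1e-9] and the same bound, the singular value falls below bound and the minimum-norm solution [0.0] is returned instead.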
{"url":"http://www.omatrix.com/manual/linlsq.htm","timestamp":"2014-04-19T04:20:12Z","content_type":null,"content_length":"5919","record_id":"<urn:uuid:74aca138-b16f-4365-8ec6-d39b6531661c>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
The estimation of Value at Risk and Expected Shortfall

November 19, 2012
By Pat

An introduction to estimating Value at Risk and Expected Shortfall, and some hints for doing it with R. "The basics of Value at Risk and Expected Shortfall" provides an introduction to the subject.

Starting ingredients

Value at Risk (VaR) and Expected Shortfall (ES) are always about a portfolio. There are two basic ingredients that you need:

• The positions within the portfolio
• A history of prices for the assets involved

With these you can derive two less basic ingredients:

• The portfolio weights come from the positions and the current prices
• The asset return history can be found from the price history

These can be used to estimate market risk. There are other risks — such as credit risk — that may not be included in the price history.

Multivariate estimation

VaR and ES are each a single risk number at the portfolio level while we are starting at the asset level. One approach is to estimate a variance matrix of the asset returns and then use the portfolio weights to collapse to the portfolio variance. This is most likely to be done when it is desired to see the sources of risk rather than just have a single number.

Univariate estimation

Estimation is simpler with a single time series of returns for the portfolio — the portfolio as it is now. We can get this by matrix-multiplying the matrix of simple returns of the assets in the portfolio by the portfolio weights. In R notation this is:

R1 <- assetSimpRetMatrix %*% portWts

or safer would be:

R1 <- assetSimpRetMatrix[, names(portWts)] %*% portWts

Note that this is similar to the computation that was warned about in "An easy mistake with returns". But in this case we don't want the returns of a real portfolio — we want the hypothetical returns of our portfolio as it now exists.

The R1 object computed above holds the (hypothetical) simple returns of the portfolio. Modeling is often better with log returns.
You can transform from simple to log returns like:

r1 <- log(R1 + 1)

There are additional choices, of course, but some common methods are:

• historical (use the empirical distribution over some number of the most recent time periods)
• normal distribution (estimate parameters from the data) and use the appropriate quantile
• t-distribution (usually assuming the degrees of freedom rather than estimating them)
• fit a univariate garch model and simulate ahead

R Implementations

Here is an incomplete guide to VaR and ES in the R world. My search for R functionality was:

This package has functions VaR and ES. They take a vector or matrix of asset returns. Component VaR, marginal VaR and component ES can be done. Distributions include historical, normal and a Cornish-Fisher approximation.

Here are some examples where spxret11 is a vector of the daily log returns of the S&P 500 during 2011. So we are getting the risk measure (in returns) for the first day of 2012.

> VaR(spxret11, method="historical")
        VaR
-0.02515786
> VaR(spxret11, method="gaussian")
       VaR
-0.0241509
> VaR(spxret11, method="gaussian", p=.99)
        VaR
-0.03415703
> ES(spxret11, method="historical")
         ES
-0.03610873
> ES(spxret11, method="gaussian")
         ES
-0.03028617

If the first argument is a matrix, then each column can be thought of as an asset within the portfolio. This is illustrated with some data from the package:

> data(edhec)
> VaR(edhec[, 1:5], portfolio_method="component")
no weights passed in, assuming equal weighted portfolio
[1,] 0.02209855

 Convertible Arbitrage            CTA Global
          0.0052630876         -0.0001503125
 Distressed Securities      Emerging Markets
          0.0047567783          0.0109935244
 Equity Market Neutral

 Convertible Arbitrage            CTA Global
           0.238164397          -0.006801916
 Distressed Securities      Emerging Markets
           0.215252972           0.497477204
 Equity Market Neutral

Package actuar

This package also has a VaR function that works with a special form of distribution objects.
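To make the univariate setup concrete, the same bookkeeping (hypothetical portfolio returns from asset returns and current weights, then the simple-to-log transform) can be written in Python with made-up numbers. This is an illustration, not part of the original post:

```python
import math

# Illustrative version of the bookkeeping above: rows are days, columns
# are assets; the numbers are made up.
asset_simple_rets = [
    [0.010, -0.020],
    [0.005,  0.010],
]
weights = [0.6, 0.4]   # current portfolio weights

# Hypothetical simple returns of the portfolio (R1 in the post's notation)
R1 = [sum(w * r for w, r in zip(weights, day)) for day in asset_simple_rets]

# Simple -> log returns (r1 <- log(R1 + 1) in the post's notation)
r1 = [math.log(1 + R) for R in R1]
```

The first day works out to 0.6 * 0.010 + 0.4 * (-0.020) = -0.002 as a simple return, and log(0.998) as a log return.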
Doing it yourself

There is not very much functionality available in R for Value at Risk and Expected Shortfall, probably because it is extremely easy to do whatever you want yourself.

Warning: none of the functions given below have been tested. There is a reasonably high probability of bugs. The functions are placed in the public domain — you are free to copy them and use them however you like.

Here is a definition of a simple function for historical estimation of Value at Risk:

VaRhistorical <- function(returnVector, prob=.05,
    notional=1, digits=2)
{
    if(prob > .5) prob <- 1 - prob
    ans <- -quantile(returnVector, prob) * notional
    signif(ans, digits=digits)
}

This is used for a 13 million dollar portfolio like:

> VaRhistorical(spxret11, notional=13e6)

The expected shortfall is barely more complicated:

EShistorical <- function(returnVector, prob=.05,
    notional=1, digits=2)
{
    if(prob > .5) prob <- 1 - prob
    v <- quantile(returnVector, prob)
    ans <- -mean(returnVector[returnVector <= v]) * notional
    signif(ans, digits=digits)
}

This can be used like:

> EShistorical(spxret11, notional=13e6)
[1] 470000

So the Value at Risk is $330,000 and the Expected Shortfall is $470,000.

normal distribution

There's a better (in a statistical sense) version later, but here is a simple approach to getting Value at Risk assuming a normal distribution:

VaRnormalEqwt <- function(returnVector, prob=.05,
    notional=1, expected.return=mean(returnVector),
    digits=2)
{
    if(prob > .5) prob <- 1 - prob
    ans <- -qnorm(prob, mean=expected.return,
        sd=sd(returnVector)) * notional
    signif(ans, digits=digits)
}

This is used like:

> VaRnormalEqwt(spxret11, notional=13e6)
[1] 310000
> VaRnormalEqwt(spxret11, notional=13e6,
+     expected.return=0)
[1] 310000

Computing the Expected Shortfall in this case is slightly complicated because we need to find the expected value of the tail. Numerical integration works fine for this.
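An aside not in the original post: for the normal distribution the tail expectation also has a closed form, ES = notional * (sigma * phi(z_p) / p - mu), where z_p is the p-quantile of the standard normal and phi is its density. That gives a cross-check on the numerical integration. A standard-library Python sketch:

```python
import math

# Closed-form normal Expected Shortfall (a cross-check on the numerical
# integration approach; not code from the original post):
#   ES = notional * (sigma * phi(z_p) / p - mu),  z_p = Phi^{-1}(p)
def es_normal(mu, sigma, prob=0.05, notional=1.0):
    # Invert the standard normal CDF by bisection (stdlib only).
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        cdf = 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0)))
        if cdf < prob:
            lo = mid
        else:
            hi = mid
    z = (lo + hi) / 2.0
    phi = math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)
    return notional * (sigma * phi / prob - mu)
```

For a standard normal at prob = 0.05 this gives roughly 2.063, i.e. the average standardized loss beyond the 5% quantile; scaling sigma scales the result linearly when mu = 0.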
ESnormalEqwt <- function(returnVector, prob=.05,
    notional=1, expected.return=mean(returnVector),
    digits=2)
{
    if(prob > .5) prob <- 1 - prob
    retsd <- sd(returnVector)
    v <- qnorm(prob, mean=expected.return, sd=retsd)
    tailExp <- integrate(function(x) x * dnorm(x,
        mean=expected.return, sd=retsd),
        -Inf, v)$value / prob
    ans <- -tailExp * notional
    signif(ans, digits=digits)
}

The result for our example with this is:

> ESnormalEqwt(spxret11, notional=13e6)
[1] 390000

A much better approach that is still quite simple is to use exponential smoothing to get the volatility (as the original RiskMetrics did):

VaRnormalExpsmo <- function(returnVector, prob=.05,
    notional=1, expected.return=mean(returnVector),
    lambda=.97, digits=2)
{
    if(prob > .5) prob <- 1 - prob
    retsd <- sqrt(tail(pp.exponential.smooth(
        returnVector^2, lambda), 1))
    ans <- -qnorm(prob, mean=expected.return,
        sd=retsd) * notional
    signif(ans, digits=digits)
}

where pp.exponential.smooth is taken from "Exponential decay models".

> VaRnormalExpsmo(spxret11, notional=13e6)
[1] 340000

t distribution

The tricky bit with the t distribution is remembering that it doesn't have 1 as its standard deviation:

VaRtExpsmo <- function(returnVector, prob=.05,
    notional=1, lambda=.97, df=7, digits=2)
{
    if(prob > .5) prob <- 1 - prob
    retsd <- sqrt(tail(pp.exponential.smooth(
        returnVector^2, lambda), 1))
    ans <- -qt(prob, df=df) * retsd *
        sqrt((df - 2)/df) * notional
    signif(ans, digits=digits)
}

The result of this one is:

> VaRtExpsmo(spxret11, notional=13e6)

There are several choices for garch estimation in R.

extreme value theory

There are also several choices of packages for extreme value theory. See, for instance, the Finance task view.

What have I missed in the R world? Are there any bugs in my functions?
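For readers who want to sanity-check the historical VaR and ES logic above without R, here is a standard-library Python version. It is illustrative only: it uses R's default (type-7) quantile interpolation and omits the signif rounding:

```python
import math

# Python cross-check of the historical VaR/ES logic above (illustrative;
# mirrors R's default type-7 quantile interpolation, no rounding).
def _quantile_type7(xs_sorted, prob):
    k = prob * (len(xs_sorted) - 1)
    lo = int(math.floor(k))
    hi = min(lo + 1, len(xs_sorted) - 1)
    return xs_sorted[lo] + (k - lo) * (xs_sorted[hi] - xs_sorted[lo])

def var_historical(returns, prob=0.05, notional=1.0):
    q = _quantile_type7(sorted(returns), prob)
    return -q * notional

def es_historical(returns, prob=0.05, notional=1.0):
    q = _quantile_type7(sorted(returns), prob)
    tail = [r for r in returns if r <= q]     # losses at or beyond the quantile
    return -sum(tail) / len(tail) * notional

# Toy symmetric return series: -0.050, -0.049, ..., 0.050
rets = [i / 1000 for i in range(-50, 51)]
var5 = var_historical(rets)   # about 0.045
es5 = es_historical(rets)     # about 0.0475
```

As expected, the expected shortfall (the average loss in the tail) exceeds the VaR (the edge of the tail).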
{"url":"http://www.r-bloggers.com/the-estimation-of-value-at-risk-and-expected-shortfall/","timestamp":"2014-04-17T13:12:46Z","content_type":null,"content_length":"44329","record_id":"<urn:uuid:ae3ecdec-9a64-4094-8993-8753d3875233>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
Totowa Trigonometry Tutor

Find a Totowa Trigonometry Tutor

...You will gain the knowledge and confidence you need to succeed! I have been tutoring and teaching since I was in high school myself because I love doing it! During and after earning a Master of Science in mathematics, I spent 8 years teaching at the post-secondary level in universities and community colleges.
10 Subjects: including trigonometry, calculus, statistics, geometry

...I have been playing guitar for over 14 years and I compose and perform regularly. I have taught private guitar lessons to students of all ages. When I'm not thinking about music or math I love reading Philosophy and playing Halo 4. Algebra becomes second nature when you start doing upper-level mathematics.
22 Subjects: including trigonometry, calculus, elementary math, precalculus

...The ACT Math section requires speed and endurance because of its length. Test taking skills for the ACT Math are different from the SAT Math. The ACT Math section requires some pre-calculus (albeit at a basic level) that I can teach.
17 Subjects: including trigonometry, calculus, physics, statistics

...I took the test July 27th 2013. I have been accepted to medical school and am matriculating in August, though I currently tutor full time. I am excited at the idea of helping other students to overcome the stressful burden of the MCAT.
24 Subjects: including trigonometry, chemistry, physics, geometry

My business, management, and engineering experience spans over 20 years. I am a Renewable Energy Consultant (Solar: PV, Thermal), and hold a bachelor's degree in electrical engineering and a master's degree in manufacturing engineering from New York University - Polytechnic Institute. I am a New York State Licensed Electrical Technology Instructor.
21 Subjects: including trigonometry, reading, statistics, English
{"url":"http://www.purplemath.com/Totowa_Trigonometry_tutors.php","timestamp":"2014-04-16T10:15:02Z","content_type":null,"content_length":"24102","record_id":"<urn:uuid:1e4e5b9a-e9dc-487f-9948-6321d0aa3c07>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
Webmath is a math-help web site that generates answers to specific math questions and problems, as entered by a user, at any particular moment. The math answers are generated and displayed real-time, at the moment a web user types in their math problem and clicks "solve." In addition to the answers, Webmath also shows the student how to arrive at the answer. Get help with General Math, K-8 Math, Algebra, Plots and Geometry, Trig and Calculus and other Stuff!

Brightstorm Math brings math help to you in clear, well taught, free videos whenever you need them. Choose from Algebra, Geometry, Algebra 2, Trigonometry, Precalculus and Calculus. This site has over 2000 online videos taught step by step with clear explanations. Choose a topic and what you need help with within that topic. Watch a teacher teach the concept while she/he explains their thinking. If you need more help, there are three follow-up problems. This is a must visit site!

Exploring Math Through Space - Texas Instruments announced that it has teamed up with the National Aeronautics and Space Administration (NASA) to develop free online math content for teaching Algebra I through AP Calculus. The "Exploring Space Through Math" curriculum is aligned with standards from the National Council of Teachers of Mathematics. Portions of the curriculum are available on NASA's web site now, and more content will be posted as it is created. The materials aim to use students' interest in space exploration as a "hook" to get them interested in math. Students will be able to design a space capsule, control the launch of a shuttle, and more - learning key math concepts along the way.

Illuminations by the National Council of Teachers of Mathematics (NCTM) is a must visit site!
Explore their library of 104 online activities that help to make math come alive in the classroom or at home. View a collection of 545 lessons for preK-12 math educators. Check out their approved web links in:

Figure This Math Challenges for Families (English and Spanish) has 80 real world (middle school and up) reasons to use math, along with a step by step guide on how to solve the problems. Included are both a teacher and family corner for using the resources on the website. Among the options are ideas on how parents can help their children at home.

Interactivate is a set of free, online courseware for exploration in science and mathematics for grades 3-12. Choose from activities, lessons, and discussions. Learners can choose from 157 Interactive Activities, a Dictionary and 48 visual Tools. Instructors can pick from 108 Lessons and 153 Discussions that model teaching a concept by using a dialogue between a mentor and a student.

Thinking Blocks can help word problems come to life. Teachers and students can watch a video and hear an explanation of how to use visual components to solve a word problem. A math tutor uses blocks to visualize math stories, refers back to the story problem and then solves the word problem with a number operation. Choose from Addition and Subtraction; Multiplication and Division; Ratio and Proportion. (It may be useful for some students to have a calculator available as they watch the word problem being solved or use the independent practice option.)

Math Apprentice - The path to anything you want to be! The goal of this project is to connect math with 16 real world careers. Middle school students play the role of an intern at one of eight companies in a growing metropolis. Students are greeted by an employee of the company who then explains the math behind the job. They may then choose to solve specific problems or explore math concepts on their own.
A Math Dictionary for Kids by Jenny Eather is a fabulous, interactive dictionary for students with over 500 mathematical terms in simple language. I have used this site for explaining math terms in kid friendly language, for word walls, as a learning center, and taped in student math books or on desks, to aid memory. This is a must visit site.

Math is Fun by Rod Pierce is a fabulous site I use when I need a refresher on a math topic or when researching and adapting material for students. This site explains math concepts from K to HS in easy terms. In many cases, he explains the concepts and terms multiple ways.

Web Math is another must visit site that clearly explains math. Topics include General Math, K-8 Math, Algebra, Plots and Geometry, Trig and Calculus. This site has interactive learning sections. One section is on comparing fractions. Input your fractions and the site converts the fractions to shaded circles and explains why one is larger than the other.

Rain Forest Math Ideas by Jenny Eather provides wonderful math practice from grades K through 6. She includes number systems, operations and calculations, strategies and processes, patterns and algebra, measurement, space and geometry, chance and probability, data analysis and money. Since this site is from Australia, the only drawback is the use of the metric system in measurement.

Teacher2Teacher (T2T) is a question-and-answer service supporting the needs of the mathematics teaching community. T2T is staffed by a panel of teaching professionals, called Teacher2Teacher Associates, who answer questions received by the service. You can also subscribe to a free newsletter.

math2.org - Of particular interest on this site is the English/Spanish Math Reference Tables, from Addition to Calculus.
The Math Page (this site is primarily for high school and college) by Lawrence Spector from the Borough of Manhattan Community College offers clear explanations on seven areas including: Skill in Arithmetic, Topics in Pre-Calculus, The Evolution of Real Numbers. Primarily for H.S. and up.

The Teacher Place has a self guided tour to help you explore all the resources at this site for Pre-K through College level teachers.

Marcia Lesson Links from Vacaville, CA are math game ideas.

TrackStar for Teachers from the UK has some of the most amazing teacher resources and student games in Early Childhood, K-2, 3-4, 5-9, 9-12, and College Age! Tracks can include lesson plans, video lessons, and/or game links. Teachers submit tracks on topics in certain grade ranges. You can sort tracks by "top tracks" (there are 273 top tracks), by topic and grade level. Check out the tutorial for more in-depth information.

Favorite tracks from TrackStar for Teachers - Click on favorite tracks, then type in the track number next to the topic you are interested in.
• Place Value: Track number 296765. You will get 15 lessons on Place Value by Anne Walner. Make sure you check out lesson 8, a video explaining place value.
• Telling time: Track number 296772. You will get 15 lessons on telling time by Anne Walner. The kids will love lesson 12, telling time with a dragon.

*Teaching Time is a free download or for-purchase CD for teaching time. Student games are uncluttered and easy to use.
Matching Analog to Digital Clock Game
Practice your facts and explore math sites
Math Baseball - practice addition and more
Math Magician games
Sites for Multiplication Tricks and Ideas:

Useful Sites with Advertising and/or Pop Ups (for a fee):

*AAA Math is a free on-the-web math site divided by subject or grade level for grades K through 8. It provides unlimited practice and feedback to students; however, there are pop-up ads to deal with. This site also offers practice for a fee on a CD without ads.
*Print and Learn for Kids has a wide variety of free K - grade 3 math reference charts and printable worksheets.

For a Fee:

*Cosmeo by the Discovery Channel allows you to access multiple math textbooks and get help with specific examples on specific pages.

*These websites are provided for information only. The Sherlock Center does not endorse these websites and is not responsible for any materials/subscriptions purchased.
{"url":"http://ric.edu/sherlockcenter/mathres.html","timestamp":"2014-04-16T13:06:46Z","content_type":null,"content_length":"21855","record_id":"<urn:uuid:bc067690-52db-4d59-9c52-a6ee95cae37b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Interpreting results of model with dichotomous outcome

jkirby posted on Friday, July 06, 2001 - 8:55 am

I am estimating a model with several observed dichotomous outcomes. To simplify, let's say that I have one exogenous variable (X), and two endogenous variables (Y1 and Y2), all of which are dichotomous. X affects both Y1 and Y2, and Y1 affects Y2. I would like to be able to say something about the extent to which X affects the probability of Y2 (both directly and indirectly through Y1), rather than limiting my discussion to how the underlying latent variables are related. Reviewers have requested that I go beyond the sign and direction of coefficients in my interpretation --- a reasonable request --- I am just not sure how to do it. Any help on calculating/interpreting predicted probabilities or using some other approach to interpretation would be greatly appreciated.

bmuthen posted on Sunday, July 08, 2001 - 10:53 am

You can study how x affects y2 probabilities directly and indirectly as follows. Assume

y*_1 = g_1*x + e_1,
y*_2 = b*y*_1 + g_2*x + e_2,

where y* denotes the underlying latent response variable and b and g are regression coefficients. It follows that

y*_2 = b*g_1*x + g_2*x + b*e_1 + e_2.

Mplus assumes that V(y* | x) = 1, so that V(b*e_1 + e_2) = 1. Then

P(y_2 = 1 | x) = P(y*_2 > t_2 | x) = F(-t_2 + b*g_1*x + g_2*x),

where t_2 is the threshold for y_2 and F is the standard normal distribution function found in tables. The second term in the argument of F is the indirect effect and the third term is the direct effect. Using different values of x, the effects of x on y_2 = 1 probabilities via these two terms can be computed.

Anonymous posted on Monday, September 06, 2004 - 5:11 pm

I have two (unrelated) models from which I am trying to calculate the probability of a binary outcome (labeled u3 in Model 1 and u2 in Model 2) for given values of the other variables.
I have the Day 3 MPlus handouts, which are proving quite helpful for this, but I still have a few questions.

Model 1

MODEL:
f1 BY u1 u2;
u3 ON f1 x1 x2;
ANALYSIS: TYPE = general MISSING h1;

Model 2

MODEL:
y1 ON x1;
u1 ON y1;
u2 ON u1;

For Model 1, MPlus outputs a residual variance for the outcome of interest (as in page 19 of the handout). I was planning to plug this into the probability equation as shown on pages 21-22. However, the Model 2 output does not contain a residual variance. Is this 1, as you imply in your response above, or do I need to calculate it using other items from the output (and, if so, how?)

Finally, for Model 1, to compare the effects of f1, x1, and x2 on u3, are the Std or StdYX estimates most appropriate? Thanks from a novice!

Linda K. Muthen posted on Wednesday, September 29, 2004 - 4:26 pm

I would need to see your two outputs to understand why you are getting a residual variance in one and not the other. With categorical outcomes, residual variances are printed at the end of the results with r-square when you request a standardized solution. I think you would look at stdyx.

Anonymous posted on Tuesday, January 11, 2005 - 9:55 am

I have two unrelated questions. First, is there anywhere in the output that specifies whether a model was estimated using logit or probit. I thought I read in the user manual (or in the discussion) that random models using MLR are estimated as probit models. But in the output, I noticed that default logit thresholds were mentioned. Second, I estimated a logit model using Mplus and STATA. While the coefficients on the independent variables were all virtually identical, the intercepts were quite different. Does Mplus calculate intercepts differently?

Linda K. Muthen posted on Wednesday, January 12, 2005 - 4:52 pm

Weighted least squares estimation is done using probit regression. Maximum likelihood estimation including MLR is done using logistic regression.
Mplus uses thresholds instead of intercepts which should be the negative of the intercepts. You may be comparing probit and logit given your misunderstanding in paragraph one.

Peggy Tonkin posted on Tuesday, February 01, 2005 - 6:05 am

I am modeling continuous mediators with a categorical outcome. I asked for the IND effects for each mediator on the outcome and get the specific indirect and sum of indirects. Can I add these to the direct effect to get the total effect of each mediator on the outcome?
Peggy Tonkin

Linda K. Muthen posted on Tuesday, February 01, 2005 - 7:12 pm

See MODEL INDIRECT in the Mplus User's Guide for a full description of IND. Say y IND x1 not y IND x2 x1 to get all possible indirect effects and a total effect.

peggy tonkin posted on Wednesday, February 02, 2005 - 8:00 am

Thank You.
Peggy Tonkin

Anonymous posted on Wednesday, June 01, 2005 - 1:11 pm

I am working on a path analysis with categorical dependent variables using MPLUS (including indirect effect), but I don't know how to interpret the coefficients of the direct and indirect effects. I have seen your answer regarding this, but still feel a bit confused by the formula you gave. Could you please give a concrete example? For instance, how to interpret the following probit regression (direct and indirect effects):

CATEGORICAL IS y x1
MODEL:
y ON x1 x2 x3
x1 ON x2 x4
y x1 x2

The result:

             Estimates     S.E.  Est./S.E.
 y ON
    x1          -0.463    0.107     -4.328
    x2          -0.295    0.161     -1.838
    x3           0.063    0.383      0.165
 x1 ON
    x2           0.309    0.161      1.913
    x4           0.004    0.067      0.062

 Effects from x2 to y
    Sum of indirect
                -0.143    0.084     -1.705
    Specific indirect
    x2          -0.143    0.084     -1.705

bmuthen posted on Wednesday, June 01, 2005 - 6:09 pm

I think you are in the WLSMV - probit framework where you can think in terms of continuous latent response variables underlying the categorical outcomes. So for y on x1 x2 x3; x1 on x2 x4; where y and x1 are categorical, the continuous latent response variables can be called x1* and y*.
The indirect effect of say x2 on y via x1 is therefore viewed as an indirect effect of x2 on y* via x1* and is obtained as the product of the coefficients for x1* regressed on x2 and y* on x1* (which are the coefficients printed in the regular output), and this product is interpreted exactly the way you would interpret it had x1* and y* been observed (continuous) variables. And as you say, more has been said in earlier posts.

Anonymous posted on Wednesday, June 01, 2005 - 7:20 pm

Thanks for answering my question above. Three quick questions:

1. In MPLUS's probit regression, is threshold the constant term in STATA's probit regression (the sign of MPLUS's threshold is opposite to the sign of STATA's constant term)?

2. To get the threshold value, I add "TYPE=MEANSTRUCTURE" in ANALYSIS. In the results:

y1$1    -2.524    1.488    -1.696
y2$1     2.341    1.608     1.456

Does -2.524 refer to the threshold in the equation where y1 is the dependent variable?

3. BTW, how does MPLUS obtain standard errors for indirect effect in path analysis, when dependent variables are categorical?

Anonymous posted on Wednesday, June 01, 2005 - 7:59 pm

Sorry, one more question: In the answer regarding interpreting coefficients you gave in 2001 (first message in this section), it seems you were addressing the case when there are two endogenous variables (y*_1 and y*_2) and one exogenous variable (x). If I have more exogenous variables, when I interpret the coefficient of one particular variable, do I need to take the mean value of other exogenous variables? Or, can I disregard the value of other exogenous variables and only use the formula you gave, which is P(y_2 = 1 | x) = P(y*_2 > t_2 | x) = F(-t_2 + b*g_1*x + g_2*x)? I guess I need to control the value of other variables, but I want to make sure. Thanks!

bmuthen posted on Thursday, June 02, 2005 - 9:12 am

Answers to your questions:

1. Yes.

2. Yes.

3. Two ways: Delta method and bootstrap (see User's Guide).
The Delta method considers the product of slope estimates; the principle is the same as with continuous outcomes.

4. You need to use values for all your exogenous variables, since each slope refers to a partial effect just like in standard regression analysis.

Anonymous posted on Friday, July 22, 2005 - 12:06 pm

Hi, I have two questions:

1. What's the major difference between a probit model estimated by WLSMV in MPLUS and a probit model estimated by ML?

2. In SEM, when the dependent variables are ordered categorical, you said MPLUS takes them as continuous latent variables by using WLSMV estimation. Is the "latent variable" here the same as "latent variable" in factor analysis? I ask this because my impression is that a latent variable is often based on several observed variables. But when you take ONE categorical variable as a latent variable, there is only one observed variable -- are you saying that in this case, a latent variable is actually based on observed categories in the observed categorical variable?

bmuthen posted on Friday, July 22, 2005 - 12:22 pm

1. The results of those two estimators would probably be very similar (we plan to have probit ML in Mplus in the future).

2. Yes, the latent variable here is a continuous latent response variable underlying a single observed categorical variable. It is not a factor with multiple indicators but is specific to a certain observed variable. It can be thought of as what you really want to measure, whereas your measurement is a crude reflection of the response variable - the observed categories inform about which range (between neighboring thresholds) the response variable is in, but not its specific value. It is sometimes called a response propensity.

Anonymous posted on Saturday, August 20, 2005 - 8:17 am

Hi, I am estimating a SEM model by using probit estimation (WLSMV).
In one of the equations, the dependent variable (Y1) is binary, and in this equation the coefficient of a continuous independent variable (X1) is .23 (p < .001). I have two questions:
1. Can I interpret the coefficient as: for a one-unit increase in X1, the latent continuous variable underlying Y1 increases by .23?
2. How can I interpret the coefficient in terms of probability? Readers are used to seeing the effects of independent variables on a dichotomous dependent variable interpreted in terms of probability change. Since this is a probit model by WLSMV, not a logit one by ML, I am not sure how to get this alternative interpretation.

Linda K. Muthen posted on Saturday, August 20, 2005 - 9:33 am
1. Yes.
2. The probability you ask for is computed as

P = 1 - probability((threshold - z)/sqrt(theta)),

where probability(.) is the standard normal distribution function,
threshold = the threshold of the dichotomous event,
theta = the residual variance for y* of the dichotomous event, obtained from the standardized solution,
and, for example, z = a*eta1 + b*eta2 + c*x, where a, b, and c are the estimated regression coefficients of y* for the dichotomous event regressed on two factors and one x.

P is the conditional probability of the event given those factor values and x value. To compute P you choose values of eta1, eta2, and x that you are interested in and evaluate z for those values. You then use a normal probability table to obtain probability((threshold - z)/sqrt(theta)), from which you obtain the desired P.

Anonymous posted on Saturday, August 20, 2005 - 2:44 pm
Thanks a lot for your response. I can only find the threshold of the dichotomous event, but don't know how to find theta and eta1/eta2.
1. How do I obtain the residual variance for y* of the dichotomous event from the standardized solution?
2. What do you mean by "standardized solution" above?
3. What are eta1 and eta2? Or, could you please give me a real example?

bmuthen posted on Sunday, August 21, 2005 - 1:41 pm
1.
Theta is the residual variance which is found in the output next to the R-square values (at least if you request a standardized solution). 2. If you type "Standardized" in the OUTPUT command you get slopes standardized to unit variance. 3. In this example, eta1 and eta2 are factors used to illustrate the case where you have not only x's but also factors that influence the categorical outcome. If you don't have factors, then you drop that part. Peter Martin posted on Friday, October 14, 2005 - 3:34 am Referring to the discussion of the last few postings (Aug 2005): How can I calculate the predicted probabilities of a probit when I am doing a path model using multiple imputation? When type= imputation, standardized output is not available, so it seems I don't get the residual variance of y*. I notice that I do get a matrix of thetas in the TECH1 output; will that contain the resid var of y*, though (i.e. are these the same thetas, or is there a homonym here)? Anyway, in my output the theta matrix contains only zeros (so they are unusable for the probability calculation). If I may make a suggestion: It would be quite nice to have estimated probabilities as an output option in MPlus - similar to the postestimation programs people like Gary King or Scott Long have written for STATA. But maybe that's asking too much? Mplus is a brilliant programme as it is, of course. bmuthen posted on Friday, October 14, 2005 - 10:12 am To get the y* residual variance you have to use the parameter estimates printed (which have been averaged over the imputed data sets) and the formulas of Appendix 2 of the Version 2 Tech Appendix on the web site - see especially formula (43). We have it on our list to add more output features for imputed runs and also the estimated probabilities for individuals. Diana Clarke posted on Wednesday, March 29, 2006 - 9:06 am Hi Bengt, 1. 
In a SEM model with a categorical main independent variable (5-level nominal, so 4 dummy variables created), categorical/ordinal endogenous variables (mediators), and a binary outcome, would one report the standardized or the unstandardized coefficients? Can you clearly explain the pros and cons of each?
2. What are the benefits of calculating and reporting the probabilities?

Bengt O. Muthen posted on Wednesday, March 29, 2006 - 6:49 pm
Standardization is beta*SD(x)/SD(y).
1. I would not standardize wrt x here, only wrt y. Standardization wrt x is only suitable when x is continuous - it does not make sense to talk about a standard deviation (SD) change for a categorical variable (you want to consider changing categories). Standardization wrt categorical mediators or ultimate outcomes may or may not be done. I personally feel that the rush to standardization is often not necessary - I like raw coefficients. Certainly, in logistic regression one typically does not standardize. But it is possible to do so, taking as the variance the variance of the latent response variable underlying the categorical variable.
2. I think reporting key estimated probabilities for categorical dependent variables is much better than standardizations. This clearly shows what the model implies.

Diana Clarke posted on Thursday, March 30, 2006 - 5:06 am
Hi Bengt, Thanks for the response above (March 29, 2006 - 9:06 am). I have a few follow-up questions related to the calculation of the probabilities using the scenario below. In a model:
y1 on d2 d3 d4 x1 x2 x3;
y2 on y1 d2 d3 d4 x1 x3;
y3 on y1 y2 d2 d3 d4 x1 x2;
y4 on y1 y3 d2 d3 d4 x1 x2 x3;
where y1-y3 are 4-level ordinal variables, y4 is binary, x1 and x2 are dichotomous variables, and d2-d4 are dummy variables that represent my main independent variable, with d1 the referent category left out.
1. How would I calculate the probability of y4=1 for different categories of my main independent variable (i.e.
the probability of an event (y4=1) for d2=1 compared to d4=1)?
2. Would I have to do this at each threshold value for each endogenous variable in the model (i.e., 3 threshold values for each)?
3. With respect to the continuous exogenous variable, is it sufficient to just include the group mean value for the variable?

Bengt O. Muthen posted on Thursday, March 30, 2006 - 3:43 pm
I assume you use the WLSMV estimator (so probit and u* variables used for mediation), and not ML (logit and u variables used for mediation). Then it is simple:
1. You would express y4* in terms of the "reduced form", that is, in terms of the x variables d2, d3, d4, x1, x2, x3 (just like you would in a regular mediational path model for continuous outcomes). Then you are looking at a regular probit regression, for which our V4 UG chapter 13 gives probability formulas.
2. No, because your y1-y3 are ordinal variables which each have only 1 slope, and therefore the category does not have an influence; this makes it simple.
3. Yes.

Diana Clarke posted on Friday, March 31, 2006 - 6:14 am
Can you supply the formulas? I have V3 of the UG, which only supplies the probability formulas for the logistic coefficients.

Linda K. Muthen posted on Friday, March 31, 2006 - 7:59 am
The Version 4 Mplus User's Guide is available on the website as a pdf.

Diana Clarke posted on Thursday, April 06, 2006 - 12:56 pm
Hi Bengt, Thanks for your response above. However, if my model contains a correlation between two of my mediating variables, how is this taken into account in calculating the probability? That is, the complete model is:
y1 on d2 d3 d4 x1 x2 x3;
y2 on y1 d2 d3 d4 x1 x3;
y3 on y1 d2 d3 d4 x1 x2;
y4 on y1 y3 d2 d3 d4 x1 x2 x3;
y2 with y3;
and I am trying to calculate the effect of d2 on y4.

Bengt O.
Muthen posted on Thursday, April 06, 2006 - 5:59 pm
Having that correlation in the model changes the parameter estimates and therefore the indirect effects, but not the procedure for calculating the probabilities (as a function of indirect and direct effects). For instance, if you have
x --> y1 --> z with slopes a1 and b1,
x --> y2 --> z with slopes a2 and b2,
then z expressed as a function of x is
E(z | x) = (b1*a1 + b2*a2)*x,
irrespective of y1 and y2 having correlated residuals.

Diana Clarke posted on Wednesday, April 26, 2006 - 10:39 pm
Hi Bengt, Since one can obtain beta coefficients for the specific indirect paths with MPLUS once your model is recursive, I am assuming that one could calculate the probability of X on Y through a specific indirect path (as opposed to the overall indirect path). Is this correct?

Linda K. Muthen posted on Thursday, April 27, 2006 - 8:52 am
Yes, you can do this using the VIA option of MODEL INDIRECT.

Hossein Azadi posted on Sunday, October 08, 2006 - 9:03 pm
How can I calculate direct and indirect effects in path analysis with SPSS?

Linda K. Muthen posted on Monday, October 09, 2006 - 8:32 am
I don't know if SPSS has an automatic way to calculate indirect effects. You would need to contact their technical support.

Antonio A. Morgan-Lopez posted on Monday, October 09, 2006 - 11:38 am
Kris Preacher (@ U. Kansas) has some nice SPSS macros (and corresponding papers) to calculate indirect effects in single mediator models, multiple mediator models and med mod/mod med models @ http://

Hossein Azadi posted on Friday, October 13, 2006 - 2:33 am
So, would you please kindly introduce me to an appropriate package for Path Analysis?

Linda K. Muthen posted on Friday, October 13, 2006 - 9:10 am
Mplus can estimate a path model and provide indirect effects.

Hossein Azadi posted on Monday, October 16, 2006 - 10:37 pm
Thanks a lot. One more question: is there any free downloadable version (such as a student version) of Mplus available on the web?
If so, would you please kindly put its web address on the board?

Linda K. Muthen posted on Tuesday, October 17, 2006 - 7:12 am
Yes, we have a free demo which is exactly the same as the regular version except for a limitation on the number of variables that can be analyzed. The full user's guide is also on the website. You can access both via www.statmodel.com.

Hossein Azadi posted on Wednesday, October 18, 2006 - 2:26 am
Many thanks, Hossein Azadi

Nadia micali posted on Monday, September 17, 2007 - 3:52 am
Sorry for the silly question, but doing a logistic regression in Mplus (using ON) I get an estimate of 4.6. How should this be interpreted (i.e., what is it)? You also get an odds ratio, so it is not an odds ratio. How do you use it when tracing a path?

Linda K. Muthen posted on Tuesday, September 18, 2007 - 4:39 am
It is a logit, that is, a log odds. If you are asking about an indirect effect, you can use the probit link, and then the indirect effect is the product of the two regression coefficients in the indirect effect.

Magda Mónica Martins Rocha posted on Wednesday, December 19, 2007 - 2:03 am
I'm trying to understand the results I have from a confirmatory factor analysis where all six indicators are binary. One of the thresholds is -1.363. Is that possible, and what does it mean? Thank you, Magda Rocha

Linda K. Muthen posted on Wednesday, December 19, 2007 - 9:05 am
Assuming you are using WLSMV, the threshold is a z-score indicating a probability of greater than .5.

Hossein posted on Thursday, March 12, 2009 - 5:22 am
I have a dependent variable (Y) which is nominal with three levels (degradation, constant, improvement), and several independent variables (Xs) which are interval. Can I estimate any regression here? If so, what kind? And is it possible with SPSS?

Linda K. Muthen posted on Thursday, March 12, 2009 - 7:25 am
You can specify a multinomial logistic regression in Mplus. You will have to check with SPSS to see if they do multinomial logistic regression.
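As a rough illustration of the multinomial logistic regression recommended above for a 3-level nominal outcome (this is not Mplus output; the coefficients below are made up for the sketch), each non-reference category gets its own linear score, and the scores are turned into probabilities via the softmax transform:

```python
from math import exp

def multinomial_probs(x, coefs):
    """P(Y = k | x) for a 3-category nominal outcome.

    coefs maps each non-reference category to (intercept, slope);
    the first category is the reference, with its score fixed at 0.
    """
    scores = [0.0] + [a + b * x for (a, b) in coefs]
    denom = sum(exp(s) for s in scores)
    return [exp(s) / denom for s in scores]

# Hypothetical coefficients for categories 2 and 3 vs. the reference category 1:
probs = multinomial_probs(x=1.0, coefs=[(0.2, 0.5), (-0.4, 1.1)])
print(probs)  # three probabilities that sum to 1
```

The point of the sketch is only the interpretation: each slope shifts the log-odds of its category relative to the reference, and the predicted probabilities always sum to one.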
Thomas Schlösser posted on Wednesday, March 18, 2009 - 4:28 am
One nominal independent X (3 conditions). Six mediating nominal variables M1-M6, each with a different number of groups (each mediating variable describes membership in different clusters in one of six measures). One binary dependent variable Y (a decision of the subjects). The idea is to show that the change in behavior Y (decision) effected by a different treatment (X: condition) is mediated by some of the changes (belonging to one cluster and not another) within the six measures. I also used Hayes' (beta) indirect script for binary outcomes but then tried to do this with Mplus. The problem is obviously that in both approaches the mediators have to be defined as categorical. But doing this, mediation depends on the ordering of the nominal variables, of course. Is it possible to force Mplus to do a multinomial regression M on X and Y on M? Doing this manually with SPSS shows that multinomial regression for some of the mediators brings significant dependencies in both directions. With the bootstrapping procedure I am forced to set variables to categorical. Using Monte Carlo I cannot build IND or VIA effects. Grouping of course only works with one mediating variable, which hinders showing specific indirect effects. Do you have an answer for me? It would be so great; it's my dissertation. Thank you, Thomas

Bengt O. Muthen posted on Friday, March 20, 2009 - 12:16 pm
So it sounds like you want a multinomial logistic regression of M on X and a binary logistic regression of Y on M. The latter of course needs to be interpreted as Y probabilities shifting as a function of the nominal M categories. The way I can see this done (using ML) is to represent M by a latent class variable c, making M and c the same by using the M intercepts to connect its categories with those of c. Y probabilities (thresholds) would then shift as a function of the c classes.
I don't know how one would think of indirect effects in this context, however.

TS posted on Saturday, March 21, 2009 - 1:42 am
Thank you for your answer. I will try it this way. I have two further questions:
1. Is it a problem that I have six latent variables, each with two, three or four nominal categories, at the end?
2. (Regarding your last sentence.) Do you mean there is no way to think about indirect effects, or is it a technical problem? There is a strong direct effect of changes in X changing the probabilities of being in one or the other state of Y. But the mediators may carry some of the changes, meaning a specific pattern within the six mediators may sig. change the probability of changing state in Y. Some of the mediators can be interpreted as categorical, and it can be shown that such significant specific indirect effects exist for them. So I wonder if the sig. nominal connection of M and X and the sig. nominal connection of Y on M carry such an effect. Thank you again, Thomas

Bengt O. Muthen posted on Saturday, March 21, 2009 - 10:40 am
1. No.
2. I mean that an indirect effect is not simply the product of 2 slopes in this case. As you say, the mediator class probability is influenced by x and the mediator class influences the mean of the y, so there is certainly an indirect effect of x, but a more complex one.

Mary E. Mackesy-Amiti posted on Friday, November 19, 2010 - 1:17 pm
In interpreting the results of a path analysis with a dichotomous outcome using WLSMV - what is the difference between this probability [posted on Saturday, August 20, 2005 - 9:33 am]:
P = 1 - probability((threshold - z)/sqrt(theta)),
and this [posted on Sunday, July 08, 2001 - 10:53 am]:
P(y_2 = 1 | x) = P(y*_2 > t_2 | x) = F(-t_2 + b*g_1*x + g_2*x)?
Thank you.

Bengt O. Muthen posted on Saturday, November 20, 2010 - 7:55 am
They are the same. This has to do with the symmetry property F(v) = 1 - F(-v). See for instance intro stat books for the case where F is the normal distribution function (Phi).
So for your two versions, threshold = t_2 and z = the g*x expression. The only difference is that in the second expression it is assumed that the residual variance (theta) is 1. cathy labrish posted on Sunday, August 26, 2012 - 6:51 pm Quick question re how to interpret coefficients from a regression of a continuous latent on an observed binary (and an observed ordinal). In the case of a binary outcome, what is the reference category 0 or 1 (eg. if my equation is y=.345eta1 then do I interpret this as for every one unit increase in eta1 my log odds of y being 0 increase by .345 or do I interpret it as my log odds of y being 1 increases by .345). Similarly, for an ordinal outcome, do I interpret y as the likelihood of being in the next lower category or the likelihood of being in the next higher category. Linda K. Muthen posted on Tuesday, August 28, 2012 - 11:15 am The regression of a continuous latent variable is a linear regression. For a binary item, zero is the reference category. See Chapter 14 of the user's guide for interpretation. Todd Hartman posted on Saturday, January 05, 2013 - 9:34 am Could someone clarify the interpretation of indirect effects using predicted probabilities in a simple mediation model with a binary dependent variable? (I've carefully read through multiple threads, the user guides, and scoured the Internet--a concrete example would help me fix ideas.) The path model is X --> M --> Y, where X is a binary treatment variable, M is a continuous mediating variable, and Y is a binary outcome. Below are the unstandardized coeff. 
(using WLSMV):
M ON X: a = .089 (.035)
Y ON M: b = 1.99 (.17)
Y ON X: c = .054 (.142)
Intercepts (for model with M): .673 (.021)
Thresholds (Y$1): 1.355 (.139)
Indirect Effect (X to Y): .176 (.071)

Using the formula provided in the user guides and this thread to calculate predicted probabilities:
P(Y=1|X) = F(-t + a*b*X + c*X)
So, when X = 0: P(Y=1|X) = F(-1.355 + .177(0) + .054(0)) = .087
And, when X = 1: P(Y=1|X) = F(-1.355 + .177(1) + .054(1)) = .130
1) Do these calculations look correct? My concern is that these predicted probabilities seem pretty low when I look at the raw data. For instance, the mean value of Y when X is 0 is .49, and it is .59 when X equals 1. Am I missing an intercept or something?

Bengt O. Muthen posted on Saturday, January 05, 2013 - 4:30 pm
Even when you condition on X there is some variation left in M, namely its residual. This means that to get the probability of Y you have to integrate out this residual, and the formula is more complex than what you have. You can see how this is done in the paper: Muthén, B. (2011). Applications of causally defined direct and indirect effects in mediation analysis using SEM in Mplus. Submitted for publication. It is on our web site under Papers, Mediational Modeling. The Tech Appendix goes through the formulas in Section 13.2. Note that you may be better off presenting causal effects as discussed in the paper - you will find Mplus scripts there.

Todd Hartman posted on Sunday, January 06, 2013 - 11:47 am
Thanks for clarifying. Your manuscript was very helpful, particularly the appendices showing the calculations for the causal indirect effects and the specific examples with accompanying Mplus scripts. I do have one more follow-up question.
I noticed for your aggressive behavior and intention to stop smoking continuous mediators (for Examples 1 and 2, respectively), you transform these variables: agg5 = (sctaa15s-2.400)/1.100; (On page 117) intent = (intent-1.456)/0.8854; (On page 120) Could you explain what you've done here (and why)? I'm just wondering whether I would need to use some sort of standardized mediator to obtain proper estimates. Bengt O. Muthen posted on Sunday, January 06, 2013 - 5:37 pm These standardizations are just done for easier interpretation. Subtracting the mean is typically done when interaction terms are considered. Todd Hartman posted on Monday, January 07, 2013 - 10:29 pm Ah, that makes sense. Thanks so much for your help--everything works beautifully now for a path model with a binary outcome. What about a path model with an ordinal outcome? Seems to be a common situation and my hope is to use Mplus exclusively rather than having to switch back and forth between different software to do all of these types of analyses (like Imai et al.'s 'mediation' package in R). Is there a straightforward way of modifying the formulas/scripts to calculate causal indirect effects for a particular category of an ordinal outcome? For instance, for a 4-category outcome variable (1,2,3,4), would it be possible to substitute 'mbeta2' for the ordinal outcome (from 'mbeta0' for the binary threshold) to get the indirect effect of moving from a value of 3 to 4? [y$1] (mbeta0); [y$2] (mbeta1); [y$3] (mbeta2); Bengt O. Muthen posted on Tuesday, January 08, 2013 - 2:37 pm I am glad you are moving ahead on it. Please send any paper you write on it - these techniques need to be more widely used. The effect formulas generalize directly to an ordered categorical (ordinal) and an unordered categorical (nominal) outcome. For a 0/1 binary outcome, the expected value for the outcome is the same as the probability of category 1. 
With an ordered categorical outcome, the expected value for the outcome is a sum over the non-zero categories, weighted by their probabilities. This, however, assumes a certain scoring for the categories. For example, an equidistant scoring such as 0, 1, 2,... may not be substantively motivated due to the difference between two adjacent categories representing a substantively larger difference than two other adjacent categories. As an alternative, the probability for each category can be considered, an approach that is also suitable for a nominal outcome.
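The two hand calculations in this thread — the Delta-method standard error for the product of slopes, and the reduced-form probit probability P(Y=1|X) = F(-t + (a*b + c)*X) — can be reproduced with standard-library Python. The numbers below are the WLSMV estimates from Todd Hartman's X -> M -> Y example earlier in the thread; note that, as Muthén points out there, this reduced-form formula ignores the mediator's residual variance, so it is only the naive calculation, not the causally defined effect:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF, Phi(x), via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# WLSMV estimates (standard errors) from the example in the thread
a, se_a = 0.089, 0.035   # M ON X
b, se_b = 1.99, 0.17     # Y* ON M
c = 0.054                # Y* ON X (direct effect)
tau = 1.355              # threshold Y$1

# Indirect effect and its Delta-method SE: sqrt(b^2 * Var(a) + a^2 * Var(b))
indirect = a * b
se_ind = sqrt(b**2 * se_a**2 + a**2 * se_b**2)

def p_y1(x):
    """Naive reduced-form probit probability, ignoring M's residual variance."""
    return norm_cdf(-tau + (indirect + c) * x)

print(indirect, se_ind)       # approx .177 and .071, matching the thread
print(p_y1(0), p_y1(1))       # approx .088 and .130
```

The Delta-method value reproduces the .071 standard error that the software reported for the indirect effect, and the two probabilities match the hand calculations in the thread to rounding.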
Math Forum Discussions

Proof that Twin-Primes is an infinite set; Polignac Conjecture proof #1548 Correcting Math
Posted: Feb 12, 2014 11:51 PM

I did these proofs several years back, if my memory is correct, after I discovered that the infinity borderline is 1*10^603, and the technique I used was a gauge-measure-rod technique.

Proof of the Infinitude of Twin Primes

When mathematics is honest about its definitions of finite versus infinite, it seeks a borderline between the two concepts; otherwise they are just one concept. From several proofs of regular polyhedra and of the tractrix versus circle area we find the borderline to be 1*10^603. That gives a measuring rod to use for all questions of whether sets are finite or infinite. The Naturals are infinite because there are exactly 1*10^603 of them inside of 1*10^603 (not counting 0). The algebraic closure of numbers is 1*10^1206, which forms a density measure.

So, are the twin primes finite or infinite? Well, are there 1*10^603 twin primes between 0 and 1*10^1206? The question is related to the minimal infinitude set of Perfect Squares. The Perfect Squares {1, 4, 9, 16, 25, ...} are the minimal infinite set. The Perfect Squares are a special set since they are "minimal infinite": there are exactly 1*10^603 of them from 0 to 1*10^1206.

How about Twin Primes? Well, what we do is make an induction count. There are 15 twin primes from 0 to 100. There are 14 more twin primes, or 29 twin primes, from 0 to 200. There are 8 more twin primes, or 37 twin primes, from 0 to 300. So the series progression for Twin Primes from 0 to 300 is that of: 15, 29, 37, ...

There are 10 Perfect Squares from 0 to 100. There are 4 more Perfect Squares, or 14 Perfect Squares, from 0 to 200. There are 3 more Perfect Squares, or 17 Perfect Squares, from 0 to 300. So the series progression for Perfect Squares from 0 to 300 is that of: 10, 14, 17, ...
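The tallies in the post can be checked with a short sketch; note that the post is counting individual primes that belong to a twin pair (e.g. 3, 5, 7, 11, 13, ... giving 15 up to 100), not the pairs themselves:

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def twin_prime_members(limit):
    """Primes p <= limit that have a twin partner (p-2 or p+2 is prime)."""
    return [p for p in range(2, limit + 1)
            if is_prime(p) and (is_prime(p - 2) or is_prime(p + 2))]

def perfect_squares(limit):
    """Perfect squares <= limit."""
    return [n * n for n in range(1, limit + 1) if n * n <= limit]

for bound in (100, 200, 300):
    print(bound, len(twin_prime_members(bound)), len(perfect_squares(bound)))
```

This reproduces both progressions quoted in the post: 15, 29, 37 twin-prime members and 10, 14, 17 perfect squares up to 100, 200 and 300.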
Obviously the Twin Primes are always ahead of the Perfect Squares, and hence the Twin Primes are an infinite set.

Proof of the Polignac Conjecture up to a special number

How about quad primes, those separated by 4 units, then those separated by 6 units, then those separated by 8 units, etc.? This is called the Polignac Conjecture. Well, the proof is the same as for Twin Primes, except, however, that at some point between 0 and 1*10^1206 the Polignac primes will be a finite set, since the confinement of the borderline of infinity impacts the set of Polignac primes. So here is the question: at what prime separation distance do the Polignac primes turn from being an infinite set into a finite set?

For Polignac primes, we do an induction count. There are 13 quad primes from 0 to 100. There are 10 more quad primes, or 23 quad primes, from 0 to 200. There are 8 more quad primes, or 31 quad primes, from 0 to 300. So the series progression for Quad Primes from 0 to 300 is that of: 13, 23, 31, ...

Now, Polignac primes separated by 6, then by 8, then by 10, and so on, may start out slow from 0 to 100, then from 0 to 1000, then from 0 to 10,000, etc., but they all come in more numerous than their counterpart, the Perfect Squares. This happens up until some special separation distance where the distance is so large that the number of them falls below the count of the Perfect Squares. So the Polignac Conjecture is true up to this special number; they are infinite sets until that special number is reached.

There are 10 Perfect Squares from 0 to 100. There are 4 more Perfect Squares, or 14 Perfect Squares, from 0 to 200. There are 3 more Perfect Squares, or 17 Perfect Squares, from 0 to 300. So the series progression for Perfect Squares from 0 to 300 is that of: 10, 14, 17, ...

Obviously the Quad Primes are always ahead of the Perfect Squares, and hence the Quad Primes are an infinite set.
The Polignac Conjecture is true up until that special separation distance is reached, since the infinity borderline is 1*10^603 and there must be 1*10^603 of an entity between 0 and 1*10^1206 in order to be an infinite set.

Recently I re-opened the old newsgroup of the 1990s, and there one can read my recent posts without the hassle of mockers and hatemongers.

Archimedes Plutonium
[SciPy-dev] numpy - dual.py problems
Fernando Perez Fernando.Perez at colorado.edu
Sun Jan 8 10:44:25 CST 2006

Arnd Baecker wrote:
> Hi Travis,
> On Sat, 7 Jan 2006, Travis Oliphant wrote:
>>> Later on Travis wrote:
>>> """The solution is that now to get at functions that are in both numpy and
>>> scipy and you want the scipy ones first and default to the numpy ones if
>>> scipy is not installed, there is a numpy.dual module that must be
>>> loaded separately that contains all the overlapping functions."""
>>> I think this is fine, if a user does this in his own code,
>>> but I have found the following `from numpy.dual import`s within numpy
>>>   core/defmatrix.py: from numpy.dual import inv
>>>   lib/polynomial.py: from numpy.dual import eigvals, lstsq
>>>   lib/mlab.py: from numpy.dual import eig, svd
>>>   lib/function_base.py: from numpy.dual import i0
>> BTW, these are all done inside of a function call.
> Yes - I saw that.
>> I want to be able to use the special.i0 method when it's available inside
>> numpy (for the kaiser window). I want to be able to use a different
>> inverse for matrix inverses and better eig and svd for polynomial root
>> finding.
> Are these numerically better, or just faster?
> I very much understand your point of view on this
> (until Fernando's mail I would have silently agreed ;-).
> On the other hand, there is Fernando's point: that the mere installation
> of scipy will change the behaviour of numpy implicitly, without the user
> being aware of this or having asked for the change.
> Now, it could be that this works fine in 99.9% of the cases, but if it
> does not, it might be very hard to track down.
> So I am still thinking that something like a
> numpy.enable_scipy_functions()
> might be a better approach.
>> So, I don't see this concept of enhancing internal functions going
>> away. Now, I don't see the current numpy.dual approach as the
>> *be-all*. I think it can be improved on.
>> In fact, I suppose some mechanism for registering replacement functions
>> should be created instead of giving special place to SciPy. SciPy could
>> then call these functions. This could all be done inside of numpy.dual.
>> So, I think the right structure is there....
> Anyway, sorry if I am wasting your time with this discussion, I don't
> feel too strongly about this point (especially after the version check),
> maybe Fernando would like to add something -
> also I have to move on to other stuff (but whom do I tell that ;-).

Well, I do think that having code like (current SVN):

abdul[numpy]> egrep -r 'from numpy.dual' * | grep -v '\.svn/'
core/defmatrix.py:    from numpy.dual import inv
dual.py:# Usage --- from numpy.dual import fft, inv
lib/function_base.py:    from numpy.dual import i0
lib/polynomial.py:    from numpy.dual import eigvals, lstsq
lib/mlab.py:from numpy.dual import eig, svd
random/mtrand/mtrand.pyx:    from numpy.dual import svd
scipy_dual_loaded = True This will at least let you check whether this thing was called by other libraries you may be importing. I am trying to ensure that we have a mechanism for tracking this kind of side effect, because the call could be made by code you didn't write yourself. With this, at least you can do something like: if _something_weird and numpy.scipy_dual_loaded: print 'scipy effects, check for conflicts in that direction' Ultimately, I think that this should be reversible, with a dual_unload() matching routine, but that's icing on the cake. I do feel that at least the explicit dual_load() is the technically correct solution, even at a (minor) loss of convenience. More information about the Scipy-dev mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-dev/2006-January/004994.html","timestamp":"2014-04-18T13:43:35Z","content_type":null,"content_length":"7846","record_id":"<urn:uuid:11437337-ce87-43b1-b03d-aa3ecd95440a>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
converting rectangular coordinates to polar May 25th 2009, 08:03 PM #1 May 2009 converting rectangular coordinates to polar rectangular coordinates: (6, -8) ok i got r=10 which im pretty sure is right but im unsure about q. so i got q= -53.13) but i checked it on a conversion thing and it said q is like 307??? what did i do wrong Polar coordinates Hello tomatoes Your answer is OK, but if you need to give an angle between $0^o$ and $360^o$, then you'll need to subtract $53$ from $360$, which is where the $307$ comes from. There are two values of $\theta$ in the range $0\le\theta<360$ for which $\tan\theta = -\tfrac43$. They are (to the nearest degree) $180 - 53 = 127^o$ and $360 - 53 = 307^o$. Since $(6, -8)$ is in the fourth quadrant, the answer you want is $307^o$. May 25th 2009, 10:09 PM #2
{"url":"http://mathhelpforum.com/pre-calculus/90491-converting-rectangular-coordinates-polar.html","timestamp":"2014-04-16T16:53:01Z","content_type":null,"content_length":"36082","record_id":"<urn:uuid:9ae639df-8615-4b57-8506-8b030965b781>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
Ancient Chinese used bamboo sticks as calculator

Approximately 2,300 years ago the ancient Chinese wrote the world's oldest decimal multiplication table on bamboo sticks. According to experts, it was a very effective calculator that let one do calculations not only with integers but also with fractions. No country in the world had similar calculators at that time.

Five years ago, Beijing Tsinghua University received a gift of nearly two and a half thousand dirty and moldy bamboo sticks. Most likely, they were found by raiders of ancient tombs and then sold at a market in Hong Kong. According to radiocarbon analysis, this artifact was created in about 305 BC, which corresponds to China's Warring States period. Despite the military conflicts, this historical period (481, 475 or 453-221 BC) is characterized by flourishing trade and commerce, the spread of iron tools, the construction of large irrigation projects, the development of agriculture, and population growth. At that time groups of educated citizens professionally engaged in intellectual work emerged. The Warring States period is often identified with the "golden age" of Chinese philosophy. This period immediately preceded the formation of the Qin Empire.

According to the information on the Nature portal, each strip is 12.7 mm wide and up to half a meter long. From top to bottom they are covered with ancient writing. According to Chinese historians, this important artifact contains 65 ancient texts written in black ink. Because the threads connecting the pages into a single manuscript scroll had decayed, and some bamboo sticks had disappeared while others were broken, transcribing the texts turned into a real puzzle for the researchers. Scientists noticed a "canvas" consisting of 21 bamboo strips inscribed only with numbers. As suggested by Chinese mathematicians, it was the oldest known multiplication table in the world.

When the strips are placed properly, one will notice that the top line and the rightmost column contain the same 19 numbers arranged from right to left and top to bottom, respectively: 0.5, the integers from one to nine, and the numbers divisible by 10, from 10 to 90. As in a modern decimal multiplication table, the numbers at the intersection of each row and column are the results of multiplication of the relevant numbers. The table can also be used to multiply any whole or half number from 0.5 to 99.5.

According to one working hypothesis, numbers not represented in the table first have to be broken down into components. For example, 22.5 × 35.5 can be transformed as follows: (20 + 2 + 0.5) × (30 + 5 + 0.5). To solve this problem one should perform nine multiplications: 20 × 30, 20 × 5, 20 × 0.5, 2 × 30 and so on. The end result will be the sum of these partial products. This is quite an effective ancient calculator.

Science historians note the antiquity of Chinese mathematical practice, but are quite careful in describing the mathematical theory of the ancient Chinese. Among the earliest known Chinese mathematical treatises are "The Arithmetical Classic of the Gnomon" (Zhou Bi Suan Jing) and the "Mathematical Treatise in Nine Sections" (Chiu Chang Suan Shu), which date back to the 5th-2nd and 3rd-1st centuries BC, respectively. Some scientists mention possible contacts of Chinese mathematicians with Indian ones, but that happened much later, in the 5th-7th centuries.

The discovered multiplication table was likely used by Chinese officials to calculate the area of land, count crop yields or compute taxes. This calculator can also be used for division and extracting square roots. However, modern scientists are not sure whether such complex operations were performed in that era. In any event, according to Joseph Dauben, a historian of mathematics at New York University, this is the earliest artifact of a decimal multiplication table in the world.

The American scientist is confident that the ancient Chinese used complex arithmetic for theoretical and commercial purposes in the era of the Warring States. This happened before the first emperor, Ying Zheng, who unified all of China and took the title of Qin Shi Huang (first emperor of the Qin Dynasty). Later, he ordered many books burned and banned private libraries in an attempt to reverse the country's intellectual tradition.

Until now, a text dating back to the Qin Dynasty (221-206 BC) was considered the oldest Chinese multiplication table. It is a series of short sentences, for example, "six eight forty-eight." It contained only the simplest multiplications. Multiplication tables of ancient Babylon are much older. They are approximately 4,000 years old, but the set of tables used for multiplication was bulky, with separate tables for multiplication by 1-20, 30 ... 50. No calculations were possible in Babylon without a large library of tables. Furthermore, they did not have a decimal multiplication table. In Europe the first multiplication tables appeared only during the Renaissance era.

Igor Bukker
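The decomposition procedure described in the article (split each factor into table entries, multiply pairwise, then add) can be sketched in a few lines. The `decompose` routine below is my own illustration of the hypothesis, not the researchers' reconstruction:

```python
# Reconstructing how the bamboo table could multiply 22.5 x 35.5:
# split each factor into tens + units + (optional 0.5) -- all entries
# on the table -- multiply the parts pairwise, and sum the products.

def decompose(v):
    """Split v into table entries: a tens part, a units part, and 0.5."""
    parts = []
    tens = int(v) // 10 * 10
    units = int(v) % 10
    if tens:
        parts.append(tens)
    if units:
        parts.append(units)
    if v != int(v):
        parts.append(0.5)       # the table's half-unit row/column
    return parts

def table_multiply(a, b):
    """Sum of the pairwise partial products, as the article describes."""
    return sum(p * q for p in decompose(a) for q in decompose(b))

print(table_multiply(22.5, 35.5))   # 798.75, matching 22.5 * 35.5
```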
{"url":"http://english.pravda.ru/science/earth/27-01-2014/126683-ancient_chinese_calculator-0/","timestamp":"2014-04-19T17:02:23Z","content_type":null,"content_length":"48262","record_id":"<urn:uuid:f6feb079-e1af-4b37-ac25-8de1e53dc0c8>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
I need help solving this equation. -9h - 6 + 12h + 40 = 22. h equals what?

-9h - 6 + 12h + 40 = 22. Combine on the left to simplify:

12h - 9h + 40 - 6 = 22
3h + 34 = 22

We need the part with the unknown on one side and the plain numbers on the other side. Since 34 is being added on the left, we must do the opposite thing. Subtract 34 from both sides. Then we need the unknown to stand alone. Since h is being multiplied by 3, we must do the opposite thing. Divide both sides by 3.

The right answer of this question is
3h+34=22
3h=22-34
h=12/3
h=4

There is a small, but important typo: 22-34 = -12
Divide that by 3 for the final answer.

Ahhh, so the quotient of a negative and positive is negative! You go Ctelady! ...Rich B.
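The corrected solution from the thread, h = -4, checks out when substituted back into the original equation:

```python
# Verifying the corrected answer from the thread: 3h + 34 = 22.
h = (22 - 34) / 3            # -12 / 3 = -4
print(h)

# Plug h back into the original equation -9h - 6 + 12h + 40 = 22:
print(-9*h - 6 + 12*h + 40)  # left side evaluates to 22
```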
{"url":"http://mathhelpforum.com/math-topics/1457-i-need-help-solving-equation.html","timestamp":"2014-04-19T07:10:10Z","content_type":null,"content_length":"39956","record_id":"<urn:uuid:39ee7361-ea2f-4efb-b4ab-b5e7e7086859>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
Predicting maximum values from minimum values

October 4th 2011, 07:42 AM   #1

Firstly, apologies if I've posted my question in the wrong section!

I play an online soccer game in which each player has a maximum and minimum value. In some circumstances only the minimum value of the player is provided but it is possible to predict the maximum values too. Therefore I'm attempting to work out the formula by which the maximum value of the player can be established by reference to the minimum value. This is well beyond my mathematical expertise - could anybody help?

Here are some of their details:

Player A: 19 years old. Minimum value: £14,405,525  Maximum value: £26,676,897
Player B: 24 years old. Minimum value: £11,736,726  Maximum value: £21,734,678
Player C: 25 years old. Minimum value: £15,845,786  Maximum value: £24,814,589
Player D: 20 years old. Minimum value: £10,295,713  Maximum value: £19,066,136

Is it possible to work out the formula that links these minimum and maximum values? Any help greatly appreciated!
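The thread never reaches an answer, but one way to probe for a simple linear link, max ≈ slope·min + intercept, is an ordinary least-squares fit over the four quoted players. This is purely exploratory: nothing in the thread says the game uses a linear rule, and the real formula may also involve age:

```python
# Exploratory least-squares fit of max value against min value for the
# four players quoted in the post.  This only tests for a linear pattern;
# the game's actual formula is unknown.

pairs = [
    (14_405_525, 26_676_897),   # Player A
    (11_736_726, 21_734_678),   # Player B
    (15_845_786, 24_814_589),   # Player C
    (10_295_713, 19_066_136),   # Player D
]

n = len(pairs)
mx = sum(x for x, _ in pairs) / n
my = sum(y for _, y in pairs) / n
slope = (sum((x - mx) * (y - my) for x, y in pairs)
         / sum((x - mx) ** 2 for x, _ in pairs))
intercept = my - slope * mx

for x, y in pairs:
    pred = slope * x + intercept
    print(f"min={x:>12,}  max={y:>12,}  predicted={pred:>14,.0f}")
```

Comparing the predictions to the actual maxima shows how well (or poorly) a straight line explains the data before hunting for a more elaborate rule.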
{"url":"http://mathhelpforum.com/statistics/189549-predicting-maximum-values-minimum-values.html","timestamp":"2014-04-18T17:17:39Z","content_type":null,"content_length":"29689","record_id":"<urn:uuid:90dcd248-8108-459b-ac6e-8af26ea9edcd>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
How Do Cube Roots Work?

Date: 11/04/2003 at 13:01:17
From: Gina
Subject: i don't understand cubed square roots?

I do not understand how to do a square root problem when it is asking you to do something with a cubed root. For example:

   3____
  \/ 216  = ?

I don't understand what I am supposed to do with the 3.

Date: 11/04/2003 at 17:24:43
From: Doctor Douglas
Subject: Re: i don't understand cubed square roots?

Hi Gina. Thanks for writing to the Math Forum.

You are correct, you are being asked to find the cube root of 216. This is similar to the case of a square root, in which you have to find the number X such that X * X is equal to the given number. I assume that you're familiar with this type of problem (e.g. sqrt(25) = ?, Answer: 5 or -5).

For the cube root (of 216), you are looking for a number such that

   X * X * X = 216

This is what the notation

   3____
  \/ 216  = X

means. To find X, you can use a guess-and-check method with a calculator. If you are interested in the method for computing a cube root by hand without a calculator, you should check out the following web page from our archives:

   Cube Root by Hand

I hope this helps explain what is going on!

- Doctor Douglas, The Math Forum

Date: 11/04/2003 at 17:33:20
From: Doctor Riz
Subject: Re: i don't understand cubed square roots?

Hi Gina -

Thanks for writing. When you take a square root of something, you are trying to find the number that when you SQUARE it you get the number inside the radical sign. For example, the square root of 9 is 3 because when you square 3 you get 9 ( 3^2 = 3*3 = 9 ). Similarly, the square root of 25 is 5 because 5^2 = 5*5 = 25.

When you take a cube root, you are trying to find the number that when you CUBE it you get the number inside the radical sign. For example, the cube root of 8 is 2 because 2^3 = 2*2*2 = 8. The cube root of 1000 is 10 because 10^3 = 10*10*10 = 1000. The cube root of 216 is 6 because 6^3 = 6*6*6 = 216.

The little 3 you see tucked into the radical sign indicates that it's a cube root. We could write a little 2 in the same place for a square root, but we agree not to bother, so if you see a radical sign without a little number you know it's a square root. If you saw one with a little 4 in there, that would mean a 4th root. That tells you to look for the number that when you raise it to the 4th power you get what's inside the radical. For example, the 4th root of 81 is 3 because 3^4 = 3*3*3*3 = 81.

The key thing to realize is that the radical sign itself just tells you that you are dealing with a root of some sort. It's the little index number that tells you what KIND of root you are finding. No index number means it's a square root, 3 means cube root, 4 means fourth root and so on. Most calculators have a square root key, and many of today's calculators also have a key where you can specify what kind of root you want to take.

One final comment--you may know that you can't take a square root of a negative number, at least not when using real numbers. For example, there is no real square root of -9 because (3)^2 = 3*3 = 9 and (-3)^2 = (-3)*(-3) = 9. Whether you square 3 or -3 you get +9 both times. There is no number you can square and get -9. But with cube and other odd roots, you can take them of negative numbers. We saw that the cube root of 8 was 2 since 2^3 = 2*2*2 = 8. What if it's the cube root of -8? Note that (-2)^3 = (-2)*(-2)*(-2) = -8. When you multiply an odd number of negatives, the answer is negative. That's why you can take odd roots of negative numbers.

Hope this helps--write back if you are still confused!

- Doctor Riz, The Math Forum
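The guess-and-check advice above can be replaced by a one-line function. Note that plain `x ** (1/3)` misbehaves for negative inputs in floating point, so a real cube root that also handles negatives, as the discussion of odd roots suggests, needs a small sign fix:

```python
# A real cube root that also works for negative inputs, matching the
# odd-root discussion above.
import math

def cbrt(x):
    # Take the root of |x|, then restore the original sign.
    return math.copysign(abs(x) ** (1.0 / 3.0), x)

print(round(cbrt(216), 6))   # 6.0, since 6*6*6 = 216
print(round(cbrt(-8), 6))    # -2.0, since (-2)*(-2)*(-2) = -8
print(round(81 ** 0.25, 6))  # 3.0, the 4th root of 81
```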
{"url":"http://mathforum.org/library/drmath/view/64385.html","timestamp":"2014-04-20T01:51:09Z","content_type":null,"content_length":"8888","record_id":"<urn:uuid:1429286e-45e1-4ffa-b703-696f02e0b1c2>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
differential equation

June 6th 2008, 10:47 PM   #1

When a cake is removed from an oven, the temperature of the cake is 210F. The cake is left to cool at room temperature (70F), and after 30 minutes, the temperature of the cake is 140F. According to Newton's law of cooling, the rate of change of temperature of a body is proportional to the temperature difference between the body and the environment. Set up and solve a differential equation to determine when the temperature of the cake will be 100F.

How would you set up this problem?

June 6th 2008, 10:56 PM   #2

The DE is $\frac{dT}{dt} = k (T - 70)$ subject to the boundary conditions T(0) = 210 and T(30) = 140. The boundary conditions are used to find the value of the arbitrary constant of integration and the proportionality constant k.

Solve the DE. Use the solution to find the value of t when T = 100.

I suggest you carefully study examples from your class notes and/or textbook, as well as using the search string Newton Law Cooling in a famous search engine ....

June 6th 2008, 11:02 PM   #3

Here's an example. See the first part of post #2. More examples: posts #2 and 8.
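Carrying the boundary conditions through by hand: the DE in post #2 gives T(t) = 70 + C e^(kt), with C = 140 from T(0) = 210 and e^(30k) = 1/2 from T(30) = 140, so the cake reaches 100F at t = 30·ln(14/3)/ln 2 ≈ 66.7 minutes. The same arithmetic in code:

```python
# Solving the cooling problem from the DE in post #2:
# T(t) = 70 + C * exp(k*t), with T(0) = 210 and T(30) = 140.
import math

C = 210 - 70                        # T(0) = 210  ->  C = 140
k = math.log((140 - 70) / C) / 30   # T(30) = 140 ->  e^(30k) = 1/2

def T(t):
    return 70 + C * math.exp(k * t)

# Time at which T = 100: solve 30 = 140 * e^(k*t) for t.
t_100 = math.log((100 - 70) / C) / k
print(round(t_100, 2))              # about 66.67 minutes
```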
{"url":"http://mathhelpforum.com/differential-equations/40864-defferiantial-equation.html","timestamp":"2014-04-19T05:53:52Z","content_type":null,"content_length":"39843","record_id":"<urn:uuid:e7d3f663-f45e-4169-af2c-6ee1549626b4>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
Stata Software

Hosmer-Lemeshow test

This package performs the Hosmer-Lemeshow test to assess the fit of, typically, a logistic regression model. The program allows you to change the groupings used and plot the observed and predicted values within each group. In addition the program can be used to assess out-of-sample predictions.

Obtain the Hosmer-Lemeshow test package (zipped) here or type the following within Stata:

    net from http://www.ucl.ac.uk/statistics/research/biostatistics

Penalised logistic regression

The package fits logistic regression models with a likelihood penalised by lambda*B'PB, where B is the beta vector of length c and P a c*c penalisation matrix. By default P is a diagonal matrix with elements Var(xj), so that the likelihood is penalised by lambda times the sum of standardised betas squared. An alternative penalisation term is lambda*sum_j(|bj|), which is the lasso of Tibshirani. This has the effect of shrinking some coefficients exactly to zero, providing a form of variable selection.

Obtain the penalised logistic regression package (zipped) here or type the following within Stata:

    net from http://www.ucl.ac.uk/statistics/research/biostatistics

stepdown

The stepdown routine is used after an estimation command to approximate the linear predictor to a given level of accuracy (assessed using R2).

Obtain the stepdown package (zipped) here or type the following within Stata:

    net from http://www.ucl.ac.uk/statistics/research/biostatistics

Click here to view the accompanying insert (postscript).

Generalized Additive Models

An update of the original STB program (STB-43 sg79) can be found on Patrick Royston's webpage.

Page last modified on 12 nov 10 16:54
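The ridge-type objective described for the penalised logistic regression package (lambda·B'PB with P = diag(Var(x_j))) can be written out directly. The Python function below is only my own illustration of that objective; it is not the UCL Stata package, and the toy data are made up:

```python
# Illustrative penalised negative log-likelihood for logistic regression,
# mirroring the lambda * B'PB penalty with P = diag(Var(x_j)) described
# above.  Not the Stata package -- just a sketch of the objective.
import math

def variance(col):
    m = sum(col) / len(col)
    return sum((v - m) ** 2 for v in col) / len(col)

def penalised_nll(beta, X, y, lam):
    nll = 0.0
    for xi, yi in zip(X, y):
        eta = sum(b * x for b, x in zip(beta, xi))
        nll += math.log(1 + math.exp(eta)) - yi * eta  # -log-likelihood term
    p_diag = [variance([row[j] for row in X]) for j in range(len(beta))]
    penalty = lam * sum(pj * bj * bj for pj, bj in zip(p_diag, beta))
    return nll + penalty

# Toy data (hypothetical): intercept column plus one covariate.
X = [[1.0, 2.0], [1.0, -1.0], [1.0, 0.5]]
y = [1, 0, 1]
print(penalised_nll([0.1, 0.2], X, y, lam=0.0))
print(penalised_nll([0.1, 0.2], X, y, lam=1.0))  # larger: penalty added
```

Swapping the quadratic penalty for `lam * sum(abs(b) for b in beta)` would give the lasso variant mentioned above.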
{"url":"http://www.ucl.ac.uk/statistics/research/biostatistics/software/","timestamp":"2014-04-18T18:33:54Z","content_type":null,"content_length":"19478","record_id":"<urn:uuid:f5405b6c-22b7-49a0-8d21-fe7eee5387d4>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
Olema Math Tutor Find an Olema Math Tutor ...I worked there from 2008 to 2010, when I graduated (BA in Comparative Religion, with honors and a 3.77 GPA). As a tutor I worked with a variety of UCSB undergraduates each day with writing assignments in a wide range of subjects, styles, and ability levels during half hour to hour private session... 11 Subjects: including prealgebra, algebra 1, reading, English ...Therefore, I know what elementary students learn in Math and how to help them learn elementary in an easy and and fun way. Vietnamese is my mother language, and I graduated from high school in Vietnam. Right now, I'm also teaching Vietnamese weekly at a church. 18 Subjects: including calculus, precalculus, trigonometry, statistics ...I am a native Spanish speaker, so I can tutor Spanish and English as a Second Language. Additionally, I can tutor any of the technical subjects that I mentioned above in Spanish as well as English. My teaching methodology is to aid my students in developing an intuition that will allow them to ... 15 Subjects: including algebra 1, algebra 2, calculus, Microsoft Excel ...As an English and Writing Instructor, I have worked with many students and adult learners to improve their writing and computer skills. I have tutored students in Microsoft Word and showed them how to create cover letters and resumes. I have edited and proofread many cover letters and resumes for successful colleagues and career professionals. 34 Subjects: including SAT math, chemistry, algebra 1, algebra 2 ...I have worked for both Kaplan and McGraw-Hill designing testing materials for tests such as the SAT/ACT and GRE. I know the ins and outs of how tests are constructed and how to maximize your score, and am happy to share my knowledge with you. I thank you for your consideration. 14 Subjects: including trigonometry, ASVAB, biostatistics, algebra 1
{"url":"http://www.purplemath.com/Olema_Math_tutors.php","timestamp":"2014-04-19T20:06:14Z","content_type":null,"content_length":"23500","record_id":"<urn:uuid:b3d12794-ffd3-4ad9-89aa-b70fbd63e6b8>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
@HaperFink22:

Hans bought some bottles of juice at the convenience store. Each bottle holds 16 ounces. In this situation, what would the total number of ounces be considered?

function
independent variable (I think this one)
domain
range

Thanks! :)
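The setup pairs an input (how many bottles Hans buys) with an output (the total ounces). Writing the relationship as code makes the direction of the dependence concrete, without giving away which multiple-choice label the textbook wants:

```python
# The total number of ounces depends on how many bottles Hans buys:
# bottles is the input, and total ounces is the value the rule
# produces for that input.
def total_ounces(bottles):
    return 16 * bottles

for bottles in range(1, 5):
    print(bottles, "bottles ->", total_ounces(bottles), "ounces")
```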
{"url":"http://openstudy.com/updates/4fb50c7be4b05565342aca80","timestamp":"2014-04-21T08:01:43Z","content_type":null,"content_length":"37294","record_id":"<urn:uuid:e293b88f-ca42-4f5f-9918-c9145f6c6a27>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Jersey, GA Math Tutor Find a Jersey, GA Math Tutor ...I have also taught special education. I am currently certified in these areas. I have taught reading, writing, math, social studies, and science. 13 Subjects: including algebra 1, reading, writing, English ...Teaching emotionally and behaviorally disordered students, I am trained to teach autistic students and have taught many during my 32 years in the classroom. All autistic students I taught made academic progress. I have been certified by the state of Georgia (K-12) and have taught many LD/dyslexic students. 19 Subjects: including prealgebra, algebra 1, reading, English I have been teaching math at the middle and high school level for the past 10 years. I have also been able to help students who struggle in math pull their grades up to an A or B. I strive to ensure understanding in all of my students. 7 Subjects: including calculus, prealgebra, precalculus, trigonometry I love to see people come to understand math and science! After completing my bachelor's degrees (with honors) in physics and atmospheric science, I took additional graduate courses in atmospheric science. I have now been teaching physics, calculus, trigonometry, algebra and geometry full time for four years at a missionary school in Thailand. 20 Subjects: including linear algebra, SAT math, trigonometry, precalculus I have been tutoring all my life. My strengths lie in my ability to sit down and find out where the problem is, formulate a plan and work from there. As a computer professional, my methods not only include tutoring by word of mouth, but finding creative solutions such as physical or computer games. 
21 Subjects: including calculus, linear algebra, discrete math, Java Related Jersey, GA Tutors Jersey, GA Accounting Tutors Jersey, GA ACT Tutors Jersey, GA Algebra Tutors Jersey, GA Algebra 2 Tutors Jersey, GA Calculus Tutors Jersey, GA Geometry Tutors Jersey, GA Math Tutors Jersey, GA Prealgebra Tutors Jersey, GA Precalculus Tutors Jersey, GA SAT Tutors Jersey, GA SAT Math Tutors Jersey, GA Science Tutors Jersey, GA Statistics Tutors Jersey, GA Trigonometry Tutors Nearby Cities With Math Tutor Bishop, GA Math Tutors Bogart Math Tutors Good Hope, GA Math Tutors High Shoals, GA Math Tutors Loganville, GA Math Tutors Madison, GA Math Tutors Mansfield, GA Math Tutors N High Shoals, GA Math Tutors Newborn Math Tutors North High Shoals, GA Math Tutors Oxford, GA Math Tutors Porterdale Math Tutors Redan Math Tutors Rutledge, GA Math Tutors Statham Math Tutors
{"url":"http://www.purplemath.com/jersey_ga_math_tutors.php","timestamp":"2014-04-16T07:33:46Z","content_type":null,"content_length":"23603","record_id":"<urn:uuid:65686cf4-5deb-490c-b289-42e00eac9a05>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
Foy, CA Algebra Tutor Find a Foy, CA Algebra Tutor ...The student's mother was so happy with his improvement that she personally sent a letter to thank me after seeing her son's English grades improve so dramatically. Furthermore, I have often the added advantage of being able to communicate some of the subtler nuances of English in the speaker's n... 44 Subjects: including algebra 1, algebra 2, chemistry, reading ...My Education : I earned a B.S in Psychology at University of Washington (focus on perception and memory), pursuing my MBA at USC (Youngest student in my MBA class year) and M.S at USC My Tests: I received a 2400 on the SAT and scored in the 90th percentile on the GMAT My Tutoring experience: I... 25 Subjects: including algebra 1, English, reading, writing ...I don't tell students what to do; I teach them how to do it. I've been professionally tutoring since I was 17, when the Princeton Review hired me to teach SAT classes in Michigan. I later graduated magna cum laude from Yale with joint bachelor’s and master’s degrees in history and then completed an Master's of Philosophy in literature at King's College, Cambridge. 42 Subjects: including algebra 2, algebra 1, reading, English ...When approaching problem-solving chemistry, you need to put pencil to paper and try different approaches. Often, it's not use to attempt to figure out the best approach in advance; you're better off trying several ways immediately, and seeing what you get. When approaching difficult concepts, you need to sketch something out immediately. 17 Subjects: including algebra 1, algebra 2, chemistry, Spanish Hello! I am a Mathematics graduate from University of Riverside. I plan on becoming a teacher. 
6 Subjects: including algebra 1, geometry, SAT math, elementary (k-6th) Related Foy, CA Tutors Foy, CA Accounting Tutors Foy, CA ACT Tutors Foy, CA Algebra Tutors Foy, CA Algebra 2 Tutors Foy, CA Calculus Tutors Foy, CA Geometry Tutors Foy, CA Math Tutors Foy, CA Prealgebra Tutors Foy, CA Precalculus Tutors Foy, CA SAT Tutors Foy, CA SAT Math Tutors Foy, CA Science Tutors Foy, CA Statistics Tutors Foy, CA Trigonometry Tutors Nearby Cities With algebra Tutor Cimarron, CA algebra Tutors Dockweiler, CA algebra Tutors Dowtown Carrier Annex, CA algebra Tutors Green, CA algebra Tutors Griffith, CA algebra Tutors La Tijera, CA algebra Tutors Oakwood, CA algebra Tutors Pico Heights, CA algebra Tutors Rimpau, CA algebra Tutors Sanford, CA algebra Tutors Santa Western, CA algebra Tutors Vermont, CA algebra Tutors Westvern, CA algebra Tutors Wilcox, CA algebra Tutors Wilshire Park, LA algebra Tutors
{"url":"http://www.purplemath.com/foy_ca_algebra_tutors.php","timestamp":"2014-04-20T21:30:20Z","content_type":null,"content_length":"23826","record_id":"<urn:uuid:4f9a7ca6-78ba-48c9-85f3-ed60765250fa>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
From Wikipedia, the free encyclopedia The Gini coefficient (also known as the Gini index or Gini ratio) (/dʒini/) is a measure of statistical dispersion intended to represent the income distribution of a nation's residents. It was developed by the Italian statistician and sociologist Corrado Gini and published in his 1912 paper "Variability and Mutability" (Italian: Variabilità e mutabilità).^1^2 The Gini coefficient measures the inequality among values of a frequency distribution (for example levels of income). A Gini coefficient of zero expresses perfect equality, where all values are the same (for example, where everyone has the same income). A Gini coefficient of one (or 100%) expresses maximal inequality among values (for example where only one person has all the income).^3^4 However, a value greater than one may occur if some persons have negative income or wealth. For larger groups, values close to or above 1 are very unlikely in practice. The Gini coefficient is commonly used as a measure of inequality of income or wealth.^5 For OECD countries, in the late 2000s, considering the effect of taxes and transfer payments, the income Gini coefficient ranged between 0.24 and 0.49, with Slovenia the lowest and Chile the highest.^6 The countries in Africa had the highest pre-tax Gini coefficients in 2008–2009, with South Africa the world's highest at 0.7.^7^8 The global income inequality Gini coefficient in 2005, for all human beings taken together, has been estimated to be between 0.61 and 0.68 by various sources.^9^10 There are some issues in interpreting a Gini coefficient. The same value may result from many different distribution curves. The demographic structure should be taken into account. Countries with an aging population, or with a baby boom, experience an increasing pre-tax Gini coefficient even if real income distribution for working adults remains constant.
Scholars have devised over a dozen variants of the Gini coefficient.^11^12^13 The Gini coefficient is usually defined mathematically based on the Lorenz curve, which plots the proportion of the total income of the population (y axis) that is cumulatively earned by the bottom x% of the population (see diagram). The line at 45 degrees thus represents perfect equality of incomes. The Gini coefficient can then be thought of as the ratio of the area that lies between the line of equality and the Lorenz curve (marked A in the diagram) over the total area under the line of equality (marked A and B in the diagram); i.e., G = A / (A + B). If all people have non-negative income (or wealth, as the case may be), the Gini coefficient can theoretically range from 0 (complete equality) to 1 (complete inequality); it is sometimes expressed as a percentage ranging between 0 and 100. In practice, both extreme values are not quite reached. If negative values are possible (such as the negative wealth of people with debts), then the Gini coefficient could theoretically be more than 1. Normally the mean (or total) is assumed positive, which rules out a Gini coefficient less than zero. A low Gini coefficient indicates a more equal distribution, with 0 corresponding to complete equality, while higher Gini coefficients indicate more unequal distribution, with 1 corresponding to complete inequality. When used as a measure of income inequality, the most unequal society (assuming no negative incomes) will be one in which a single person receives 100% of the total income and the remaining people receive none (G = 1−1/N); and the most equal society will be one in which every person receives the same income (G = 0). An alternative approach would be to consider the Gini coefficient as half of the relative mean difference, which is a mathematical equivalence. 
The mean difference is the average absolute difference between two items selected randomly from a population, and the relative mean difference is the mean difference divided by the average, to normalize for scale.

The Gini index is defined as a ratio of the areas on the Lorenz curve diagram. If the area between the line of perfect equality and the Lorenz curve is A, and the area under the Lorenz curve is B, then the Gini index is A / (A + B). Since A + B = 0.5, the Gini index is G = 2A = 1 − 2B. If the Lorenz curve is represented by the function Y = L(X), the value of B can be found with integration:

$B = \int_0^1 L(X) dX.$

In some cases, this equation can be applied to calculate the Gini coefficient without direct reference to the Lorenz curve. For example (taking y to mean the income or wealth of a person or household):

• For a population uniform on the values y[i], i = 1 to n, indexed in non-decreasing order (y[i] ≤ y[i+1]):

$G = \frac{1}{n}\left ( n+1 - 2 \left ( \frac{\sum_{i=1}^n \; (n+1-i)y_i}{\sum_{i=1}^n y_i} \right ) \right )$

This may be simplified to:

$G = \frac{2 \sum_{i=1}^n \; i y_i}{n \sum_{i=1}^n y_i} -\frac{n+1}{n}$

This formula applies to any real population, since each person can be assigned his or her own y[i].^14

• For a discrete probability function f(y), where y[i], i = 1 to n, are the points with nonzero probabilities, indexed in increasing order (y[i] < y[i+1]):

$G = 1 - \frac{\sum_{i=1}^n \; f(y_i)(S_{i-1}+S_i)}{S_n}$

where

$S_i = \sum_{j=1}^i \; f(y_j)\,y_j$ and $S_0 = 0.$

• For a cumulative distribution function F(y) that has a mean μ and is zero for all negative values of y:

$G = 1 - \frac{1}{\mu}\int_0^\infty (1-F(y))^2 dy = \frac{1}{\mu}\int_0^\infty F(y)(1-F(y)) dy$

(This formula can be applied when there are negative values if the integration is taken from minus infinity to plus infinity.)
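The simplified discrete formula lends itself to a direct implementation. The sketch below computes G for a list of incomes using G = 2·Σ i·y_i / (n·Σ y_i) − (n+1)/n with y sorted in non-decreasing order; the function name `gini` and the zero-total convention are my own choices, not from the source.

```python
def gini(incomes):
    """Population Gini coefficient of a list of non-negative incomes.

    Implements G = 2 * sum(i * y_i) / (n * sum(y_i)) - (n + 1) / n,
    where y is sorted in non-decreasing order and i runs from 1 to n.
    """
    y = sorted(incomes)
    n = len(y)
    total = sum(y)
    if total == 0:
        return 0.0  # all incomes zero: treat as perfect equality
    weighted = sum(rank * value for rank, value in enumerate(y, start=1))
    return 2.0 * weighted / (n * total) - (n + 1) / n

# Five people, one of whom has all the income: G = 1 - 1/5 = 0.8
print(gini([0, 0, 0, 0, 100]))
# Everyone has the same income: G = 0
print(gini([50, 50, 50, 50]))
```

For two people where one has all the income, `gini([0, 1])` returns 0.5, matching the small-population note in the references.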
• Since the Gini coefficient is half the relative mean difference, it can also be calculated using formulas for the relative mean difference. For a random sample S consisting of values y[i], i = 1 to n, indexed in non-decreasing order (y[i] ≤ y[i+1]), the statistic:

$G(S) = \frac{1}{n-1}\left (n+1 - 2 \left ( \frac{\sum_{i=1}^n \; (n+1-i)y_i}{\sum_{i=1}^n y_i}\right ) \right )$

is a consistent estimator of the population Gini coefficient, but is not, in general, unbiased. Like G, G(S) has a simpler form:

$G(S) = 1 - \frac{2}{n-1}\left ( n - \frac{\sum_{i=1}^n \; iy_i}{\sum_{i=1}^n y_i}\right ).$

There does not exist a sample statistic that is in general an unbiased estimator of the population Gini coefficient, as there is for the relative mean difference.

For some functional forms, the Gini index can be calculated explicitly. For example, if y follows a lognormal distribution with the standard deviation of logs equal to $\sigma$, then $G = \mathrm{erf}(\sigma/2)$, where $\mathrm{erf}$ is the error function.

Sometimes the entire Lorenz curve is not known, and only values at certain intervals are given. In that case, the Gini coefficient can be approximated by using various techniques for interpolating the missing values of the Lorenz curve. If (X[k], Y[k]) are the known points on the Lorenz curve, with the X[k] indexed in increasing order (X[k−1] < X[k]), so that:

• X[k] is the cumulated proportion of the population variable, for k = 0,...,n, with X[0] = 0, X[n] = 1.
• Y[k] is the cumulated proportion of the income variable, for k = 0,...,n, with Y[0] = 0, Y[n] = 1.
• The Y[k] are indexed in non-decreasing order (Y[k] ≥ Y[k−1]).

If the Lorenz curve is approximated on each interval as a line between consecutive points, then the area B can be approximated with trapezoids and:

$G_1 = 1 - \sum_{k=1}^{n} (X_{k} - X_{k-1}) (Y_{k} + Y_{k-1})$

is the resulting approximation for G.
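The trapezoidal approximation G₁ is equally short to implement. A minimal sketch, assuming the points are supplied in order and include (X₀, Y₀) = (0, 0) and (X_n, Y_n) = (1, 1); the function name is illustrative.

```python
def gini_from_lorenz(points):
    """Trapezoidal Gini approximation from known Lorenz-curve points.

    points: ordered list of (X_k, Y_k) pairs, starting at (0, 0) and
    ending at (1, 1). Implements
    G_1 = 1 - sum((X_k - X_{k-1}) * (Y_k + Y_{k-1})).
    """
    g = 1.0
    for (x_prev, y_prev), (x_cur, y_cur) in zip(points, points[1:]):
        g -= (x_cur - x_prev) * (y_cur + y_prev)
    return g

# Perfect equality: the Lorenz curve coincides with the 45-degree line
print(gini_from_lorenz([(0, 0), (0.5, 0.5), (1, 1)]))   # 0.0
# Bottom half earns nothing, top half shares all income equally
print(gini_from_lorenz([(0, 0), (0.5, 0.0), (1, 1)]))   # 0.5
```

Because chords of a convex curve lie above it, linear interpolation overestimates B, so G₁ is a lower bound on G for a convex Lorenz curve.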
More accurate results can be obtained using other methods to approximate the area B, such as approximating the Lorenz curve with a quadratic function across pairs of intervals, or building an appropriately smooth approximation to the underlying distribution function that matches the known data. If the population mean and boundary values for each interval are also known, these can often be used to improve the accuracy of the approximation.

The Gini coefficient calculated from a sample is a statistic, and its standard error, or confidence intervals for the population Gini coefficient, should be reported. These can be calculated using bootstrap techniques, but those proposed have been mathematically complicated and computationally onerous even in an era of fast computers. Ogwang (2000) made the process more efficient by setting up a "trick regression model" in which the incomes in the sample are ranked, with the lowest income being allocated rank 1. The model then expresses the rank (dependent variable) as the sum of a constant A and a normal error term whose variance is inversely proportional to y[k]:

$k = A + N(0, s^{2}/y_k)$

Ogwang showed that G can be expressed as a function of the weighted least squares estimate of the constant A, and that this can be used to speed up the calculation of the jackknife estimate for the standard error. Giles (2004) argued that the standard error of the estimate of A can be used to derive that of the estimate of G directly, without using a jackknife at all. This method only requires the use of ordinary least squares regression after ordering the sample data. The results compare favorably with the estimates from the jackknife, with agreement improving with increasing sample size.
The paper describing this method can be found here: http://web.uvic.ca/econ/ewp0202.pdf

However, it has since been argued that this is dependent on the model's assumptions about the error distributions (Ogwang 2004) and the independence of error terms (Reza & Gastwirth 2006), and that these assumptions are often not valid for real data sets. It may therefore be better to stick with jackknife methods such as those proposed by Yitzhaki (1991) and Karagiannis and Kovacevic (2000). The debate continues.^[citation needed]

Guillermina Jasso (1979)^15 and Angus Deaton (1997, 139) independently proposed the following formula for the Gini coefficient:

$G = \frac{N+1}{N-1}-\frac{2}{N(N-1)\mu}\left(\sum_{i=1}^N \; P_iX_i\right)$

where $\mu$ is the mean income of the population, P[i] is the income rank P of person i, with income X, such that the richest person receives a rank of 1 and the poorest a rank of N. This effectively gives higher weight to poorer people in the income distribution, which allows the Gini to meet the Transfer Principle. Note that the Jasso-Deaton formula rescales the coefficient so that its value is 1 if all the $X_i$ are zero except one.

Gini coefficients of representative income distributions

Income distribution function          Gini coefficient (rounded)
y = 1 for all x                       0.0
y = x^(1/3)                           0.143
y = x^(1/2)                           0.200
y = x + b (b = 10% of max income)     0.273
y = x + b (b = 5% of max income)      0.302
y = x                                 0.333
y = x^2                               0.500
y = x^3                               0.600
y = x^p, p > 0                        p/(p + 2)

Given the normalization of both the cumulative population and the cumulative share of income used to calculate the Gini coefficient, the measure is not overly sensitive to the specifics of the income distribution, but rather only to how incomes vary relative to the other members of a population. The exception to this is in the redistribution of wealth resulting in a minimum income for all people.
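The Jasso-Deaton formula can be checked numerically against the sample estimator G(S) given earlier: with incomes sorted ascending, the rank of the i-th poorest person is N + 1 − i, and substituting this into the formula reproduces G(S) exactly. A sketch of that check; the function names are my own.

```python
def gini_jasso_deaton(incomes):
    """Jasso-Deaton rank formula: richest person has rank 1, poorest rank N."""
    x = sorted(incomes, reverse=True)          # ranks 1..N, richest first
    n = len(x)
    mu = sum(x) / n
    rank_sum = sum(rank * xi for rank, xi in enumerate(x, start=1))
    return (n + 1) / (n - 1) - 2.0 * rank_sum / (n * (n - 1) * mu)

def gini_sample(incomes):
    """Sample estimator G(S) from the calculation section."""
    y = sorted(incomes)
    n = len(y)
    weighted = sum(rank * yi for rank, yi in enumerate(y, start=1))
    return 1 - (2.0 / (n - 1)) * (n - weighted / sum(y))

data = [9_000, 40_000, 48_000, 48_000, 55_000]
print(gini_jasso_deaton(data), gini_sample(data))  # both 0.25
```

Note that G(S) carries the n/(n−1) small-sample rescaling, so both values here exceed the population formula's result for the same five incomes.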
When the population is sorted, if their income distribution were to approximate a well-known function, then some representative values could be calculated. Some representative values of the Gini coefficient for income distributions approximated by simple functions are tabulated above. While the income distribution of any particular country need not follow such simple functions, these functions give a qualitative understanding of the income distribution in a nation given the Gini coefficient. The effects of minimum income policy due to redistribution can be seen in the linear relationships above.

Generalized inequality index

The Gini coefficient and other standard inequality indices reduce to a common form. Perfect equality—the absence of inequality—exists when and only when the inequality ratio, $r_j = x_j / \overline {x}$, equals 1 for all j units in some population (for example, there is perfect income equality when everyone's income $x_j$ equals the mean income $\overline{x}$, so that $r_j=1$ for everyone). Measures of inequality, then, are measures of the average deviations of the $r_j$ from 1; the greater the average deviation, the greater the inequality. Based on these observations the inequality indices have this common form:^16

$\mathrm{Inequality} = \sum_j \, p_j \, f(r_j)\, ,$

where p[j] weights the units by their population share, and f(r[j]) is a function of the deviation of each unit's r[j] from 1, the point of equality. The insight of this generalised inequality index is that inequality indices differ because they employ different functions of the distance of the inequality ratios (the r[j]) from 1.

Gini coefficient of income distributions

Gini coefficients of income are calculated on market income as well as disposable income basis.
The Gini coefficient on market income – sometimes referred to as the pre-tax Gini index – is calculated on income before taxes and transfers, and it measures inequality in income without considering the effect of taxes and social spending already in place in a country. The Gini coefficient on disposable income – sometimes referred to as the after-tax Gini index – is calculated on income after taxes and transfers, and it measures inequality in income after considering the effect of taxes and social spending already in place in a country.^6^17^18

The difference in Gini indices between OECD countries, on an after-taxes and transfers basis, is significantly narrower.^18 For OECD countries over the 2008–2009 period, the Gini coefficient on a pre-taxes and transfers basis for the total population ranged between 0.34 and 0.53, with South Korea the lowest and Italy the highest. The Gini coefficient on an after-taxes and transfers basis for the total population ranged between 0.25 and 0.48, with Denmark the lowest and Mexico the highest. For the United States, the country with the largest population among OECD countries, the pre-tax Gini index was 0.49, and the after-tax Gini index was 0.38, in 2008–2009. The OECD averages for the total population in OECD countries were 0.46 for the pre-tax income Gini index and 0.31 for the after-tax income Gini index.^6^19 Taxes and social spending that were in place in the 2008–2009 period in OECD countries significantly lowered effective income inequality, and in general, "European countries — especially Nordic and Continental welfare states — achieve lower levels of income inequality than other countries."^20

Using the Gini can help quantify differences in welfare and compensation policies and philosophies. However, it should be borne in mind that the Gini coefficient can be misleading when used to make political comparisons between large and small countries or those with different immigration policies (see the limitations of Gini coefficient section).
The Gini index for the entire world has been estimated by various parties to be between 0.61 and 0.68.^9^10^21 The graph shows the values expressed as a percentage, in their historical development for a number of countries.

US income Gini indices over time

(Figure: Gini indexes, before and after taxes, between 1980 and 2010^6)

Taxes and social spending in most countries have a significant moderating effect on income inequality Gini indices. For the late 2000s, the United States had the 4th highest measure of income inequality out of the 34 OECD countries measured, after taxes and transfers had been taken into account.^22

The table below presents the Gini indices for household income, without including the effect of taxes and transfers, for the United States at various times, according to the US Census Bureau.^23^24^25^26 The Gini values are a national composite, with significant variations in Gini between the states. The states of Utah, Alaska and Wyoming have a pre-tax income inequality Gini coefficient that is 10% lower than the U.S. average, while Washington D.C. and Puerto Rico are 10% higher. After including the effects of federal and state taxes, the U.S. Federal Reserve estimates 34 states in the USA have a Gini coefficient between 0.30 and 0.35, with the state of Maine the lowest.^27 At the county and municipality levels, the pre-tax Gini index ranged from 0.21 to 0.65 in 2010 across the United States, according to Census Bureau estimates.^28

Income Gini coefficient, United States, 1947–2011

Year   Pre-tax Gini   Comments
1947   0.413          (estimated)
1967   0.397          (first year reported)
1968   0.386
1970   0.394
1980   0.403
1990   0.428
2000   0.462
2005   0.469
2006   0.470
2007   0.463
2008   0.467
2009   0.468
2010   0.469
2011   0.477

Regional income Gini indices

According to UNICEF, the Latin America and Caribbean region had the highest net income Gini index in the world at 48.3, on an unweighted average basis, in 2008.
The remaining regional averages were: sub-Saharan Africa (44.2), Asia (40.4), Middle East and North Africa (39.2), Eastern Europe and Central Asia (35.4), and high-income countries (30.9). Using the same method, the United States is claimed to have a Gini index of 36, while South Africa had the highest income Gini index score of 67.8.^29

World income Gini index since the 1800s

The table below presents the estimated world income Gini index over the last 200 years, as calculated by Milanovic.^31 Taking the income distribution of all human beings, worldwide income inequality has been constantly increasing since the early 19th century. There was a steady increase in the global income inequality Gini score from 1820 to 2002, with a significant increase between 1980 and 2002. This trend appears to have peaked and begun a reversal with rapid economic growth in emerging economies, particularly in the large populations of BRIC countries.^32

Income Gini coefficient, World, 1820–2005

Year   World Gini index^9^29^33
1820   0.43
1850   0.53
1870   0.56
1913   0.61
1929   0.62
1950   0.64
1960   0.64
1980   0.66
2002   0.71
2005   0.68

The Gini coefficient is widely used in fields as diverse as sociology, economics, health science, ecology, engineering and agriculture.^34 For example, in social sciences and economics, in addition to income Gini coefficients, scholars have published education Gini coefficients and opportunity Gini coefficients.

Gini coefficient of education

The education Gini index estimates the inequality in education for a given population.^35 It is used to discern trends in social development through educational attainment over time. From a study of 85 countries, Thomas et al. estimate Mali had the highest education Gini index of 0.92 in 1990 (implying very high inequality in educational attainment across the population), while the United States had the lowest education inequality Gini index of 0.14.
Between 1960 and 1990, South Korea, China and India had the fastest drop in the education inequality Gini index. They also claim the education Gini index for the United States slightly increased over the 1980–1990 period.

Gini coefficient of opportunity

Similar in concept to the income Gini coefficient, the opportunity Gini coefficient measures inequality of opportunity.^36^37^38 The concept builds on Amartya Sen's suggestion^39 that inequality coefficients of social development should be premised on the process of enlarging people's choices and enhancing their capabilities, rather than on the process of reducing income inequality. Kovacevic, in a review of the opportunity Gini coefficient, explains that the coefficient estimates how well a society enables its citizens to achieve success in life, where the success is based on a person's choices, efforts and talents, not his background defined by a set of predetermined circumstances at birth, such as gender, race, place of birth, parents' income, and circumstances beyond the control of that individual. In 2003, Roemer^36^40 reported that Italy and Spain exhibited the largest opportunity inequality Gini index amongst advanced economies.

Gini coefficients and income mobility

In 1978, A. Shorrocks introduced a measure based on income Gini coefficients to estimate income mobility.^41 This measure, generalized by Maasoumi and Zandvakili,^42 is now generally referred to as the Shorrocks index, sometimes as the Shorrocks mobility index or Shorrocks rigidity index. It attempts to estimate whether the income inequality Gini coefficient is permanent or temporary, and to what extent a country or region enables economic mobility to its people, so that they can move from one income quantile (e.g. the bottom 20%) to another (e.g. the middle 20%) over time. In other words, the Shorrocks index compares inequality of short-term earnings, such as the annual income of households, to inequality of long-term earnings, such as 5-year or 10-year total income for the same households.
The Shorrocks index is calculated in a number of different ways, a common approach being the ratio of income Gini coefficients between short-term and long-term for the same region or country.^43

A 2010 study using social security income data for the United States since 1937 and Gini-based Shorrocks indices concludes that its income mobility has had a complicated history, primarily due to the mass influx of women into the country's labor force after World War II. Income inequality and income mobility trends have been different for men and women workers between 1937 and the 2000s. When men and women are considered together, the Gini coefficient-based Shorrocks index trends imply that long-term income inequality has been substantially reduced among all workers in recent decades for the United States.^43 Other scholars, using just 1990s data or other short periods, have come to different conclusions.^44 For example, Sastre and Ayala conclude from their study of income Gini coefficient data between 1993 and 1998 for six developed economies that France had the least income mobility, Italy the highest, and the United States and Germany intermediate levels of income mobility over those 5 years.^45

Features of Gini coefficient

The Gini coefficient has features that make it useful as a measure of dispersion in a population, and of inequalities in particular.^46 It is a ratio analysis method, making it easier to interpret. It also avoids references to a statistical average or position unrepresentative of most of the population, such as per capita income or gross domestic product. For a given time interval, the Gini coefficient can therefore be used to compare diverse countries and different regions or groups within a country; for example states, counties, urban versus rural areas, gender and ethnic groups. Gini coefficients can be used to compare income distribution over time, making it possible to see whether inequality is increasing or decreasing independent of absolute incomes.
Other useful features of the Gini coefficient include:^47^48^49

• Anonymity: it does not matter who the high and low earners are.
• Scale independence: the Gini coefficient does not consider the size of the economy, the way it is measured, or whether it is a rich or poor country on average.
• Population independence: it does not matter how large the population of the country is.
• Transfer principle: if income (less than the difference) is transferred from a rich person to a poor person, the resulting distribution is more equal.

Limitations of Gini coefficient

The Gini coefficient is a relative measure. Its proper use and interpretation is controversial.^50 Mellor explains^51 that it is possible for the Gini coefficient of a developing country to rise (due to increasing inequality of income) while the number of people in absolute poverty decreases. This is because the Gini coefficient measures relative, not absolute, wealth. Kwok concludes^52 that changing income inequality, measured by Gini coefficients, can be due to structural changes in a society such as a growing population (baby booms, aging populations, increased divorce rates, extended family households splitting into nuclear families, emigration, immigration) and income mobility.

Gini coefficients are simple, and this simplicity can lead to oversights and can confuse the comparison of different populations; for example, while both Bangladesh (per capita income of $1,693) and the Netherlands (per capita income of $42,183) had an income Gini index of 0.31 in 2010,^53 the quality of life, economic opportunity and absolute income in these countries are very different; i.e., countries may have identical Gini coefficients but differ greatly in wealth. Basic necessities may be available to all in a developed economy, while in an undeveloped economy with the same Gini coefficient, basic necessities may be unavailable to most or unequally available, due to lower absolute wealth.

Different income distributions with the same Gini coefficient

Even when the total income of a population is the same, in certain situations two countries with different income distributions can have the same Gini index (e.g. cases when income Lorenz curves cross).^46 Table A illustrates one such situation. Both countries have a Gini index of 0.2, but the average income distributions for household groups are different.

Table A. Different income with the same Gini index^46

Household group   Country A annual income ($)   Country B annual income ($)
1                 20,000                        9,000
2                 30,000                        40,000
3                 40,000                        48,000
4                 50,000                        48,000
5                 60,000                        55,000
Total income      200,000                       200,000
Country's Gini    0.2                           0.2

As another example, in a population where the lowest 50% of individuals have no income and the other 50% have equal income, the Gini coefficient is 0.5; whereas for another population where the lowest 75% of people have 25% of income and the top 25% have 75% of the income, the Gini index is also 0.5. Economies with similar incomes and Gini coefficients can have very different income distributions. Bellù and Liberati claim that ranking income inequality between two different populations based on their Gini indices is sometimes not possible, or misleading.^54

Extreme wealth inequality, yet low income Gini coefficient

A Gini index does not contain information about absolute national or personal incomes. Populations can have very low income Gini indices yet simultaneously a very high wealth Gini index. By measuring inequality in income, the Gini ignores the differential efficiency of use of household income. By ignoring wealth (except as it contributes to income), the Gini can create the appearance of inequality when the people compared are at different stages in their life.
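Table A's claim can be verified with the simplified discrete formula from the calculation section; the helper `gini` below is illustrative, not from the source.

```python
def gini(incomes):
    # Population Gini: G = 2 * sum(i * y_i) / (n * sum(y)) - (n + 1) / n
    y = sorted(incomes)
    n, total = len(y), sum(y)
    return 2.0 * sum(r * v for r, v in enumerate(y, start=1)) / (n * total) - (n + 1) / n

country_a = [20_000, 30_000, 40_000, 50_000, 60_000]
country_b = [9_000, 40_000, 48_000, 48_000, 55_000]
print(round(gini(country_a), 3), round(gini(country_b), 3))  # 0.2 0.2
```

Despite very different household incomes, both distributions yield exactly the same coefficient.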
Wealthy countries such as Sweden can show a low Gini coefficient for disposable income of 0.31, thereby appearing equal, yet have a very high Gini coefficient for wealth of 0.79 to 0.86, suggesting an extremely unequal wealth distribution in its society.^55^56 These factors are not assessed in income-based Gini.

Table B. Same income distributions but different Gini index

Country A                          Country B
Household   Annual income ($)      Households   Combined annual income ($)
1           20,000                 1 & 2        50,000
2           30,000
3           40,000                 3 & 4        90,000
4           50,000
5           60,000                 5 & 6        130,000
6           70,000
7           80,000                 7 & 8        170,000
8           90,000
9           120,000                9 & 10       270,000
10          150,000
Total       710,000                Total        710,000
Gini        0.303                  Gini         0.293

Small sample bias – sparsely populated regions more likely to have low Gini coefficient

The Gini index has a downward bias for small populations.^57 Counties or states or countries with small populations and less diverse economies will tend to report small Gini coefficients. For economically diverse large population groups, a much higher coefficient is expected than for each of its regions. Taking the world economy as one, and the income distribution for all human beings, for example, different scholars estimate the global Gini index to range between 0.61 and 0.68.^9^10

As with other inequality coefficients, the Gini coefficient is influenced by the granularity of the measurements. For example, five 20% quantiles (low granularity) will usually yield a lower Gini coefficient than twenty 5% quantiles (high granularity) for the same distribution. Philippe Monfort has shown that using inconsistent or unspecified granularity limits the usefulness of Gini coefficient measurements.^58

The Gini coefficient measure gives different results when applied to individuals instead of households, for the same economy and same income distributions. If household data is used, the measured value of income Gini depends on how the household is defined.
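Table B's two values can be reproduced the same way: grouping the ten households into five two-household units lowers the measured Gini even though total income is unchanged. The `gini` helper below is illustrative, not from the source.

```python
def gini(incomes):
    # Population Gini: G = 2 * sum(i * y_i) / (n * sum(y)) - (n + 1) / n
    y = sorted(incomes)
    n, total = len(y), sum(y)
    return 2.0 * sum(r * v for r, v in enumerate(y, start=1)) / (n * total) - (n + 1) / n

households = [20, 30, 40, 50, 60, 70, 80, 90, 120, 150]   # income in thousands
# Combine households pairwise: 1&2, 3&4, 5&6, 7&8, 9&10
pairs = [a + b for a, b in zip(households[0::2], households[1::2])]
print(pairs)                                               # [50, 90, 130, 170, 270]
print(round(gini(households), 3), round(gini(pairs), 3))   # 0.303 0.293
```

This is the granularity effect in miniature: coarser grouping smooths out within-group differences and pulls the coefficient down.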
When different populations are not measured with consistent definitions, comparison is not meaningful. Deininger and Squire (1996) show that Gini coefficients based on individual income, rather than household income, are different. For the United States, for example, they find that the individual income-based Gini index was 0.35, while for France they report an individual income-based Gini index of 0.43. According to their individual-focused method, in the 108 countries they studied, South Africa had the world's highest Gini index at 0.62, Malaysia had Asia's highest Gini index at 0.5, Brazil the highest at 0.57 in the Latin America and Caribbean region, and Turkey the highest at 0.5 among OECD countries.^59

Table C. Household money income distributions and Gini index, USA^60

Income bracket (in 2010 adjusted dollars)   % of population, 1979   % of population, 2010
Under $15,000                               14.6%                   13.7%
$15,000 – $24,999                           11.9%                   12.0%
$25,000 – $34,999                           12.1%                   10.9%
$35,000 – $49,999                           15.4%                   13.9%
$50,000 – $74,999                           22.1%                   17.7%
$75,000 – $99,999                           12.4%                   11.4%
$100,000 – $149,999                         8.3%                    12.1%
$150,000 – $199,999                         2.0%                    4.5%
$200,000 and over                           1.2%                    3.9%
Total households                            80,776,000              118,682,000
United States' Gini (pre-tax basis)         0.404                   0.469

Gini coefficient is unable to discern the effects of structural changes in populations^52

Expanding on the importance of life-span measures, the Gini coefficient as a point-estimate of equality at a certain time ignores life-span changes in income. Typically, increases in the proportion of young or old members of a society will drive apparent changes in equality, simply because people generally have lower incomes and wealth when they are young than when they are old. Because of this, factors such as age distribution within a population and mobility within income classes can create the appearance of inequality when none exists, once demographic effects are taken into account.
Thus a given economy may have a higher Gini coefficient at any one point in time compared to another, while the Gini coefficient calculated over individuals' lifetime income is actually lower than that of the apparently more equal (at a given point in time) economy.^13 Essentially, what matters is not just inequality in any particular year, but the composition of the distribution over time.

Kwok claims the income Gini index for Hong Kong has been high (0.434 in 2010^53), in part because of structural changes in its population. Over recent decades, Hong Kong has witnessed increasing numbers of small households, elderly households and elderly living alone. The combined income is now split into more households. Many old people are living separately from their children in Hong Kong. These social changes have caused substantial changes in household income distribution. The income Gini coefficient, claims Kwok, does not discern these structural changes in its society.^52 Household money income distribution for the United States, summarized in Table C of this section, confirms that this issue is not limited to just Hong Kong. According to the US Census Bureau, between 1979 and 2010, the population of the United States experienced structural changes in overall households, the income for all income brackets increased in inflation-adjusted terms, household income distributions shifted into higher income brackets over time, and the income Gini coefficient increased.^60^61

Another limitation of the Gini coefficient is that it is not a proper measure of egalitarianism, as it only measures income dispersion. For example, if two equally egalitarian countries pursue different immigration policies, the country accepting a higher proportion of low-income or impoverished migrants will report a higher Gini coefficient and therefore may appear to exhibit more income inequality.

Gini coefficient falls yet the poor get poorer, Gini coefficient rises yet everyone gets richer

Arnold describes one limitation of the Gini coefficient to be income distribution situations where it misleads. The income of the poorest fifth of households can be lower when the Gini coefficient is lower, than when the poorest income bracket is earning a larger percentage of all income. Table D illustrates this case, where the lowest income bracket has an average household market income of $500 per year at a Gini index of 0.51, and zero income at a Gini index of 0.48. This is counter-intuitive, and the Gini coefficient cannot tell what is happening to each income bracket or the absolute income, cautions Arnold.

Feldstein similarly explains one limitation of the Gini coefficient as its focus on relative income distribution, rather than real levels of poverty and prosperity in society.^64 He claims Gini coefficient analysis is limited because in many situations it intuitively implies inequality that violates the so-called Pareto improvement principle. The Pareto improvement principle, named after the Italian economist Vilfredo Pareto, states that a social, economic or income change is good if it makes one or more people better off without making anyone else worse off. The Gini coefficient can rise even if some or all income brackets experience a rising income. Feldstein's explanation is summarized in Table D. The table shows that in a growing economy, consistent with the Pareto improvement principle, where the income of every segment of the population has increased from one year to the next, the income inequality Gini coefficient can rise too.

Table D. Effect of income changes on the Gini index

Income bracket   Year 1 annual income ($)   Year 2 annual income ($)   Year 3 annual income ($)
Bottom 20%       0                          500                        0
20% – 40%        1,000                      1,200                      500
40% – 60%        2,000                      2,200                      1,000
60% – 80%        5,000                      5,500                      2,000
Top 20%          7,000                      12,000                     2,500
Country's Gini   0.48                       0.51                       0.43
(From Year 1 to Year 2 everyone is better off; from Year 2 to Year 3 everyone is poorer.)
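Table D's Gini values, and the behaviour Feldstein describes, can be reproduced with the simplified discrete formula; the `gini` helper is illustrative, not from the source.

```python
def gini(incomes):
    # Population Gini: G = 2 * sum(i * y_i) / (n * sum(y)) - (n + 1) / n
    y = sorted(incomes)
    n, total = len(y), sum(y)
    return 2.0 * sum(r * v for r, v in enumerate(y, start=1)) / (n * total) - (n + 1) / n

year1 = [0, 1000, 2000, 5000, 7000]
year2 = [500, 1200, 2200, 5500, 12000]   # every bracket better off than in year 1
year3 = [0, 500, 1000, 2000, 2500]       # every bracket worse off than in year 2
ginis = [round(gini(y), 2) for y in (year1, year2, year3)]
print(ginis)  # [0.48, 0.51, 0.43]
```

The coefficient rises when everyone gains (Year 1 → Year 2) and falls when everyone loses (Year 2 → Year 3), exactly the counter-intuitive pattern the table shows.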
In contrast, in another economy, if everyone gets poorer and is worse off, income inequality is less and the Gini coefficient lower.^65^66

Inability to value benefits and income from informal economy affects Gini coefficient accuracy

Some countries distribute benefits that are difficult to value. Countries that provide subsidized housing, medical care, education or other such services are difficult to value objectively, as the value depends on the quality and extent of the benefit. In the absence of free markets, valuing these income transfers as household income is subjective. The theoretical model of the Gini coefficient is limited to accepting correct or incorrect subjective assumptions.

In subsistence-driven and informal economies, people may have significant income in forms other than money, for example through subsistence farming or bartering. These incomes tend to accrue to the segment of population that is below the poverty line or very poor, in emerging and transitional economy countries such as those in sub-Saharan Africa, Latin America, Asia and Eastern Europe. The informal economy accounts for over half of global employment and as much as 90 per cent of employment in some of the poorer sub-Saharan countries with high official Gini inequality coefficients. Schneider et al., in their 2010 study of 162 countries,^67 report that about 31.2%, or about $20 trillion, of the world's GDP is informal. In developing countries, the informal economy predominates for all income brackets except for the richer, urban upper-income-bracket populations.
Even in developed economies, between 8% (United States) and 27% (Italy) of each nation's GDP is informal, and the resulting informal income predominates as a livelihood activity for those in the lowest income brackets.^68 The value and distribution of the incomes from the informal or underground economy is difficult to quantify, making true income Gini coefficient estimates difficult.^64^65 Different assumptions and quantifications of these incomes will yield different Gini coefficients.^69^70^71

Gini has some mathematical limitations as well. It is not additive, and different sets of people cannot be averaged to obtain the Gini coefficient of all the people in the sets.

Alternatives to Gini coefficient

Given the limitations of the Gini coefficient, other statistical methods are used in combination or as an alternative measure of population dispersity. For example, entropy measures are frequently used (e.g. the Theil index and the Atkinson index). These measures attempt to compare the distribution of resources by intelligent agents in the market with a maximum entropy random distribution, which would occur if these agents acted like non-intelligent particles in a closed system following the laws of statistical physics.

Relation to other statistical measures

The Gini coefficient is closely related to the AUC (Area Under the receiver operating characteristic Curve) measure of performance.^72 The relation follows the formula

$AUC = (G+1)/2.$

The Gini coefficient is also closely related to the Mann–Whitney U statistic. The Gini index is also related to the Pietra index — both are measures of statistical heterogeneity and are derived from the Lorenz curve and the diagonal line.^73^74

In certain fields such as ecology, Simpson's index is used, which is related to Gini. The Simpson index scales as the mirror opposite of Gini; that is, with increasing diversity the Simpson index takes a smaller value (0 means maximum, 1 means minimum heterogeneity per the classic Simpson index).
The Simpson index is sometimes transformed by subtracting the observed value from the maximum possible value of 1; it is then known as the Gini–Simpson index.^75

Other uses

Although the Gini coefficient is most popular in economics, it can in theory be applied in any field of science that studies a distribution. For example, in ecology the Gini coefficient has been used as a measure of biodiversity, where the cumulative proportion of species is plotted against the cumulative proportion of individuals.^76 In health, it has been used as a measure of the inequality of health-related quality of life in a population.^77 In education, it has been used as a measure of the inequality of universities.^78 In chemistry it has been used to express the selectivity of protein kinase inhibitors against a panel of kinases.^79 In engineering, it has been used to evaluate the fairness achieved by Internet routers in scheduling packet transmissions from different flows of traffic.^80 In statistics, when building decision trees, it is used to measure the purity of possible child nodes, with the aim of maximising the average purity of the two child nodes when splitting, and it has been compared with other equality measures.^81

The Gini coefficient is sometimes used for the measurement of the discriminatory power of rating systems in credit risk management.^82 The discriminatory power refers to a credit risk model's ability to differentiate between defaulting and non-defaulting clients. The formula $G_1$, in the calculation section above, may be used for the final model and also at the individual model factor level, to quantify the discriminatory power of individual factors. It is related to the accuracy ratio in population assessment models.

References

1. ^ Gini, C. (1912). "Variabilità e mutabilità" (Italian; 'Variability and Mutability'), C. Cuppini, Bologna, 156 pages. Reprinted in Memorie di metodologica statistica (Ed. Pizetti E, Salvemini, T). Rome: Libreria Eredi Virgilio Veschi (1955).
2. ^ Gini, C. (1909). "Concentration and dependency ratios" (in Italian). English translation in Rivista di Politica Economica, 87 (1997), 769–789.
3. ^ "Current Population Survey (CPS) – Definitions and Explanations". US Census Bureau.
4. ^ Note: the Gini coefficient becomes 1 only in a large population where one person has all the income. In the special case of just two people, where one has no income and the other has all the income, the Gini coefficient is 0.5. For a set of 5 people, where 4 have no income and the fifth has all the income, the Gini coefficient is 0.8. See: FAO, United Nations – Inequality Analysis, The Gini Index Module (PDF format), fao.org.
5. ^ Gini, C. (1936). "On the Measure of Concentration with Special Reference to Income and Statistics", Colorado College Publication, General Series No. 208, 73–79.
6. ^ ^a ^b ^c ^d "Income distribution – Inequality: Income distribution – Inequality – Country tables". OECD. 2012.
7. ^ "South Africa Overview". The World Bank. 2011.
8. ^ Ali, Mwabu and Gesami (March 2002). "Poverty reduction in Africa: Challenges and policy options" (PDF). African Economic Research Consortium, Nairobi.
9. ^ ^a ^b ^c ^d Evan Hillebrand (June 2009). "Poverty, Growth, and Inequality over the Next 50 Years" (PDF). FAO, United Nations – Economic and Social Development Department.
10. ^ ^a ^b ^c "The Real Wealth of Nations: Pathways to Human Development, 2010". United Nations Development Program. 2011. pp. 72–74. ISBN 9780230284456.
11. ^ Shlomo Yitzhaki (1998). "More than a Dozen Alternative Ways of Spelling Gini". Economic Inequality 8: 13–30.
12. ^ Myung Jae Sung (August 2010). Population Aging, Mobility of Quarterly Incomes, and Annual Income Inequality: Theoretical Discussion and Empirical Findings.
13. ^ ^a ^b Blomquist, N. (1981). "A comparison of distributions of annual and lifetime income: Sweden around 1970". Review of Income and Wealth 27 (3): 243–264. doi:10.1111/
14. ^ "Gini Coefficient". Wolfram Mathworld.
15. ^ Jasso, Guillermina (1979). "On Gini's Mean Difference and Gini's Index of Concentration". American Sociological Review 44 (5): 867–870.
16. ^ Firebaugh, Glenn (1999). "Empirics of World Income Inequality". American Journal of Sociology 104 (6): 1597–1630. doi:10.1086/210218. See also ——— (2003). "Inequality: What it is and how it is measured". The New Geography of Global Income Inequality. Cambridge, MA: Harvard University Press. ISBN 0-674-01067-1.
17. ^ N. C. Kakwani (April 1977). "Applications of Lorenz Curves in Economic Analysis". Econometrica 45 (3): 719–728. doi:10.2307/1911684. JSTOR 1911684.
18. ^ ^a ^b Chu, Davoodi, Gupta (March 2000). "Income Distribution and Tax and Government Social Spending Policies in Developing Countries". International Monetary Fund.
19. ^ "Monitoring quality of life in Europe – Gini index". Eurofound. 26 August 2009.
20. ^ Chen Wang, Koen Caminada, and Kees Goudswaard (July–September 2012). "The redistributive effect of social transfer programmes and taxes: A decomposition across countries". International Social Security Review 65 (3): 27–48. doi:10.1111/j.1468-246X.2012.01435.x.
21. ^ Bob Sutcliffe (April 2007). "Postscript to the article 'World inequality and globalization' (Oxford Review of Economic Policy, Spring 2004)". Retrieved 2007-12-13.
22. ^ Income distribution – Inequality. Gini coefficient after taxes and transfers. OECD. StatExtracts. Retrieved: 24 December 2012.
23. ^ "A brief look at post-war U.S. Income Inequality". United States Census Bureau. 1996.
24. ^ "Table 3. Income Distribution Measures Using Money Income and Equivalence-Adjusted Income: 2007 and 2008". Income, Poverty, and Health Insurance Coverage in the United States: 2008. United States Census Bureau. p. 17.
25. ^ "Income, Poverty and Health Insurance Coverage in the United States: 2009". Newsroom. United States Census Bureau.
26. ^ "Income, Poverty and Health Insurance Coverage in the United States: 2011". Newsroom. United States Census Bureau. September 12, 2012. Retrieved January 23, 2013.
27. ^ Daniel H. Cooper, Byron F. Lutz, and Michael G. Palumbo (September 22, 2011). "Quantifying the Role of Federal and State Taxes in Mitigating Income Inequality". Federal Reserve, Boston, United States.
28. ^ Adam Bee (February 2012). "Household Income Inequality Within U.S. Counties: 2006–2010". Census Bureau, U.S. Department of Commerce.
29. ^ ^a ^b Isabel Ortiz and Matthew Cummins (April 2011). "Global Inequality: Beyond the Bottom Billion". UNICEF. p. 26.
30. ^ Berg, Andrew G.; Ostry, Jonathan D. (2011). "Equality and Efficiency". Finance and Development (International Monetary Fund) 48 (3). Retrieved September 10, 2012.
31. ^ Milanovic, Branko (2009). "Global Inequality and the Global Inequality Extraction Ratio". World Bank.
32. ^ Branko Milanovic (September 2011). "More or Less". Finance & Development (International Monetary Fund) 48 (3).
33. ^ Albert Berry and John Serieux (September 2006). "Riding the Elephants: The Evolution of World Economic Growth and Income Distribution at the End of the Twentieth Century (1980–2000)". United Nations (DESA Working Paper No. 27).
34. ^ Sadras, V. O.; Bongiovanni, R. (2004). "Use of Lorenz curves and Gini coefficients to assess yield inequality within paddocks". Field Crops Research 90 (2–3): 303–310. doi:10.1016/
35. ^ Thomas, Wang, Fan (January 2001). "Measuring education inequality – Gini coefficients of education". The World Bank.
36. ^ ^a ^b John E. Roemer (September 2006). "Economic Development as Opportunity Equalization". Yale University.
37. ^ John Weymark (2003). "Generalized Gini Indices of Equality of Opportunity". Journal of Economic Inequality 1 (1): 5–24. doi:10.1023/A:1023923807503.
38. ^ Milorad Kovacevic (November 2010). "Measurement of Inequality in Human Development – A Review". United Nations Development Program.
39. ^ Anthony Atkinson (1999). "The contributions of Amartya Sen to Welfare Economics". Scand. J. of Economics 101 (2): 173–190. doi:10.1111/1467-9442.00151.
40. ^ Roemer et al. (March 2003). "To what extent do fiscal regimes equalize opportunities for income acquisition among citizens?". Journal of Public Economics 87 (3–4): 539–565. doi:10.1016/
41. ^ Shorrocks, Anthony (December 1978). "Income Inequality and Income Mobility". Journal of Economic Theory 19 (2): 376–393. doi:10.1016/0022-0531(78)90101-1.
42. ^ Maasoumi, E.; Zandvakili, Sourushe (1986). "A class of generalized measures of mobility with applications". Economics Letters 22: 97–102. doi:10.1016/0165-1765(86)90150-3.
43. ^ ^a ^b Wojciech Kopczuk, Emmanuel Saez and Jae Song (2010). "Earnings Inequality and Mobility in the United States: Evidence from Social Security Data Since 1937". The Quarterly Journal of Economics 125 (1): 91–128. doi:10.1162/qjec.2010.125.1.91.
44. ^ Wen-Hao Chen (March 2009). "Cross-National Differences in Income Mobility: Evidence from Canada, the United States, Great Britain and Germany". Review of Income and Wealth 55 (1): 75–100. doi:
45. ^ Mercedes Sastre and Luis Ayala (2002). "Europe vs. The United States: Is There a Trade-Off Between Mobility and Inequality?". Institute for Social and Economic Research, University of Essex.
46. ^ ^a ^b ^c Lorenzo Giovanni Bellù and Paolo Liberati (2006). "Inequality Analysis – The Gini Index". Food and Agriculture Organization, United Nations.
47. ^ Julie A. Litchfield (March 1999). "Inequality: Methods and Tools". The World Bank.
48. ^ Stefan V. Stefanescu (2009). "Measurement of the Bipolarization Events". World Academy of Science, Engineering and Technology 57: 929–936.
49. ^ Ray, Debraj (1998). Development Economics. Princeton, NJ: Princeton University Press. p. 188. ISBN 0-691-01706-9.
50. ^ Thomas Garrett (Spring 2010). "U.S. Income Inequality: It's Not So Bad". Inside the Vault (U.S. Federal Reserve, St Louis) 14 (1).
51. ^ John W. Mellor (June 2, 1989). Dramatic Poverty Reduction in the Third World: Prospects and Needed Action. International Food Policy Research Institute. pp. 18–20.
52. ^ ^a ^b ^c KWOK Kwok Chuen (2010). "Income Distribution of Hong Kong and the Gini Coefficient". The Government of Hong Kong, China.
53. ^ ^a ^b "The Real Wealth of Nations: Pathways to Human Development (2010 Human Development Report – see Stat Tables)". United Nations Development Program. 2011. pp. 152–156.
54. ^ Fernando G De Maio (2007). "Income inequality measures". Journal of Epidemiology and Community Health 61 (10): 849–852. doi:10.1136/jech.2006.052969. PMC 2652960. PMID 17873219.
55. ^ Domeij, David; Flodén, Martin (2010). "Inequality Trends in Sweden 1978–2004". Review of Economic Dynamics 13 (1): 179–208. doi:10.1016/j.red.2009.10.005.
56. ^ Domeij and Klein (January 2000). "Accounting for Swedish wealth inequality".
57. ^ George Deltas (February 2003). "The Small-Sample Bias of the Gini Coefficient: Results and Implications for Empirical Research". The Review of Economics and Statistics 85 (1): 226–234. doi:
58. ^ Philippe Monfort (2008). "Convergence of EU regions – Measures and evolution". European Union – Europa. p. 6.
59. ^ Klaus Deininger and Lyn Squire (1996). "A New Data Set Measuring Income Inequality". World Bank Economic Review 10 (3): 565–591. doi:10.1093/wber/10.3.565.
60. ^ ^a ^b "Income, Poverty, and Health Insurance Coverage in the United States: 2010 (see Table A-2)". Census Bureau, Dept of Commerce, United States. September 2011.
61. ^ Congressional Budget Office: Trends in the Distribution of Household Income Between 1979 and 2007. October 2011. See pp. i–x, with definitions on pp. ii–iii.
62. ^ Roger Arnold (2007). Economics. pp. 573–581. ISBN 978-0324538014.
63. ^ Frank Cowell (2007). "Inequality decomposition – three bad measures". Bulletin of Economic Research 40 (4): 309–311. doi:10.1111/j.1467-8586.1988.tb00274.x.
64. ^ ^a ^b Martin Feldstein (August 1998). "Is income inequality really the problem? (Overview)". U.S. Federal Reserve.
65. ^ ^a ^b Taylor and Weerapana (2009). Principles of Microeconomics: Global Financial Crisis Edition. pp. 416–418. ISBN 978-1439078211.
66. ^ Martin Feldstein (1998). "Income inequality and poverty". National Bureau of Economic Research.
67. ^ Friedrich Schneider et al. (2010). "New Estimates for the Shadow Economies all over the World". International Economic Journal 24 (4): 443–461. doi:10.1080/10168737.2010.525974.
68. ^ The Informal Economy. International Institute for Environment and Development, United Kingdom. 2011. ISBN 978-1-84369-822-7.
69. ^ J. Barkley Rosser, Jr., Marina V. Rosser, and Ehsan Ahmed (March 2000). "Income Inequality and the Informal Economy in Transition Economies". Journal of Comparative Economics 28 (1): 156–171.
70. ^ Gorana Krstić and Peter Sanfey (February 2010). "Earnings inequality and the informal economy: evidence from Serbia". European Bank for Reconstruction and Development.
71. ^ Friedrich Schneider (December 2004). "The Size of the Shadow Economies of 145 Countries all over the World: First Results over the Period 1999 to 2003".
72. ^ Hand, David J.; Robert J. Till (2001). "A Simple Generalisation of the Area Under the ROC Curve for Multiple Class Classification Problems". Machine Learning 45 (2): 171–186. doi:10.1023/
73. ^ Iddo Eliazar and Igor Sokolov (2010). "Measuring statistical heterogeneity: The Pietra index". Physica A: Statistical Mechanics and Its Applications 389 (1): 117–125. doi:10.1016/
74. ^ Wen-Chung Lee (1999). "Probabilistic Analysis of Global Performances of Diagnostic Tests: Interpreting the Lorenz Curve-Based Summary Measures". Statistics in Medicine 18 (4): 455–471. doi:10.1002/(SICI)1097-0258(19990228)18:4<455::AID-SIM44>3.0.CO;2-A. PMID 10070686.
75. ^ Robert K. Peet (1974). "The Measurement of Species Diversity". Annual Review of Ecology and Systematics 5: 285–307. doi:10.1146/annurev.es.05.110174.001441. JSTOR 2096890.
76. ^ Wittebolle, Lieven; Marzorati, Massimo et al. (2009). "Initial community evenness favours functionality under selective stress". Nature 458 (7238): 623–626. doi:10.1038/nature07840. PMID
77. ^ Asada, Yukiko (2005). "Assessment of the health of Americans: the average health-related quality of life and its inequality across individuals and groups". Population Health Metrics 3: 7. doi:10.1186/1478-7954-3-7. PMC 1192818. PMID 16014174.
78. ^ Halffman, Willem; Leydesdorff, L (2010). "Is Inequality Among Universities Increasing? Gini Coefficients and the Elusive Rise of Elite Universities". Minerva 48 (1): 55–72. doi:10.1007/s11024-010-9141-3. PMC 2850525. PMID 20401157.
79. ^ Graczyk, Piotr (2007). "Gini Coefficient: A New Way To Express Selectivity of Kinase Inhibitors against a Family of Kinases". Journal of Medicinal Chemistry 50 (23): 5773–5779. doi:10.1021/jm070562u. PMID 17948979.
80. ^ Shi, Hongyuan; Sethu, Harish (2003). "Greedy Fair Queueing: A Goal-Oriented Strategy for Fair Real-Time Packet Scheduling". Proceedings of the 24th IEEE Real-Time Systems Symposium. IEEE Computer Society. pp. 345–356. ISBN 0-7695-2044-8.
81. ^ Gonzalez, Luis (2010). "The Similarity between the Square of the Coefficient of Variation and the Gini Index of a General Random Variable". Journal of Quantitative Methods for Economics and Business Administration 10: 5–18. ISSN 1886-516X.
82. ^ George A. Christodoulakis and Stephen Satchell (eds.) (November 2007). The Analytics of Risk Model Validation (Quantitative Finance). Academic Press. ISBN 978-0750681582.

Further reading

• Amiel, Y.; Cowell, F.A. (1999). Thinking about Inequality. Cambridge. ISBN 0-521-46696-2.
• Anand, Sudhir (1983). Inequality and Poverty in Malaysia. New York: Oxford University Press. ISBN 0-19-520153-1.
• Brown, Malcolm (1994). "Using Gini-Style Indices to Evaluate the Spatial Patterns of Health Practitioners: Theoretical Considerations and an Application Based on Alberta Data". Social Science Medicine 38 (9): 1243–1256. doi:10.1016/0277-9536(94)90189-9. PMID 8016689.
• Chakravarty, S. R. (1990). Ethical Social Index Numbers. New York: Springer-Verlag. ISBN 0-387-52274-3.
• Deaton, Angus (1997). Analysis of Household Surveys. Baltimore MD: Johns Hopkins University Press. ISBN 0-585-23787-5.
• Dixon, P. M.; Weiner, J.; Mitchell-Olds, T.; Woodley, R. (1987). "Bootstrapping the Gini coefficient of inequality". Ecology (Ecological Society of America) 68 (5): 1548–1551. doi:10.2307/1939238. JSTOR
• Dorfman, Robert (1979). "A Formula for the Gini Coefficient". The Review of Economics and Statistics (The MIT Press) 61 (1): 146–149. doi:10.2307/1924845. JSTOR 1924845.
• Firebaugh, Glenn (2003). The New Geography of Global Income Inequality. Cambridge MA: Harvard University Press. ISBN 0-674-01067-1.
• Gastwirth, Joseph L. (1972). "The Estimation of the Lorenz Curve and Gini Index". The Review of Economics and Statistics (The MIT Press) 54 (3): 306–316. doi:10.2307/1937992. JSTOR 1937992.
• Giles, David (2004). "Calculating a Standard Error for the Gini Coefficient: Some Further Results". Oxford Bulletin of Economics and Statistics 66 (3): 425–433. doi:10.1111/
• Gini, Corrado (1912). "Variabilità e mutabilità". Reprinted in Memorie di metodologica statistica (Ed. Pizetti E, Salvemini, T). Rome: Libreria Eredi Virgilio Veschi (1955).
• Gini, Corrado (1921). "Measurement of Inequality of Incomes". The Economic Journal (Blackwell Publishing) 31 (121): 124–126. doi:10.2307/2223319. JSTOR 2223319.
• Giorgi, G. M. (1990). "A bibliographic portrait of the Gini ratio". Metron 48: 183–231.
• Karagiannis, E. and Kovacevic, M. (2000). "A Method to Calculate the Jackknife Variance Estimator for the Gini Coefficient". Oxford Bulletin of Economics and Statistics 62: 119–122. doi:10.1111/
• Mills, Jeffrey A.; Zandvakili, Sourushe (1997). "Statistical Inference via Bootstrapping for Measures of Inequality". Journal of Applied Econometrics 12 (2): 133–150. doi:10.1002/(SICI)1099-1255
• Modarres, Reza and Gastwirth, Joseph L. (2006). "A Cautionary Note on Estimating the Standard Error of the Gini Index of Inequality". Oxford Bulletin of Economics and Statistics 68 (3): 385–390.
• Morgan, James (1962). "The Anatomy of Income Distribution". The Review of Economics and Statistics (The MIT Press) 44 (3): 270–283. doi:10.2307/1926398. JSTOR 1926398.
• Ogwang, Tomson (2000). "A Convenient Method of Computing the Gini Index and its Standard Error". Oxford Bulletin of Economics and Statistics 62: 123–129. doi:10.1111/1468-0084.00164.
• Ogwang, Tomson (2004). "Calculating a Standard Error for the Gini Coefficient: Some Further Results: Reply". Oxford Bulletin of Economics and Statistics 66 (3): 435–437. doi:10.1111/
• Xu, Kuan (January 2004). How Has the Literature on Gini's Index Evolved in the Past 80 Years?. Department of Economics, Dalhousie University. Retrieved 2006-06-01. The Chinese version of this paper appears in Xu, Kuan (2003). "How Has the Literature on Gini's Index Evolved in the Past 80 Years?". China Economic Quarterly 2: 757–778.
• Yitzhaki, S. (1991). "Calculating Jackknife Variance Estimators for Parameters of the Gini Method". Journal of Business and Economic Statistics (American Statistical Association) 9 (2): 235–239. doi:10.2307/1391792. JSTOR 1391792.
Ancient Chinese used bamboo sticks as a calculator

Approximately 2,300 years ago the ancient Chinese wrote the world's oldest decimal multiplication table on bamboo sticks. According to experts, it was a very effective calculator that let one do calculations not only with integers but also with fractions. No country in the world had similar calculators at that time.

Five years ago, Beijing's Tsinghua University received a gift of nearly two and a half thousand dirty and moldy bamboo sticks. Most likely, they were found by raiders of ancient tombs and then sold at a market in Hong Kong. According to radiocarbon analysis, this artifact was created in about 305 BC, which corresponds to China's Warring States period.

Despite the military conflicts, this historical period (481, 475 or 453–221 BC) is characterized by flourishing trade and commerce, the spread of iron tools, the construction of large irrigation projects, the development of agriculture, and population growth. By that time, groups of educated citizens professionally engaged in intellectual work had emerged. The Warring States period is often identified with the "golden age" of Chinese philosophy. This period immediately preceded the formation of the Qin Empire.

According to the report on the Nature news portal, each strip is 12.7 mm wide and up to half a meter long. From top to bottom they are covered with ancient writing. According to Chinese historians, this important artifact contains 65 ancient texts written in black ink. Because the threads connecting the strips into a single manuscript scroll had decayed, and some bamboo strips had disappeared while others were broken, transcribing the texts turned into a real puzzle for the researchers.

Scientists noticed a "canvas" consisting of 21 bamboo strips inscribed only with numbers. As Chinese mathematicians suggested, it was the oldest known multiplication table in the world.
When the strips are placed properly, one notices that the top line and the rightmost column contain the same 19 numbers, arranged from right to left and top to bottom, respectively: 0.5, the integers from one to nine, and the multiples of 10 from 10 to 90. As in a modern decimal multiplication table, the number at the intersection of each row and column is the product of the corresponding pair of numbers. The table can thus be used to multiply any whole or half number from 0.5 to 99.5.

According to one working hypothesis, numbers not represented in the table first had to be broken down into components. For example, 22.5 × 35.5 can be transformed as follows: (20 + 2 + 0.5) × (30 + 5 + 0.5). To solve this problem one performs nine multiplications: 20 × 30, 20 × 5, 20 × 0.5, 2 × 30 and so on. The end result is the sum of these partial products. This is quite an effective ancient calculator.

Historians of science note the antiquity of Chinese mathematical practice, but are quite cautious in describing the mathematical theory of the ancient Chinese. Among the earliest known Chinese mathematical treatises are "The Arithmetical Classic of the Gnomon" (Zhou Bi Suan Jing) and the "Mathematical Treatise in Nine Sections" (Chiu Chang Suan Shu), which date back to the 5th–2nd and 3rd–1st centuries BC, respectively. Some scholars mention possible contacts of Chinese mathematicians with Indian ones, but that happened much later, in the 5th–7th centuries AD.

The discovered multiplication table was likely used by Chinese officials to calculate land areas, crop yields or taxes. Such a calculator could also be used for division and for extracting square roots. However, modern scientists are not sure whether such complex operations were performed in that era. In any event, according to Joseph Dauben, a historian of mathematics at New York University, this is the earliest artifact of a decimal multiplication table in the world.
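The component-based multiplication described above is easy to check in code. The sketch below is a modern reconstruction for illustration only; the function names and the exact lookup procedure are assumptions, not a transcription of the artifact:

```python
# The 19 header numbers reportedly inscribed on the strips:
# 0.5, the integers 1..9, and 10, 20, ..., 90.
HEADERS = [0.5] + list(range(1, 10)) + list(range(10, 100, 10))

# The body of the table: the product of every pair of header numbers.
TABLE = {(a, b): a * b for a in HEADERS for b in HEADERS}

def decompose(x):
    """Split a number in [0.5, 99.5] into table-header components."""
    parts = []
    whole = int(x)
    if whole >= 10:
        parts.append(whole // 10 * 10)   # tens component
    if whole % 10:
        parts.append(whole % 10)         # units component
    if x != whole:
        parts.append(0.5)                # the half, if present
    return parts

def table_multiply(x, y):
    """Multiply by looking up component products in the table and summing."""
    return sum(TABLE[(a, b)] for a in decompose(x) for b in decompose(y))

# The article's example: 22.5 × 35.5 via nine table lookups.
print(decompose(22.5))              # [20, 2, 0.5]
print(table_multiply(22.5, 35.5))   # 798.75, equal to 22.5 * 35.5
```

Because every number up to 99.5 splits into at most three components, any such product needs at most nine lookups, which matches the procedure the article describes.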
The American scholar is confident that the ancient Chinese used complex arithmetic for theoretical and commercial purposes in the era of the Warring States. This was before the first emperor, Ying Zheng, unified all of China and took the title Qin Shi Huang (First Emperor of the Qin Dynasty). Later, he ordered many books burned and banned private libraries in an attempt to reverse the country's intellectual tradition.

Until now, a text dating back to the Qin Dynasty (221–206 BC) was considered the oldest Chinese multiplication table. It is a series of short sentences, for example, "six eight forty eight." It contained only the simplest multiplications.

The multiplication tables of ancient Babylon are much older, approximately 4,000 years old, but the set of tables used for multiplication was bulky, with separate tables for multiplication by 1–20, 30, ..., 50. No calculations were possible in Babylon without a large library of tables. Furthermore, the Babylonians did not have a decimal multiplication table. In Europe the first multiplication tables appeared only during the Renaissance era.

Igor Bukker
Equilibrium asset prices and bubbles in a continuous time OLG model

Brito, Paulo (2008): Equilibrium asset prices and bubbles in a continuous time OLG model.

In a Yaari-Blanchard overlapping generations endowment economy, and drawing on the equivalence between Radner (R) and Arrow-Debreu (AD) equilibria, we prove that equilibrium AD prices have an explicit representation as a double integral equation. This allows for an analytic characterization of the relationship between life-cycle and cohort heterogeneity and asset prices. For a simple distribution, we prove that bubbles may exist, and derive conditions for ruling them out.

Item Type: MPRA Paper
Original Title: Equilibrium asset prices and bubbles in a continuous time OLG model
Language: English
Keywords: overlapping generations, asset pricing, bubbles, integral equations, LambertW function
Subjects: G - Financial Economics > G1 - General Financial Markets > G12 - Asset Pricing; Trading volume; Bond Interest Rates; D - Microeconomics > D5 - General Equilibrium and Disequilibrium > D51 - Exchange and Production Economies
Item ID: 10701
Depositing User: Paulo Brito
Date Deposited: 23. Sep 2008 06:53
Last Modified: 24. Feb 2013 14:08
URI: http://mpra.ub.uni-muenchen.de/id/eprint/10701