series convergence mn1 q1

Prove that for the sequence defined by $a_{1}=2$ and $a_{n+1}=\frac{4a_{n}-3}{a_{n}}$ we have $1\leq a_{n}\leq 3$ for all $n\geq 1$; prove that $(a_{n})$ converges and find the limit.

It's a sequence, not a series. Here are some hints:
A) Induction, using the fact that the function defined by $f(x)=\frac{4x-3}{x}$ is increasing (because $f'(x)$ is positive), then induction using $f(a_{n})=a_{n+1}$.
B) $(a_{n})$ is increasing and bounded (is that the word? I don't study maths in English). Good luck.

I only started to learn this. I tried to look on Google for similar solved questions but I didn't find any. Do you have any website which explains the theory of this subject, with some similar solved questions?

First of all, I want to know if you understood my hints; if you didn't, I can write the complete solution. I actually have some similar problems in French and Arabic; wait some minutes so that I can translate and post them.

We consider the function $f$ such that $f(x)=x^{2}+\frac{3}{4}x$ and the sequence $(u_{n})$ defined by $u_{n+1}=f(u_{n})$.
1) Show that $f([0,\frac{1}{4}])\subset [0,\frac{1}{4}]$.
2) Show that for every positive integer $n$: $0\leq u_{n}\leq \frac{1}{4}$.
3) Study the monotonicity of $(u_{n})$, then deduce that it converges.
4) Compute its limit. Good luck.

I can't understand the first condition. You say that the set of values $f$ outputs is a part of its range? On convergence we have the Lagrange test and the Cauchy test; you didn't say anything about those?

We don't have to use Lagrange or Cauchy; those are simple sequences. Let's stay with your problem:
A) We will use mathematical induction. For $n=1$ we have $1\leq 2\leq 3$, so $1\leq a_{1}\leq 3$. We assume that it's true for $n$ and prove it for $n+1$: since $1\leq a_{n}\leq 3$ and $f$ is increasing, we have $1=f(1)\leq f(a_{n})\leq f(3)=3 \Leftrightarrow 1\leq a_{n+1}\leq 3$. End of induction.
B) Let's prove that $(a_{n})$ is increasing: $a_{n+1}-a_{n}=\frac{4a_{n}-3-a_{n}^{2}}{a_{n}}=\frac{-(a_{n}-3)(a_{n}-1)}{a_{n}}\geq 0$. We have now proved that it's increasing, and since it's bounded, it converges. As I said, to compute the limit, solve the equation $f(x)=x$.

The initial value $u_{0}$ is not specified, and that isn't a minor detail. The recursive relation can be written as
$\Delta_{n}= u_{n+1}-u_{n}= u_{n}^{2} - \frac{u_{n}}{4}= f(u_{n})$ ... (1)
(The original post includes a plot of this $f$.) There is only one attractive fixed point, at $x_{0}=0$, and that means that if the sequence converges, it converges to 0. In particular, the sequence converges monotonically for $-\frac{3}{4} \le u_{0} < \frac{1}{4}$, converges with oscillation for $-1 < u_{0} < -\frac{3}{4}$, and diverges for $u_{0}< -1$ or $u_{0}> \frac{1}{4}$. Kind regards

Actually $u_{0}=1/5$. Sorry, I was in a hurry.
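A quick numerical check of the translated exercise (a sketch, assuming $u_{0}=1/5$ as stated above; the iteration and fixed points are the ones given in the thread):

# Iterate u_{n+1} = u_n^2 + (3/4) u_n from u_0 = 1/5 and watch it
# decrease monotonically toward the attractive fixed point 0.
u = 1 / 5
for n in range(12):
    print(f"u_{n} = {u:.10f}")
    u = u * u + 0.75 * u

# Fixed points solve u = u^2 + (3/4)u, i.e. u^2 = u/4: u = 0 or u = 1/4.
# The derivative of u^2 + 3u/4 at u = 0 is 3/4 < 1, so 0 attracts,
# matching the claim that the sequence converges to 0 from u_0 = 1/5.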
Regarding part B): because $a_{n}$ is between 1 and 3, the numerator is negative and the denominator is positive, so the whole thing is negative, not positive? And to find the limit, we put $L$ in place of $a_{n}$ and $a_{n+1}$ and compute the roots?
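A minimal numerical sketch of the main problem (assuming the starting value $a_{1}=2$ used in the induction base case above):

# Iterate a_{n+1} = (4*a_n - 3) / a_n and watch it increase toward 3.
a = 2.0
for n in range(1, 11):
    print(f"a_{n} = {a:.10f}")
    a = (4 * a - 3) / a

# Candidate limits solve f(L) = L: (4L - 3)/L = L, i.e. L^2 - 4L + 3 = 0,
# giving L = 1 or L = 3. Since the sequence increases from a_1 = 2, the
# limit is 3. On the sign question in part B): for 1 < a_n < 3 the factor
# (a_n - 3) is negative and (a_n - 1) is positive, so the numerator
# -(a_n - 3)(a_n - 1) is in fact positive, and the sequence increases.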
{"url":"http://mathhelpforum.com/calculus/171744-series-convergens-mn1-q1.html","timestamp":"2014-04-18T19:43:23Z","content_type":null,"content_length":"65272","record_id":"<urn:uuid:53b9533d-81a7-4d8c-a251-08c6ae946a93>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
Limit of Sequence - max(a,b)

Prove that if $a,b$ are positive real numbers then $\displaystyle\lim_{n\rightarrow \infty} (a^n +b^n)^{1/n} = \max{(a,b)}$.

My attempt is to say that, without loss of generality, $a > b$; then for sufficiently large $n$ we will have $a^n + b^n \approx a^n$, hence $\displaystyle\lim_{n\rightarrow \infty} (a^n +b^n)^{1/n} = \displaystyle\lim_{n\rightarrow \infty}(a^n)^{1/n} = a$. However, the step "for sufficiently large $n$, we will have $a^n + b^n \approx a^n$" does not seem rigorous to me, and I can't think of a way of proving this result more rigorously. Any suggestions would be greatly appreciated.

As you did, assume without loss of generality that $a>b$, and then "take $a$ out of the parentheses": $\sqrt[n]{a^n+b^n}=a\sqrt[n]{1+\left(\frac{b}{a}\right)^n}$, and remember what happens when we have a geometric sequence with ratio $q$ such that $|q|<1$.
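To make the hint fully rigorous, one can squeeze (this completion is not spelled out in the thread itself): with $q = b/a < 1$ we have $0 < q^n \le q < 1$ for all $n \ge 1$, so

$a = a\sqrt[n]{1} \;\le\; a\sqrt[n]{1+q^{n}} \;\le\; a\sqrt[n]{2} \longrightarrow a \quad (n\to\infty),$

since $2^{1/n}\to 1$. By the squeeze theorem, $\lim_{n\to\infty}(a^n+b^n)^{1/n} = a = \max(a,b)$; the case $a=b$ gives $(2a^n)^{1/n} = 2^{1/n}a \to a$ directly.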
{"url":"http://mathhelpforum.com/differential-geometry/110404-limit-sequence-max-b.html","timestamp":"2014-04-16T16:35:13Z","content_type":null,"content_length":"37854","record_id":"<urn:uuid:f846624c-8631-4234-a7ba-62d36933a1f7>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
Neffs, PA Trigonometry Tutor

Find a Neffs, PA Trigonometry Tutor

...I was the kid on the playground using recess to teach her friends to subtract. After graduating from Lehigh University with a BA in Mathematics and an M.Ed. in Secondary Education, I spent six years teaching high school math. I am now a stay-at-home mom with my two-year-old daughter, but I miss working with older students. 12 Subjects: including trigonometry, calculus, statistics, geometry

...I have worked in the field of Environmental Research, but my heart has always been in working with students and sharing my passion for the sciences! I have experience tutoring math and science students. I work with students to build a stronger understanding of their course material and to raise not only their averages, but their confidence. 31 Subjects: including trigonometry, chemistry, physics, geometry

...I am looking forward to working with you! Thank you for your time. I have personally taught several classes in Calculus AB and BC, where differential equations are a single part of the course. I have also tutored students in Calculus and Differential Equations (not through WyzAnt, but through other programs). 35 Subjects: including trigonometry, chemistry, calculus, geometry

...I am patient and kind and really do care about all of the students I work with. I feel that I can do a great job. I am available to tutor in the afternoon and early evenings. 9 Subjects: including trigonometry, geometry, algebra 1, algebra 2

...I have worked with students from all backgrounds; I have also worked extensively with children with disabilities. I hold a BA in Anthropology from Florida Atlantic University and am currently a graduate student in the Anthropology Department. As a lifelong student and educator, excellent study skills are essential to my daily responsibilities. 55 Subjects: including trigonometry, reading, Spanish, writing

Related Neffs, PA Tutors: Accounting, ACT, Algebra, Algebra 2, Calculus, Geometry, Math, Prealgebra, Precalculus, SAT, SAT Math, Science, Statistics, Trigonometry
{"url":"http://www.purplemath.com/Neffs_PA_Trigonometry_tutors.php","timestamp":"2014-04-18T23:32:53Z","content_type":null,"content_length":"24171","record_id":"<urn:uuid:c6e137fc-13f4-40d0-8e03-9b01c156a5bc>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
Our Mathematical Universe

Max Tegmark has a new book out, entitled Our Mathematical Universe, which is getting a lot of attention. I've written a review of the book for the Wall Street Journal, which is now available (although now behind a paywall; if you're not a subscriber, you can try here). There's also an old blog posting here about the same ideas.

Tegmark's career is a rather unusual story, mixing reputable science with an increasingly strong taste for grandiose nonsense. In this book he indulges his inner crank, describing in detail an utterly empty vision of the "ultimate nature of reality." What's perhaps most remarkable about the book is the respectful reception it seems to be getting; see reviews here, here, here and here. The Financial Times review credits Tegmark as the "academic celebrity" behind the turn of physics to the multiverse:

As recently as the 1990s, most scientists regarded the idea of multiple universes as wild speculation too far out on the fringe to be worth serious discussion. Indeed, in 1998, Max Tegmark, then an up-and-coming young cosmologist at Princeton, received an email from a senior colleague warning him off multiverse research: "Your crackpot papers are not helping you," it said. Needless to say, Tegmark persisted in exploring the multiverse as a window on "the ultimate nature of reality", while making sure also to work on subjects in mainstream cosmology as camouflage for his real enthusiasm. Today multiple universes are scientifically respectable, thanks to the work of Tegmark as much as anyone. Now a physics professor at Massachusetts Institute of Technology, he presents his multiverse work to the public in Our Mathematical Universe.

The New Scientist is the comparative voice of reason, with the review there noting that "there does seem to be something a little questionable with this vast multiplication of multiverses".

The book explains Tegmark's categorization of multiverse scenarios in terms of "Level", with Level I just lots of unobservable extensions of what we see, with the same physics, an uncontroversial notion. Level III is the "many-worlds" interpretation of quantum mechanics, which again sticks to our known laws of physics. Level II is where conventional notions of science get left behind, with different physics in other unobservable parts of the universe. This is what has become quite popular the past dozen years, as an excuse for the failure of string theory unification, and it's what I rant about all too often here.

Tegmark's innovation is to postulate a new, even more extravagant, "Level IV" multiverse. With the string landscape, you explain any observed physical law as a random solution of the equations of M-theory (whatever they might be…). Tegmark's idea is to take the same non-explanation explanation, and apply it to explain the equations of M-theory. According to him, all mathematical structures exist, and the equations of M-theory or whatever else governs Level II are just some random mathematical structure, complicated enough to provide something for us to live in. Yes, this really is as spectacularly empty an idea as it seems. Tegmark likes to claim that it has the virtue of no free parameters.

In any multiverse-promoting book, one should look for the part where the author explains what their scenario implies about physics.
At Level II, Susskind's book The Cosmic Landscape could come up with only one bit of information in terms of predictions (the sign of the spatial curvature), and Steve Hsu soon argued that even that one bit isn't there. There's only a small part of Tegmark's book that deals with the testability issue, the end of Chapter 12. His summary of Chapter 12 claims that he has shown:

The Mathematical Universe Hypothesis is in principle testable and falsifiable.

His claim about falsifiability seems to be based on the last page of the chapter, about "The Mathematical Regularity Prediction", which is that:

physics research will uncover further mathematical regularities in nature.

This is a prediction not of the Level IV multiverse, but a "prediction" of the idea that our physical laws are based on mathematics. I suppose it's conceivable that the LHC will discover that at scales above 1 TeV, the only way to understand what we find is not through laws described by mathematics, but, say, by the emotional states of the experimenters. In any case, this isn't a prediction of Level IV.

On page 354 there is a paragraph explaining not a Level IV prediction, but the possibility of a Level IV prediction. The idea seems to be that if your Level II theory turns out to have the right properties, you might be able to claim that what you see is not just fine-tuned in the parameters of the Level II theory, but also fine-tuned in the space of all mathematical structures. I think an accurate way of characterizing this is that Tegmark is assuming something that has no reason to be true, then invoking something nonsensical (a measure on the space of all mathematical structures). He ends the argument and the paragraph, though, with:

In other words, while we currently lack direct observational support for the Level IV multiverse, it's possible that we may get some in the future.

This is pretty much absurd, but in any case, note the standard linguistic trick here: what we're missing is only "direct" observational support, implying that there's plenty of "indirect" observational support for the Level IV multiverse.

The interesting question is why anyone would possibly take this seriously. Tegmark first came up with this in 1997, putting on the arXiv this preprint. In this interview, Tegmark explains how three journals rejected the paper, but with John Wheeler's intervention he managed to get it published in a fourth (Annals of Physics, just before the period when it published the (in)famous Bogdanov paper). He also explains that he was careful to do this just after he got a new postdoc (at the IAS), figuring that by the time he had to apply for another job, it would not be in a prominent position on his CV.

One answer to the question is Tegmark's talent as an impresario of physics and devotion to making a splash. Before publishing his first paper, he changed his name from Shapiro to Tegmark (his mother's name), figuring that there were too many Shapiros in physics for him to get attention with that name, whereas "Tegmark" was much more unusual. In his book he describes his method for posting preprints on the arXiv before he has finished writing them, with the timing set to get pole position on the day's listing. Unfortunately there's very little in the book about his biggest success in this area, getting the Templeton Foundation to give him and Anthony Aguirre nearly $9 million for a "Foundational Questions Institute" (FQXi).
Having cash to distribute on this scale has something to do with why Tegmark's multiverse ideas have gotten so much attention, and why some physicists are respectfully reviewing the book.

A very odd aspect of this whole story is that while Tegmark's big claim is that Math=Physics, he seems to have little actual interest in mathematics and what it really is as an intellectual subject. There are no mathematicians among those thanked in the acknowledgements, and while "mathematical structures" are invoked in the book as the basis of everything, there's little to no discussion of the mathematical structures that modern mathematicians find interesting (although the idea of "symmetries" gets a mention). A figure on page 320 gives a graph of mathematical structures which a commenter on mathoverflow calls "truly bizarre" (see here). Perhaps the explanation of all this is somehow Freudian, since Tegmark's father is the mathematician Harold Shapiro.

The book ends with a plea for scientists to get organized to fight things like fringe religious groups concerned that questioning their pseudo-scientific claims would erode their power, and his proposal is that:

To teach people what a scientific concept is and how a scientific lifestyle will improve their lives, we need to go about it scientifically: we need new science-advocacy organizations that use all the same scientific marketing and fund-raising tools as the anti-scientific coalition employ. We'll need to use many of the tools that make scientists cringe, from ads and lobbying to focus groups that identify the most effective sound bites.

There's an obvious problem here, since Tegmark's idea of "what a scientific concept is" appears to be rather different than the one I think most scientists have, but he's going to be the one leading the media campaign. As for the "scientific lifestyle", this may be unfair, but while I was reading this section of the book my twitter feed was full of pictures from an FQXi-sponsored conference discussing Boltzmann brains and the like on a private resort beach on an island off Puerto Rico. Is that the "scientific lifestyle" Tegmark is referring to? Who really is the fringe group making pseudo-scientific claims here?

Multiverse mania goes way back, with Barrow and Tipler writing The Anthropic Cosmological Principle nearly 30 years ago. The string theory landscape has led to an explosion of promotional multiverse books over the past decade, for instance:

• Parallel Worlds, Kaku, 2004
• The Cosmic Landscape, Susskind, 2005
• Many Worlds in One, Vilenkin, 2006
• The Goldilocks Enigma, Davies, 2006
• In Search of the Multiverse, Gribbin, 2009
• From Eternity to Here, Carroll, 2010
• The Grand Design, Hawking, 2010
• The Hidden Reality, Greene, 2011
• Edge of the Universe, Halpern, 2012

Watching these come out, I've always wondered: where do they go from here? Tegmark is one sort of answer to that. Later this month, Columbia University Press will publish Worlds Without End: The Many Lives of the Multiverse, which at least is written by someone with the proper training for this (a theologian, Mary-Jane Rubenstein). I'm still, though, left without an answer to the question of why the scientific community tolerates if not encourages all this. Why does Nature review this kind of thing favorably? Why does this book come with a blurb from Edward Witten? I'm mystified. One ray of hope is philosopher Massimo Pigliucci, whose blog entry about this is Mathematical Universe? I Ain't Convinced.
For more from Tegmark, see this excerpt at Scientific American, an excerpt at Discover, and this video, this article and interview at Nautilus. There's also this at Huffington Post, and a Facebook post.

After the Level IV multiverse, it's hard to see where Tegmark can go next. Maybe the answer is his very new Consciousness as a State of Matter, discussed here. Taking a quick look at it, the math looks quite straightforward, his claims that it has something to do with consciousness much less so. Based on my time spent with "Our Mathematical Universe", I'll leave this to others to look into…

Update: Scott Aaronson has a short comment here.

125 Responses to Our Mathematical Universe

1. In this context it might be a good idea to read (or re-read) Peter Medawar's review of The Phenomenon of Man.

2. Dear Max, it is your behavior that is having the chilling effect, not only on this blog but throughout the entire physics community. It is quite remarkable that while you are the one with millions upon millions of dollars at your disposal, a professional publicity machine, MIT's PR department, and legions of Ph.D.-free pop-sci fanboys, you accuse Peter of being a "creationist" bully for merely reading your book and reflecting on its empty content as a lone individual. To pile irony upon irony, it is also remarkable that while your book does little more than promote a faith-based initiative, which is not testable science, you then have the gall to accuse scientists and objective writers of behaving like religious fanatics. Max Tegmark's biggest defender Orin writes, "Peter, I'm afraid I haven't had the time to read the book yet, but I'm familiar with the material I expect to be in it." Max, you do realize that your career is built more on laymen who have not read your book than on scientists who have?

3. Well Peter, this "Ph.D.-free pop-sci-fanboy" has given your blog a fair shake. Specimens in irony like the above only do you a disservice by lending credence to Max's comparison. Enjoy your echo chamber.

4. Orin, I hope once you do get around to reading the book under discussion, if you find a non-vacuous argument in it for the MUH or the Level IV multiverse, you return to let me know.

5. Max (or anyone else expert on inflation): Is there a lower limit on r from inflation? It seems that there are so many models of inflation (for example this ~400-page paper on all models of inflation, http://arxiv.org/abs/1303.3787) that given any observations (or non-observations) one can always construct a model of inflation to fit.

6. Regardless of whether the MUH is true or not, I was wondering about your view regarding the idea of platonism/mathematical realism in general, Pete. I know that a vast majority of mathematicians and a good deal of physicists would adopt the view that mathematical structures and truths are independent of human beings and that mathematical propositions are objectively true. I must admit I think this view is very plausible, considering the history of mathematics and the sciences (especially fundamental physics) and the influence both disciplines have had on each other.
This is not to try and bring in mysticism at all, as I am a naturalist and a physicalist, though the second part gets increasingly harder to penetrate as you decompose "matter" into a collection of atoms that are 99.999% empty space, with particles in a nucleus that decompose into further elementary particles represented as mathematical points or "vibrating strands of energy" in string theory, whatever the hell that would even mean physically. I mean, when modern physics points to the fact that solid, physical matter is in fact a vast amount of empty space linked together by interactions of profoundly small particles that seem to have an ephemeral existence all their own, is Platonism or realism about abstract structures and mathematical relations underlying the physical world really so outlandish? I agree it could probably never be tested, making it more philosophical than empirical, but do you think it a reasonable view? By the way, Frenkel's book, Love and Math, is brilliant so far, and it's clear from the reading that he, along with a long list of other mathematicians, wholeheartedly embraces the Platonic view without the slightest bit of crackpot "mysticism."

7. Pete, I myself favor some form of "Platonism" or realism about mathematics (see a following posting, which has a link to something about the "Putnam-Quine indispensability thesis"; I think Quine had some of the most to-the-point things to say about how to think about what is "real"). The problem is the claim that this tells you anything you didn't already know about physics, in particular Tegmark's claim that it implies that we should think of ourselves as living in a Level IV multiverse, with no particular physical theory=mathematical structure more fundamental than any other. It is this sort of thing that I claim is empty, implying nothing about physics. Its only use is the ideological one of promoting untestable claims about the "multiverse".

8. Dear Mr. Tegmark, Like many, I feel that dragging creationism into a scientific debate is both surprising and alarming, or, to use your word, "disturbing." I don't think there is any excuse for this, and, frankly, I think it sets a bad example for observers to see how a scientific debate should be handled. In addition, bringing in an irrelevant buzzword to the debate simply highlights a lack of a clear counterpoint to the argument.

9. Dear Max, Many thanks for replying and for your kind words. I was unaware of your version of the argument you make, but I did comment on Garriga and Vilenkin's version in a footnote in Time Reborn. To quote again from there, p. 284: "23. Jaume Garriga and Alex Vilenkin have pointed out, in 'Anthropic Prediction for Lambda and the Q Catastrophe,' arXiv:hep-th/0508005v1 (2005), that a particular combination of the two constants does better when applied to Weinberg's argument: It happens to be the cosmological constant divided by the fluctuation size cubed. But this leaves two issues: First, what sets the size of the fluctuations? Second, we already knew that the argument did all right when only the cosmological constant was considered. There are many combinations of the two constants that could be tried; the fact that one combination does better than the others is not surprising and, even if there is an argument for it, this does not constitute evidence for the hypothesis that our universe is one world of a vast multiverse." I would think that this would also apply to your W, which depends also on the dark matter density per photon.
Should we be surprised that there is a combination of powers of three constants that is extremized in nature? Given that there are many hypotheses and scenarios and many combinations of constants that could be tried, how strong a case does this make for the hypothesis that our universe is not unique? Also, Weinberg's paper did predate the measurement, but yours and Garriga and Vilenkin's did not. Even if we accept that your argument is a better version of Weinberg's, it was made after the observation and, in any case, you agree that Weinberg's logic was wrong. Raphael Sorkin also published a paper before the observation of dark energy, correctly predicting the magnitude that was observed. He did this on the basis of causal set theory, an approach to quantum gravity. No one disagrees with his logic or prediction. It seems that the case here is stronger than for Weinberg, as the argument did not have to be improved after the observation. So if you are being logical, and responding to the evidence, shouldn't you be writing a book promoting causal set theory rather than the multiverse? Instead, Sorkin's correct prediction is almost never mentioned, except by specialists in quantum gravity.

10. Thanks for the response, Pete. I'm pretty much in total agreement with you as far as that's concerned. And by the way, I'm hoping that the Frenkel/Holt debate you mentioned in that recent post continues as well. Always good getting some philosophy of mathematics out there in the open.

11. Does Tegmark talk about issues like inverted spectra and Mary in her black-and-white room? I know these are usually presented as problems for physicalism, but it seems to me they also present a problem for the idea that "everything is maths".

12. Lee or anyone else, could you point me to Sorkin's paper where he predicted the value of the cosmological constant using causal set theory? Many thanks.

13. Shantanu, R.D. Sorkin, "Spacetime and Causal Sets", in J.C. D'Olivo, E. Nahmad-Achar, M. Rosenbaum, M.P. Ryan, L.F. Urrutia and F. Zertuche (eds.), Relativity and Gravitation: Classical and Quantum (Proceedings of the SILARG VII Conference, held Cocoyoc, Mexico, December 1990), pages 150-173 (World Scientific, Singapore, 1991); R.D. Sorkin, "Forks in the Road, on the Way to Quantum Gravity", talk given at the conference "Directions in General Relativity", held at College Park, Maryland, May 1993, Int. J. Th. Phys. 36: 2759-2781 (1997), eprint: gr-qc/9706002; Maqbool Ahmed, Scott Dodelson, Patrick B. Greene, Rafael Sorkin, "Everpresent Lambda", astro-ph/0209274. Sorkin was mentioning this in talks over many years.

14. Tegmark should address Peter's most important point, which is that he doesn't really have any inner understanding of mathematics as a discipline in itself.

15. Hi Peter, I read the book (I thought it was delightful), and I have to say I'm left even more perplexed than before at your characterization of the MUH as "empty". Max says (Chapter 12): "If the theory that the Level IV multiverse is correct, then since it has no free parameters whatsoever, all properties of all parallel universes (including the subjective perceptions of self-aware substructures in them) could in principle be derived by an infinitely intelligent mathematician." Of course this is fleshed out in the book, but I think it is fairly self-evident.
Examples of the kind of strategy for falsifiability begin as early as Chapter 6, where he writes: "If we're living in a random habitable universe, the numbers should still look random, but with a probability distribution that favors habitability. By combining predictions about how the numbers vary across the multiverse with the relevant physics of galaxy formation and so on, we can make statistical predictions for what we should actually observe." So I continue to not understand why you insist on using the word "empty." I think Max makes a very persuasive case in the book for the potential of the MUH to be predictive, and even falsifiable (he discusses this extensively in Chapter 11, giving examples of falsification using naive measures, ultimately concluding that the major hurdle to be overcome is a solution to the measure problem).

16. Orin, Sure, all properties of all universes could be "derived", just look at "all properties of all mathematical structures". Fine, but this predicts nothing at all about any particular property of our particular universe. The part you quote from the book is referring to the sort of "Level II" multiverse of the anthropic string theory landscape: there's some specific fundamental physical law that determines the probability distribution he invokes. There are plenty of problems with this, but it's not what I'm referring to as empty, the Level IV business of the title of his book. I've gone over here carefully what is in his book that claims to be a prediction of Level IV, and I've explicitly challenged him here and at the Scientific American site where he wrote a response to this, asking him for a falsifiable prediction of "Level IV". The best he could come up with is that "if we find a physical phenomenon not describable mathematically", that would do it. See here and at the Scientific American site for my comments about why this is empty. If there are all sorts of great examples of how to falsify Level IV in the book that I missed, how come Tegmark is not invoking them when asked about this? You could argue that all you have to do is "put a measure" on the space of mathematical structures, but this is again an empty statement until you give some indication of what such a measure should look like. The only thing he discusses in the book is some sort of counting measure, but obviously you get more and more examples of mathematical structures as you increase complexity, so this sort of thing, besides the "measure problem" due to infinity, has the obvious problem that it predicts your mathematical structure will be as complicated as possible, whereas we know that fundamental physical laws are based on remarkably simple mathematical structures. On a related note, Tegmark likes to claim that he just has a "measure problem", not knowing how to relatively count things, when he has something much worse: he doesn't know how to characterize the space he is trying to put a measure on (the people trying to use string theory at Level II have a much simpler version of this problem: not only do they not know how to compute relative weights of string vacua, they don't know what the set of string vacua is).

17. Peter, you seem to be setting the mark for what is not empty just high enough to fit your definition. I think Max has outlined the beginnings of a theory that is clearly not empty in principle. You point out that it is currently empty in practice.
That is fine (if you make it clear what you mean), but obviously theories have periods of gestation before predictions are made, and I don't think it is fair to be quite so dismissive of the fact that they are not yet mature. There is a chicken/egg problem here; you won't be satisfied until the theory is predictive in practice, yet the theory won't have an honest shot of being predictive in practice until more people work on it. On your point about the measure problem (second-to-last paragraph) and the apparent simplicity of physical laws, I think your argument is not as strong as you think it is. For one, your measure must be "anthropic aware", a feature Max harps on quite a bit (although I don't think he uses that wording), and it is not at all clear to me that the "pruning" afforded by such a measure would not alone be a sufficient counter-argument. But additionally (and this is a point Max does not make in the book, but nonetheless his ideas should be discussed independently of this particular book written for lay audiences), a strong case can be made that as you go up the ladder of increasing complexity, there are correspondingly increasing numbers of approximate equivalences between structures of greater and lesser complexity. There are also fairly strong counter-arguments in algorithmic complexity theory (I linked to one example earlier in this thread) that clearly indicate to me that your intuition is off. So no, I don't think the measure situation is nearly as dire as you think it is.

18. Orin, I'm not talking about "in practice", but "in principle". All the arguments I've given about the emptiness of "Level IV" are arguments of principle. To show it is non-empty you need to produce a non-empty prediction, even if it is only "in principle". Tegmark hasn't been able to do this (and you don't seem to be able to either). And you can't hide behind every crackpot's favorite excuse: "OK, I haven't been able to get anything out of my wonderful theory of everything, but if only lots of people would work on it, maybe they would find something". It's not possible that "my intuition is off" about "all mathematical structures", because I have no intuition at all about what that even means. As far as I can tell it's a concept every bit as empty as saying "all sets" or some such. There's no there there. The counting measure I was quoting was Tegmark's intuition, not mine. Again, I don't think his problem is a measure problem; his problem is that he doesn't know anything non-trivial about the space he wants to put a measure on. From your comments, you seem to have your own Tegmarkian theory, since you're making claims not in his book. If you've written it down anywhere, and you have non-trivial implications of such a theory that he doesn't have, let us know what they are. So far, neither you nor he is able to point to anything in his book that gives, in principle, an implication of this MUH that is not on its face empty.

19. Peter, the quote I provided from Max's book is exactly such an example, even including the words "in principle." Apparently you need a specific "in principle" prediction (this seems like an oxymoron; I'm not positive what you mean) rather than a general one. Fine, but the fact that the theory can in principle make specific predictions is the reason I am pestering you about your use of the word "empty." The theory is not empty. It clearly can in principle make specific predictions, and the practical reasons why it currently cannot are made as plain as day in Max's book.

20.
Orin, This has now become a waste of time; you're just ignoring whatever I write here. I'll just cut and paste the relevant part: "Sure, all properties of all universes could be 'derived', just look at 'all properties of all mathematical structures'. Fine, but this predicts nothing at all about any particular property of our particular universe."

21. Peter, you are playing dumb. Obviously that is not the only implication of what Max wrote when he discusses the prediction of "the subjective perceptions of self-aware substructures" in a multiverse in which he has already established that in principle there are "probability distributions that favor habitability," which one can use to falsify the theory. He is not just saying one can in principle derive the properties of all mathematical structures. He is clearly saying that if a reasonable measure is found then one can in principle derive the probability for us to live in a universe with an effective Standard Model Lagrangian and General Relativity with a finely tuned cosmological constant, etc., and that if this probability is found to be small then the theory is falsified.

22. Orin, Now we're back to "all I have to do is find the right measure on the space of all mathematical structures", and the problem that "all mathematical structures" is an empty concept. From nothing you get nothing. Tegmark gets nowhere with this in his book because it's inherently empty and can't go anywhere. If you can point to anything other than absurd wishful thinking about explaining everything from nothing, please do so, but so far you're just wasting your and my time.

23. Suppose some slightly more advanced alien civilization living in Omega Centauri has already solved, for instance, their metabolic syndrome and the counter-effects of ageing, integrated successfully with their quantum computing power, and essentially achieved a form of immortality. There's probably a Hollywood movie playing this out, or at least there should be. What gives when an alien from this civilization visits you, Peter (after all, they originate in our own galaxy), and tells you the Everettian postulates about decoherence, many worlds, and ultimately the MUH are all true? As a matter of fact, the alien even takes the time to demonstrate to you on his quantum hand-held device how he has uploaded his "essence" innumerable times into his planet's hosted quantum supercomputer, and in fact is himself living out "many lives" … any and all of which can and do seem no more or less "real" to him. Theoretical emptiness, perhaps. Impossible or improbable?

24. billandturk, If a space alien or a guy with gold tablets appears magically in front of me and explains to me how string theory really is the TOE, the multiverse works, the space of all mathematical structures carries a natural measure that explains everything, etc., I am happily going to agree that yes, these ideas are testable science and have been tested. However, if this possibility is the argument from proponents about why their ideas really are non-empty and scientific, I think they have a problem….

25. Thank you, Peter, for taking the time to reply. I am young, dumb, and a novice with this subject matter, and even more of a "rookie" with the math underpinning it all; however, your response to my post actually helps me better contextualize and appreciate your points.
I like the saying: There are known knowns – things we know we know; there are known unknowns – things we know we don't know; and there are unknown unknowns – things we don't know we don't know. After having just finished Tegmark's book, I was inclined to believe his ideas fell into that middle category. I see where you're coming from, and it also makes sense to me now that perhaps (not trying to suggest my words are coming out of your mouth, and hopefully not to be taken offensively) these ideas are more the latter, "unknown unknowns", and the fundamental flaw with this category is presuming you can articulate "science" attributes upon it? Oh well, genuinely thanks again!

This entry was posted in Book Reviews, Multiverse Mania.
{"url":"http://www.math.columbia.edu/~woit/wordpress/?p=6551","timestamp":"2014-04-18T10:37:06Z","content_type":null,"content_length":"85025","record_id":"<urn:uuid:83c9a71e-90f2-4699-a8c4-c3f05544d458>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Data manipulation with Mata (was: [Mata] passing a function to mata (new question))

From: James Muller <james.muller@internode.on.net>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Data manipulation with Mata (was: [Mata] passing a function to mata (new question))
Date: Sat, 27 Aug 2005 10:47:07 +1000

Can we embed blocks of Stata code in a Mata function definition? Haven't looked, but this would be a way. Might be less visually awkward to do this rather than writing little sub-programs to do the external tasks elsewhere. Reminds me of the Linux goal of getting WINE to run Cygwin, and then having Cygwin run WINE again. Completely pointless, but for some reason a sought-after accomplishment.

Ben Jann wrote:

Stata has very powerful data manipulation functionality, and I think it does not make much sense to mimic Stata commands such as, e.g., -generate- in Mata, because Mata will almost surely be slower. My recommendation is to use Mata only for data manipulation tasks that are hard to implement in terms of standard Stata commands or that are really slow in Stata. For example, manipulation tasks that involve temporarily reshaping the data can be done more efficiently in Mata. For an example see the code of -supclust- (available from SSC). Type

. ssc install supclust
. viewsource supclust.ado

Maybe StataCorp has a different opinion on this.

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Fred Wolfe
Sent: Friday, August 26, 2005 4:21 PM
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: [Mata] passing a function to mata (new question)

In reference to your comments below, I have found Mata a little daunting with respect to how it might be used for data management and variable manipulation - non-statistical uses. Is there any thought to addressing these uses simply somewhere, or even having a NetCourse? Your slides from the users meeting were helpful, but more of an overview than an actual tutorial for the uses I describe. Is it worth it for a data manipulator like me to use Mata?
{"url":"http://www.stata.com/statalist/archive/2005-08/msg00881.html","timestamp":"2014-04-18T15:50:51Z","content_type":null,"content_length":"7730","record_id":"<urn:uuid:fbfae26a-04ca-4dcd-9a60-b7e7d7e4e6e8>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
Clouds and OLR

Estimating solar SW radiation from OLR or cloudiness

One of the things that the detailed SST balance at 170°W (Fig 5 of the figures at the bottom of this page) suggests is that a good estimate of the radiation is important. During the onset of the 97-98 event, intraseasonal cloudiness apparently was a significant contribution to SST change. This set of figures compares two ways of estimating the net solar radiation at 0°, 170°W.

First, Reed (1977) found the empirical formula Q = Q0(1 - 0.62C + 0.0019α), where C is total cloud cover and α is the noon solar angle. (For C less than 0.3, Reed finds no reduction in insolation, i.e., Q = Q0.)

Second is a regression directly between OLR and SW: Q = 0.9285(OLR) - 0.6377. (This one was used in Fig 5 of the bottom set.)

Some preliminaries:
1. Time series of OLR and ISCCP cloud fraction at 0°, 170°W. You want monthly-average OLR? We got monthly-average OLR.
2. Time series of SW at 0°, 170°W: clear-sky and Reed formula (ISCCP).

Direct comparisons between the Reed formula and the OLR regression, for monthly clouds and OLR during 1983-91, when ISCCP clouds were available: Overall, the agreement is good at the lower values (high cloudiness), but the Reed formula suggests that at low cloudiness the insolation is much greater than the OLR regression would indicate. The Reed formula gives a range of SW values about twice as large as the OLR regression.
3. Net SW radiation at the equator: Reed (ISCCP) and OLR regression. Min/max during Jul 83-Jun 91.

Try a regression between OLR and ISCCP clouds. Figure 1 suggests that a regression could be found between OLR and cloud fraction that might improve the estimate of solar radiation. The regression from OLR to cloudiness could then be used in the Reed formula. This intuitively makes sense, since OLR is really closest to a measure of cloudiness (at least in the west, where much of the cloud cover is associated with deep convection that is well represented by OLR). Another advantage of this method (rather than regressing directly from OLR to radiation) is that Q0 in the Reed formula contains much of the annual-cycle variability. Finally, variations of OLR above about 250 W/m2 or so are not meaningful in the context of radiation received at the sea surface. It would be desirable to be able to ignore those variations. Regressing directly OLR -> radiation means that these meaningless signals have as much significance as the low values. However, since the Reed formula ignores cloud variations below 0.3 sky cover, high values of OLR (which regress to low cloud cover values) are less of a factor.

Clouds are available between July 1983 and June 1991. Use monthly-average OLR. Examples at longitudes across the Pacific (scatter diagrams and regression lines):
4. 110°W

Some statistics: The correlation is good (>0.8) west of about 140°W, and excellent (>0.9) west of 170°W. East of 140°W the relation is unusable. In the east a lot of cloudiness is stratus, which does not produce an OLR signature. Note the increase in cloud variance, particularly east of 120°W, that is not seen in OLR.
5. Basic stats of OLR. Cloud time series estimated by regression:
6. 110°W. Regression over the entire tropical Pacific.

See these results put to good use on the Idealized MJO page.

Some things related to the heat balance at 170°W: A few figures I saved while checking ways of doing the heat balance. Only figure 5 is probably of interest to anyone but me.
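A small computational sketch of the two estimates quoted above (the example input values here are illustrative, not taken from the figures):

def reed_sw(q0, cloud_frac, noon_angle_deg):
    # Reed (1977): Q = Q0 (1 - 0.62 C + 0.0019 alpha); no reduction for C < 0.3.
    if cloud_frac < 0.3:
        return q0
    return q0 * (1.0 - 0.62 * cloud_frac + 0.0019 * noon_angle_deg)

def olr_regression_sw(olr):
    # Direct OLR regression quoted above: Q = 0.9285 * OLR - 0.6377 (W/m^2).
    return 0.9285 * olr - 0.6377

# Illustrative values: clear-sky insolation 280 W/m^2, 60% cloud cover,
# noon solar angle 80 degrees, versus an observed OLR of 230 W/m^2.
print(reed_sw(280.0, 0.6, 80.0))     # ~218 W/m^2 from the Reed formula
print(olr_regression_sw(230.0))      # ~213 W/m^2 from the OLR regression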
{"url":"http://faculty.washington.edu/kessler/ENSO/clouds-n-olr.html","timestamp":"2014-04-20T13:24:41Z","content_type":null,"content_length":"6715","record_id":"<urn:uuid:632341f1-49e4-41af-bb72-30aeadbb6a08>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Congo, PA Math Tutor

Find a Congo, PA Math Tutor

...My wife and I share our home with a former rescue dog, a Lhasa Apso named Lucky. We play with grandchildren and travel to mostly warm places. I've successfully completed undergraduate coursework in algebra and calculus 1, 2 and 3 at the University of Pittsburgh and recently at Montgomery County Community College. My grades were Bs. 16 Subjects: including geometry, ASVAB, algebra 1, algebra 2

...I also volunteer for a camp for children with cancer and oversee our camper and volunteer staff. I use multi-sensory techniques for teaching reading, writing, and math, and also use student interests to begin work in an area of difficulty. I am certified in PA to teach Special Education grades PK-12. 20 Subjects: including SAT math, dyslexia, geometry, algebra 1

...I am certified via WyzAnt in SAT math. I also have my middle-level math certification through the state. I have received my elementary teaching certification in both NJ and PA. 24 Subjects: including algebra 1, prealgebra, ACT Math, geometry

...In high school, I was a member of the gifted program and took many advanced classes, including AP statistics and AP microeconomics. I scored best on the writing section of my SATs, scoring 680/800. I love the reading, writing and grammar components. 12 Subjects: including algebra 1, prealgebra, reading, German

...I feel that getting experience teaching students one on one is the best way for me to have an immediate impact. This will especially help to personalize the teaching experience and is an effective way to create a trusting relationship. I am especially personable and I know I have the ability to... 16 Subjects: including precalculus, algebra 1, algebra 2, calculus

Related Congo, PA Tutors: Accounting, ACT, Algebra, Algebra 2, Calculus, Geometry, Math, Prealgebra, Precalculus, SAT, SAT Math, Science, Statistics, Trigonometry

Nearby Cities With Math Tutors: Athol, Fagleysville, Hill Church, Landis Store, Lobachsville, Lower Longswamp, Manatawny, Niantic, Pikeville, Sassamansville, Schultzville, Shanesville, West Monocacy, Woodchoppertown, Worman (all PA)
{"url":"http://www.purplemath.com/Congo_PA_Math_tutors.php","timestamp":"2014-04-19T15:04:39Z","content_type":null,"content_length":"23760","record_id":"<urn:uuid:6ddacf54-ec98-429e-945b-92fb78e82a7c>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
For the function f(x) = (x^2 - 5)/(x + 3), use the second derivative test (if possible) to determine if each critical point is a minimum, maximum, or neither. If the second derivative test can't be used, say so.

We use the quotient rule to find the derivative:

f'(x) = [2x(x + 3) - (x^2 - 5)] / (x + 3)^2 = (x^2 + 6x + 5) / (x + 3)^2

f'(x) is undefined only at x = -3, in which case f is also undefined, so this is not a critical point. f'(x) is zero when x^2 + 6x + 5 = 0. Happily, this quadratic factors as x^2 + 6x + 5 = (x + 5)(x + 1), so f'(x) is zero at x = -5 and x = -1. Here's the number line so far, with marks at x = -5, -3, and -1 (figure omitted).

To use the second derivative test, we need to find the second derivative:

f''(x) = 8 / (x + 3)^3

Now we evaluate the second derivative at each critical point. At x = -5 we find

f''(-5) = 8 / (-2)^3 = -1 < 0,

which means f is concave down and so has a maximum at x = -5. At x = -1 we find

f''(-1) = 8 / 2^3 = 1 > 0,

which means f is concave up and so has a minimum at x = -1. Thankfully, these are the same answers we found using the first derivative test.
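A quick symbolic check of these values (a sketch, using the function as reconstructed above):

import sympy as sp

x = sp.symbols('x')
f = (x**2 - 5) / (x + 3)

f1 = sp.simplify(sp.diff(f, x))      # (x**2 + 6*x + 5)/(x + 3)**2
f2 = sp.simplify(sp.diff(f, x, 2))   # 8/(x + 3)**3

for c in sp.solve(f1, x):            # critical points: x = -5 and x = -1
    val = f2.subs(x, c)
    kind = "maximum" if val < 0 else "minimum"
    print(f"x = {c}: f''(x) = {val}, so a local {kind}")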
{"url":"http://www.shmoop.com/second-derivatives/second-derivative-test-exercises.html","timestamp":"2014-04-17T10:19:24Z","content_type":null,"content_length":"28712","record_id":"<urn:uuid:2ecffa15-b226-4b2e-b294-df6597e17093>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
celestial mechanics (physics) :: Chaotic orbits

The French astronomer Michel Hénon and the American astronomer Carl Heiles discovered that when a system exhibiting periodic motion, such as a pendulum, is perturbed by an external force that is also periodic, some initial conditions lead to motions where the state of the system becomes essentially unpredictable (within some range of system states) at some time in the future, whereas initial conditions within some other set produce quasiperiodic or predictable behaviour. The unpredictable behaviour is called chaotic, and initial conditions that produce it are said to lie in a chaotic zone. If the chaotic zone is bounded, in the sense that only limited ranges of initial values of the variables describing the motion lead to chaotic behaviour, the uncertainty in the state of the system in the future is limited by the extent of the chaotic zone; that is, values of the variables in the distant future are completely uncertain only within those ranges of values within the chaotic zone. This complete uncertainty within the zone means the system will eventually come arbitrarily close to any set of values of the variables within the zone if given sufficient time.

Chaotic orbits were first realized in the asteroid belt. A periodic term in the expansion of the disturbing function for a typical asteroid orbit becomes more important in influencing the motion of the asteroid if the frequency with which it changes sign is very small and its coefficient is relatively large. For asteroids orbiting near a mean motion commensurability with Jupiter, there are generally several terms in the disturbing function with large coefficients and small frequencies that are close but not identical. These "resonant" terms often dominate the perturbations of the asteroid motion so much that all the higher-frequency terms can be neglected in determining a first approximation to the perturbed motion. This neglect is equivalent to averaging the higher-frequency terms to zero; the low-frequency terms change only slightly during the averaging. If one of the frequencies vanishes on the average, the periodic term becomes nearly constant, or secular, and the asteroid is locked into an exact orbital resonance near the particular mean motion commensurability. The mean motions are not exactly commensurate in such a resonance, however, since the motion of the asteroid orbital node or perihelion is always involved (except for the 1:1 Trojan resonances). For example, for the 3:1 commensurability, the angle θ = λ_A - 3λ_J + 2ϖ_A is the argument of one of the important periodic terms whose variation can vanish (zero frequency). Here λ = Ω + ω + l is the mean longitude, the subscripts A and J refer to the asteroid and Jupiter, respectively, and ϖ = Ω + ω is the longitude of perihelion (see Figure 2). Within resonance, the angle θ librates, or oscillates, around a constant value as would a pendulum around its equilibrium position at the bottom of its swing. The larger the amplitude of the equivalent pendulum, the larger its velocity at the bottom of its swing. If the velocity of the pendulum at the bottom of its swing, or, equivalently, the maximum rate of change of the angle θ, is sufficiently high, the pendulum will swing over the top of its support and be in a state of rotation instead of libration.
The maximum value of the rate of change of θ for which θ remains an angle of libration (periodically reversing its variation) instead of one of rotation (increasing or decreasing monotonically) is defined as the half-width of the resonance. Another term with nearly zero frequency when the asteroid is near the 3:1 commensurability has the argument θ′ = λ[A] - λ[J] + 2ϖ[J]. The substitution of the longitude of Jupiter’s perihelion for that of the asteroid means that the rates of change of θ and θ′ will be slightly different. As the resonances are not separated much in frequency, there may exist values of the mean motion of the asteroid where both θ and θ′ would be angles of libration if either resonance existed in the absence of the other. The resonances are said to overlap in this case, and the attempt by the system to librate simultaneously about both resonances for some initial conditions leads to chaotic orbital behaviour. The important characteristic of the chaotic zone for asteroid motion near a mean motion commensurability with Jupiter is that it includes a region where the asteroid’s orbital eccentricity is large. During the variation of the elements over the entire chaotic zone as time increases, large eccentricities must occasionally be reached. For asteroids near the 3:1 commensurability with Jupiter, the orbit then crosses that of Mars, whose gravitational interaction in a close encounter can remove the asteroid from the 3:1 zone. By numerically integrating many orbits whose initial conditions spanned the 3:1 Kirkwood gap region in the asteroid belt, Jack Wisdom, an American dynamicist who developed a powerful means of analyzing chaotic motions, found that the chaotic zone around this gap precisely matched the physical extent of the gap. There are no observable asteroids with orbits within the chaotic zone, but there are many just outside extremes of the zone. Other Kirkwood gaps can be similarly accounted for. The realization that orbits governed by Newton’s laws of motion and gravitation could have chaotic properties and that such properties could solve a long-standing problem in the celestial mechanics of the solar system is a major breakthrough in the subject. Do you know anything more about this topic that you’d like to share?
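The libration/rotation dichotomy in this pendulum analogy is easy to reproduce numerically. The following sketch (with arbitrary pendulum parameters, not real asteroid data) integrates the pendulum equation θ″ = −ω₀² sin θ and classifies the motion by whether θ ever crosses ±π; the critical initial rate of change, 2ω₀ for the simple pendulum, plays the role of the resonance half-width.

```python
import math

def classify(theta0, thetadot0, omega0=1.0, dt=1e-3, t_max=100.0):
    """Integrate theta'' = -omega0^2 sin(theta) with symplectic Euler and
    report 'libration' if theta stays within (-pi, pi), else 'rotation'
    (the pendulum swings over the top of its support)."""
    theta, thetadot = theta0, thetadot0
    for _ in range(int(t_max / dt)):
        thetadot += -omega0**2 * math.sin(theta) * dt
        theta += thetadot * dt
        if abs(theta) >= math.pi:
            return "rotation"
    return "libration"

# An energy argument puts the separatrix at |thetadot| = 2*omega0: starting
# from the bottom of the swing, rates below it librate, rates above it rotate.
for rate in (1.9, 2.1):
    print(rate, classify(theta0=0.0, thetadot0=rate))
```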
{"url":"http://www.britannica.com/EBchecked/topic/101285/celestial-mechanics/77435/Chaotic-orbits","timestamp":"2014-04-17T19:53:48Z","content_type":null,"content_length":"91696","record_id":"<urn:uuid:698755ea-0f2d-4ede-8500-ba61a91a129a>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
CCSU Theses & Dissertations

Mathematics -- Study and teaching (Secondary); Mathematics -- Study and teaching (Higher)
The prediction of college mathematics grades has been widely researched. Results from other studies have shown that the prediction of college mathematics grades can vary depending on the set of predictor variables considered. For instance, some...

Hospitals -- Emergency services -- Utilization; Children -- Health and hygiene
Emergency department care and primary care are ideally distinct parts of the health care delivery system. In theory, each answers a specific and different health care need. However, in practice this distinction blurs. Many visits to hospital...

Data mining, at times, involves applying cluster analysis to large databases containing multivariate normal parameters. Most finite mixture clustering methods use the expectation-maximization (EM) algorithm to find solutions to the unknown parameters,...
{"url":"http://content.library.ccsu.edu/cdm/search/collection/ccsutheses/searchterm/subsets","timestamp":"2014-04-17T17:06:18Z","content_type":null,"content_length":"89422","record_id":"<urn:uuid:fff30b3b-d599-461b-b6ae-6c416e156f80>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
Analogies for Group Ring Field

As we know, for the average undergrad, attempting to grasp (and understand) these abstract mathematical concepts can be challenging, to say the least. I was (and still am, in some sense :P) in that boat. Does anyone have any analogies or creative ways of explaining these and getting their meaning across while retaining some type of concrete idea?
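One concrete way in, for what it's worth: play with small finite examples on a computer. The sketch below (my own illustration, not from this thread; Z mod n is chosen just because it is tiny) checks the group axioms by brute force and shows the group/ring/field progression in one picture: (Z_n, +) is always a group, (Z_n, +, ×) is always a ring, and the nonzero elements form a multiplicative group, making Z_n a field, exactly when n is prime.

```python
from itertools import product

def is_group(elems, op, identity):
    """Brute-force check of the group axioms on a finite set."""
    closed = all(op(a, b) in elems for a, b in product(elems, repeat=2))
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a, b, c in product(elems, repeat=3))
    ident = all(op(identity, a) == a == op(a, identity) for a in elems)
    inverses = all(any(op(a, b) == identity for b in elems) for a in elems)
    return closed and assoc and ident and inverses

for n in (5, 6):
    Zn = set(range(n))
    add = lambda a, b: (a + b) % n
    mul = lambda a, b: (a * b) % n
    # Ring structure: additive group plus associative multiplication that
    # distributes over addition (distributivity holds in every Z_n).
    additive = is_group(Zn, add, 0)
    field = is_group(Zn - {0}, mul, 1)   # needs n prime: e.g. 2*3 = 0 in Z_6
    print(f"Z_{n}: additive group={additive}, nonzero elements a group={field}")
```

Running it prints that Z_5 passes both tests while Z_6 fails the second one, which is exactly the field/non-field distinction.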
{"url":"http://www.physicsforums.com/showthread.php?p=3426959","timestamp":"2014-04-20T03:17:45Z","content_type":null,"content_length":"22453","record_id":"<urn:uuid:fff97045-aca3-4a91-9b19-9317bb03de69>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
From Haskell to hardware via cartesian closed categories

Since fall of last year, I’ve been working at Tabula, a Silicon Valley start-up developing an innovative programmable hardware architecture called “Spacetime”, somewhat similar to an FPGA, but much more flexible and efficient. I met the founder, Steve Teig, at a Bay Area Haskell Hackathon in February of 2011. He described his Spacetime architecture, which is based on the geometry of the same name, developed by Hermann Minkowski to elegantly capture Einstein’s theory of special relativity. Within the first 30 seconds or so of hearing what Steve was up to, I knew I wanted to help.

The vision Steve shared with me included not only a better alternative for hardware designers (programmed in hardware languages like Verilog and VHDL), but also a platform for massively parallel execution of software written in a purely functional language. Lately, I’ve been working mainly on this latter aspect, and specifically on the problem of how to compile Haskell. Our plan is to develop the Haskell compiler openly and encourage collaboration. If anything you see in this blog series interests you, and especially if you have advice or you’d like to collaborate on the project, please let me know. In my next series of blog posts, I’ll describe some of the technical ideas I’ve been working with for compiling Haskell for massively parallel execution. For now, I want to introduce a central idea I’m using to approach the problem.

Lambda calculus and cartesian closed categories

I’m used to thinking of the typed lambda calculi as languages for describing functions and other mathematical values. For instance, if the type of an expression e is Bool → Bool, then the meaning of e is a function from Booleans to Booleans. (In non-strict pure languages like Haskell, both Boolean types include ⊥. In hypothetically pure strict languages, the range is extended to include ⊥, but the domain isn’t.) However, there are other ways to interpret typed lambda-calculi. You may have heard of “cartesian closed categories” (CCCs). CCC is an abstraction having a small vocabulary with associated laws:

• The “category” part means we have a notion of “morphisms” (or “arrows”), each having a domain and codomain “object”. There is an identity morphism for each object and an associative composition operator. If this description of morphisms and objects sounds like functions and types (or sets), it’s because functions and types are one example, with id and (∘).
• The “cartesian” part means that we have products, with projection functions and an operator to combine two functions into a pair-producing function. For Haskell functions, these operations are fst and snd, together with (&&&) from Control.Arrow.
• The “closed” part means that we have a way to represent morphisms via objects, referred to as “exponentials”. The corresponding operations are curry, uncurry, and apply. Since Haskell is a higher-order language, these exponential objects are simply (first class) functions.

A wonderful thing about the CCC interface is that it suffices to translate any lambda expression, as discovered by Joachim Lambek. In other words, lambda expressions can be systematically translated into the CCC vocabulary. Any (law-abiding) interpretation of that vocabulary is thus an interpretation of the lambda calculus. Besides intellectual curiosity, why might one care about interpreting lambda expressions in terms of CCCs other than the one we usually think of for functional programs?
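As an aside, the CCC vocabulary above is small enough to mimic in a few lines. Here is a rough sketch (in Python rather than Haskell, purely for illustration; the names mirror Haskell's id, (.), fst, snd, (&&&), curry and uncurry) of the sets-and-functions CCC, the interpretation we usually think of:

```python
# Morphisms are plain functions; objects are (implicitly) Python types/sets.
identity = lambda x: x
def compose(g, f): return lambda x: g(f(x))              # the category part
def fst(p):        return p[0]                           # cartesian part:
def snd(p):        return p[1]                           #   projections...
def fork(f, g):    return lambda x: (f(x), g(x))         #   ...and Haskell's (&&&)
def curry(f):      return lambda x: lambda y: f((x, y))  # closed part
def uncurry(f):    return lambda p: f(p[0])(p[1])
def apply(p):      return p[0](p[1])                     # eval: (A -> B, A) to B

# A lambda term like \x -> (x + 1, 2 * x) becomes pure CCC vocabulary:
inc = lambda x: x + 1
dbl = lambda x: 2 * x
term = fork(compose(inc, identity), compose(dbl, identity))
print(term(10))                                          # (11, 20)
print(uncurry(curry(lambda p: p[0] - p[1]))((7, 3)))     # curry round trip: 4
```

Swap in a different law-abiding implementation of these eight operations, say one that builds a netlist instead of computing values, and the same translated term describes a circuit; that is the point of the next paragraph.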
I got interested because I’ve been thinking about how to compile Haskell programs to “circuits”, both the standard static kind and more dynamic variants. Since Haskell is a typed lambda calculus, if we can formulate circuits as a CCC, we’ll have our Haskell-to-circuit compiler. Other interpretations enable analysis of timing and demand propagation (including strictness).

Some future topics:
• Converting lambda expressions to CCC form.
• Optimizing CCC expressions.
• Plugging into GHC, to convert from Haskell source to CCC.
• Applications of this translation, including the following:
  □ Circuits
  □ Timing analysis
  □ Strictness/demand analysis
  □ Type simplification (normalization)

6 Comments

1. Max: You are probably aware of GArrows (http://www.cs.berkeley.edu/~megacz/garrows/), which implements a GHC pass translating Haskell terms into a language of arrows, but if not you may find that link useful. 12 September 2013, 11:46 pm

2. Gabor Grief: Conal, are you aware of Adam Megacz’s work? http://www.cs.berkeley.edu/~megacz/garrows/ 13 September 2013, 1:41 pm

3. conal: Max & Gabor & others: Thanks much for the reminder about Adam’s generalized arrows work. I’m revisiting it now. 13 September 2013, 4:58 pm

4. Lev: Haskell to HDL is great news!! Being a VLSI engineer and a Haskell beginner, I am looking forward to the possibility of using Haskell for VLSI; I’m interested in it as a potential user. Would it be used as a DSL to describe invariants + temporal logic to be translated to HDL/netlist? Or maybe to concisely describe a netlist? Can you provide some motivational example, such as DSL -> DSL translation -> final representation? Regards, Lev 15 September 2013, 12:43 pm

5. Conal Elliott » Blog Archive » Circuits as a bicartesian closed category: [...] previous few posts have been about cartesian closed categories (CCCs). In From Haskell to hardware via [...] 16 September 2013, 2:52 pm

6. Muzaffer Kal: Hi, are you aware of this work: http://clash.ewi.utwente.nl/ClaSH/Home.html ? I’m currently getting deeper into Haskell and very interested in Haskell for hardware. 8 December 2013, 12:20 am
{"url":"http://conal.net/blog/posts/haskell-to-hardware-via-cccs","timestamp":"2014-04-19T11:57:15Z","content_type":null,"content_length":"67298","record_id":"<urn:uuid:648d4aea-5ab1-4a03-a48a-1828ac3e7f1e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
To find an element of a $\Pi^1_1$ set

I don't know the correct credit for the following: every non-empty $\Pi^1_1$ set of reals contains some $X \in L_{\alpha}$ for some $X$-recursive $\alpha$. (Addison-Kondo?)

So, my question is: what is the least $\beta$ such that $\mathcal{P}(\omega) \cap L_\beta$ is a basis for all non-empty $\Pi^1_1$ sets?

2 Answers

I can't access the full paper at the moment, but I'm pretty sure this is exactly the question addressed in the paper "A Note on the Kondo-Addison Theorem" by D. Guaspari.
Available online at www.jstor.org/stable/10.2307/2272898 – Trevor Wilson Jul 21 '12 at 14:15
Thanks a lot! The last theorem on obtainable ordinals is quite interesting. – Wei Wang Jul 22 '12 at 11:06

The least such ordinal is the least ordinal that is not the order type of a $\Delta^1_2$ well-ordering of the natural numbers. Let
$$\delta^1_2=\mbox{the supremum of the order types of the }\Delta^1_2 \mbox{ wellorderings of } \omega,$$
and
$$\delta=\min\{\alpha\mid L\setminus L_{\alpha}\mbox{ contains no }\Pi^1_1 \mbox{ singleton}\}.$$
We claim that $\delta=\delta^1_2$.

$\mathbf{Proof}$: If $\alpha<\delta$, then there is a $\Pi^1_1$ singleton $x \in L_{\delta}\setminus L_{\alpha}$. Since $x\in L_{\omega_1^x}$ and $\omega_1^x$ is a $\Pi^1_1(x)$-wellordering, it must be that $\alpha<\omega_1^x<\delta^1_2$. So $\delta\leq \delta^1_2$.

If $\alpha<\delta^1_2$, there is a $\Delta^1_2$ wellordering relation $R\subseteq \omega\times \omega$ of order type $\alpha$. So there are two arithmetical relations $S, T\subseteq (\omega^{\omega})^2\times \omega^2$ so that
$$R(n,m)\Leftrightarrow \exists f \forall g\, S(f,g,n,m), \mbox{ and}$$
$$\neg R(n,m)\Leftrightarrow \exists f \forall g\, T(f,g,n,m).$$
Define $\Pi^1_1$ sets
$$R_0=\{(h,\langle n,m\rangle)\mid h(0)=0\wedge \exists f\forall g (S(f,g,n,m)\wedge \forall n(f(n)=h(n+1)))\}$$
and
$$R_1=\{(h,\langle n,m\rangle)\mid h(0)=1\wedge \exists f\forall g (T(f,g,n,m)\wedge \forall n(f(n)=h(n+1)))\}.$$
By the $\Pi^1_1$-uniformization theorem, both can be uniformized by $\Pi^1_1$ partial functions $p_{R_0}:\omega\to \omega^{\omega}$ and $p_{R_1}: \omega\to \omega^{\omega}$. Let $p=p_{R_0} \cup p_{R_1}$. Then $p$ is a $\Pi^1_1$ total function and can be viewed as a $\Pi^1_1$ singleton. Then $R$ is recursive in $p$ and so $\alpha<\omega_1^p$.

Thus $\delta^1_2=\delta$.

Thanks a lot! It is a nice proof. – Wei Wang Jul 22 '12 at 11:07
{"url":"https://mathoverflow.net/questions/102810/to-find-an-element-of-a-pi1-1-set/102850","timestamp":"2014-04-16T14:16:35Z","content_type":null,"content_length":"56452","record_id":"<urn:uuid:1d54bf4f-5e1a-49df-8dba-fbdfe2aba766>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical English Usage - a Dictionary by Jerzy Trzeciak

[see also: original]
He was the first to propose a complete theory of triple intersections.
Because N. Wiener is recognized as the first to have constructed such a measure, the measure is often called the Wiener measure.
Let S[i] be the first of the remaining S[j].
The first two are simpler than the third. [Or: the third one; not: “The first two ones”]
As a first step we shall bound A below.
At first glance, this appears to be a strange definition.
the first and third terms in (5)
the first author = the first-named author

[see also: initially, originally, beginning]
First, we prove (2). [Not: “At first”]
We first prove a reduced form of the theorem.
Suppose first that......
His method of proof was to first exhibit a map......
In Lemma 6.1, the independence of F from V is surprising at first.
It might seem at first that the only obstacle is the fact that the group is not compact.
[Note the difference between first and at first: first refers to something that precedes everything else in a series, while at first [= initially] implies a contrast with what happens later.]
{"url":"http://www.emis.de/monographs/Trzeciak/glossae/first.html","timestamp":"2014-04-19T04:27:11Z","content_type":null,"content_length":"2221","record_id":"<urn:uuid:941ca833-6dcb-4207-b902-b58a866af343>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
Modelling and solving train scheduling problems under capacity constraints

Liu, Shi-Qiang (2008) Modelling and solving train scheduling problems under capacity constraints. PhD thesis, Queensland University of Technology.

Many large coal mining operations in Australia rely heavily on the rail network to transport coal from mines to coal terminals at ports for shipment. Over the last few years, due to the fast growing demand, the coal rail network is becoming one of the worst industrial bottlenecks in Australia. As a result, this provides great incentives for pursuing better optimisation and control strategies for the operation of the whole rail transportation system under network and terminal capacity constraints. This PhD research aims to achieve a significant efficiency improvement in a coal rail network on the basis of the development of standard modelling approaches and generic solution techniques.

Generally, the train scheduling problem can be modelled as a Blocking Parallel-Machine Job-Shop Scheduling (BPMJSS) problem. In a BPMJSS model for train scheduling, trains and sections respectively are synonymous with jobs and machines, and an operation is regarded as the movement/traversal of a train across a section. To begin, an improved shifting bottleneck procedure algorithm combined with metaheuristics has been developed to efficiently solve the Parallel-Machine Job-Shop Scheduling (PMJSS) problems without the blocking conditions.

Due to the lack of buffer space, real-life train scheduling should consider blocking or hold-while-wait constraints, which means that a track section cannot release and must hold a train until the next section on the routing becomes available. As a consequence, the problem has been considered as BPMJSS with the blocking conditions. To develop efficient solution techniques for BPMJSS, extensive studies on the non-classical scheduling problems regarding the various buffer conditions (i.e. blocking, no-wait, limited-buffer, unlimited-buffer and combined-buffer) have been done. In this procedure, an alternative graph, as an extension of the classical disjunctive graph, is developed and specially designed for the non-classical scheduling problems such as the blocking flow-shop scheduling (BFSS), no-wait flow-shop scheduling (NWFSS), and blocking job-shop scheduling (BJSS) problems. By exploring the blocking characteristics based on the alternative graph, a new algorithm called the topological-sequence algorithm is developed for solving the non-classical scheduling problems. To indicate the preeminence of the proposed algorithm, we compare it with two known algorithms (i.e. Recursive Procedure and Directed Graph) in the literature. Moreover, we define a new type of non-classical scheduling problem, called combined-buffer flow-shop scheduling (CBFSS), which covers four extreme cases: the classical FSS (FSS) with infinite buffer, the blocking FSS (BFSS) with no buffer, the no-wait FSS (NWFSS) and the limited-buffer FSS (LBFSS). After exploring the structural properties of CBFSS, we propose an innovative constructive algorithm named the LK algorithm to construct the feasible CBFSS schedule. Detailed numerical illustrations for the various cases are presented and analysed. By adjusting only the attributes in the data input, the proposed LK algorithm is generic and enables the construction of the feasible schedules for many types of non-classical scheduling problems with different buffer constraints.
Inspired by the shifting bottleneck procedure algorithm for PMJSS and characteristic analysis based on the alternative graph for non-classical scheduling problems, a new constructive algorithm called the Feasibility Satisfaction Procedure (FSP) is proposed to obtain the feasible BPMJSS solution. A real-world train scheduling case is used for illustrating and comparing the PMJSS and BPMJSS models. Some real-life applications, including considering the train length, upgrading the track sections, accelerating a tardy train and changing the bottleneck sections, are discussed.

Furthermore, the BPMJSS model is generalised to be a No-Wait Blocking Parallel-Machine Job-Shop Scheduling (NWBPMJSS) problem for scheduling the trains with priorities, in which prioritised trains such as express passenger trains are considered simultaneously with non-prioritised trains such as freight trains. In this case, no-wait conditions, which are more restrictive constraints than blocking constraints, arise when considering the prioritised trains that should traverse continuously without any interruption or any unplanned pauses because of the high cost of waiting during travel. In comparison, non-prioritised trains are allowed to enter the next section immediately if possible, or to remain in a section until the next section on the routing becomes available. Based on the FSP algorithm, a more generic algorithm called the SE algorithm is developed to solve a class of train scheduling problems in terms of different conditions in train scheduling environments. To construct the feasible train schedule, the proposed SE algorithm consists of many individual modules, including the feasibility-satisfaction procedure, time-determination procedure, tune-up procedure and conflict-resolve procedure algorithms. To find a good train schedule, a two-stage hybrid heuristic algorithm called the SE-BIH algorithm is developed by combining the constructive heuristic (i.e. the SE algorithm) and the local-search heuristic (i.e. the Best-Insertion-Heuristic algorithm). To optimise the train schedule, a three-stage algorithm called the SE-BIH-TS algorithm is developed by combining the tabu search (TS) metaheuristic with the SE-BIH algorithm.

Finally, a case study is performed for a complex real-world coal rail network under network and terminal capacity constraints. The computational results validate that the proposed methodology would be very promising, because it can be applied as a fundamental tool for modelling and solving many real-world scheduling problems.
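The blocking (hold-while-wait) constraint at the core of the BPMJSS model is simple to state in code. The toy sketch below (my own illustration with made-up section data, not one of the thesis algorithms) pushes two trains through a sequence of single-track sections: a train may have to sit in a section after finishing its traversal, because under blocking a section is released only when the train actually enters the next one.

```python
# Hypothetical data: three consecutive single-track sections and two trains.
sections = ["S1", "S2", "S3"]
run_time = {"S1": 4, "S2": 6, "S3": 3}     # minutes to traverse each section
trains = [("T1", 0), ("T2", 2)]            # (train, departure time), in order

free_at = {s: 0 for s in sections}         # earliest time each section is free
for name, depart in trains:
    t, entries = depart, []
    for i, s in enumerate(sections):
        t = max(t, free_at[s])             # wait until the section is available
        entries.append((s, t))
        t += run_time[s]                   # reach the far end of the section
        # Blocking: s is released only at the entry time into the *next*
        # section, which may be later than t if that section is occupied.
        free_at[s] = max(t, free_at[sections[i + 1]]) if i + 1 < len(sections) else t
    print(name, "enters sections at", entries)
```

With these numbers, T2 finishes traversing S1 at minute 8 but holds it until minute 10, when T1 vacates S2; that hold-while-wait effect is exactly what distinguishes BPMJSS from the classical job shop.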
ID Code: 37181
Item Type: QUT Thesis (PhD)
Supervisor: Kozan, Erhan & Anh, Vo
Keywords: Railroads Train dispatching, Railroads Management, Scheduling Mathematical models, thesis, doctoral
Divisions: Past > QUT Faculties & Divisions > Faculty of Science and Technology; Past > Schools > Mathematical Sciences
Institution: Queensland University of Technology
Deposited On: 22 Sep 2010 23:07
Last Modified: 16 Feb 2012 11:17
{"url":"http://eprints.qut.edu.au/37181/","timestamp":"2014-04-18T06:52:46Z","content_type":null,"content_length":"37620","record_id":"<urn:uuid:d10f01b3-2c1f-491e-96b4-b8c21a4b8fde>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
Natural number

Natural number can mean either a positive integer (1, 2, 3, ...) or a non-negative integer (0, 1, 2, 3, ...). These are the first numbers learned by children, and the easiest to understand. Natural numbers have two main purposes: they can be used for counting ("there are 3 apples on the table"), or they can be used for ordering ("this is the 3rd largest city in the country"). Properties of the natural numbers related to divisibility, such as the distribution of prime numbers, are studied in number theory. Problems concerning counting, such as Ramsey theory, are studied in combinatorics.

History of natural numbers and the status of zero

The natural numbers presumably had their origins in the words used to count things, beginning with the number one. The first major advance in abstraction was the use of numerals to represent numbers. This allowed systems to be developed for recording large numbers. For example, the Babylonians developed a powerful place-value system based essentially on the numerals for 1 and 10. The ancient Egyptians had a system of numerals with distinct hieroglyphs for 1, 10, and all the powers of 10 up to one million. A stone carving from Karnak, dating from around 1500 BC and now at the Louvre in Paris, depicts 276 as 2 hundreds, 7 tens, and 6 ones; and similarly for the number 4,622.

A much later advance in abstraction was the development of the idea of zero as a number with its own numeral. A zero digit had been used in place-value notation as early as 700 BC by the Babylonians, but it was never used as a final element.¹ The Olmec and Maya civilizations used zero as a separate number as early as the 1st century BC, apparently developed independently, but they did not pass it along to anyone outside of Mesoamerica. The modern concept dates to the Indian mathematician Brahmagupta in 628 AD. It took more than five centuries for European mathematicians to accept zero as a number, and even when they did, it was not counted as a natural number.

The first systematic study of numbers as abstractions (that is, as abstract entities) is usually credited to the Greek philosophers Pythagoras and Archimedes. However, independent studies also occurred at around the same time in India, China, and Mesoamerica.

In the nineteenth century, a set-theoretical definition of natural numbers was developed. With this definition, it was more convenient to include zero (corresponding to the empty set) as a natural number. Wikipedia follows this convention, as do set theorists, logicians, and computer scientists. Other mathematicians, primarily number theorists, often prefer to follow the older tradition and exclude zero from the natural numbers. The term whole number is used informally by some authors for an element of the set of integers, the set of non-negative integers, or the set of positive integers.

Mathematicians use N or ℕ (an N in blackboard bold) to refer to the set of all natural numbers. This set is infinite but countable by definition. To be unambiguous about whether zero is included, the following are sometimes used to indicate the positive integers: N⁺ or N*; and the following are sometimes used to indicate the nonnegative integers: N₀. W is sometimes used to refer to the set of whole numbers, by authors who do not identify it with the integers.

Formal definitions

Arriving at a precise mathematical definition of the natural numbers has not been easy. The Peano postulates state conditions that any successful definition must satisfy:
• There is a natural number 0.
• Every natural number a has a successor, denoted by S(a).
• There is no natural number whose successor is 0.
• Distinct natural numbers have distinct successors: if a ≠ b, then S(a) ≠ S(b).
• If a property is possessed by 0 and also by the successor of every natural number which possesses it, then it is possessed by all natural numbers. (This postulate ensures that the proof technique of mathematical induction is valid.)

It should be noted that the "0" in the above definition need not correspond to what we normally consider to be the number zero. "0" simply means some object that, when combined with an appropriate successor function, satisfies the Peano axioms.

A standard construction in set theory is to define the natural numbers as follows: We set 0 := { } and define S(a) = a ∪ {a} for all a. The set of natural numbers is then defined to be the intersection of all sets containing 0 which are closed under the successor function. Assuming the axiom of infinity, this definition can be shown to satisfy the Peano axioms. Each natural number is then equal to the set of natural numbers less than it, so that
1 = {0} = {{ }}
2 = {0, 1} = {0, {0}} = {{ }, {{ }}}
3 = {0, 1, 2} = {0, {0}, {0, {0}}} = {{ }, {{ }}, {{ }, {{ }}}}
and so on. When you see a natural number used as a set, this is typically what is meant. Under this definition, there are exactly n elements (in the naive sense) in the set n, and n ≤ m (in the naive sense) iff n is a subset of m.

Although this particular construction is useful, it is not the only possible construction. For example: one could define 0 = { } and S(a) = {a}, giving
0 = { }
1 = {0} = {{ }}
2 = {1} = {{{ }}}, etc.
Or we could even define 0 = {{ }} and S(a) = a ∪ {a}, giving
0 = {{ }}
1 = {{ }, 0} = {{ }, {{ }}}
2 = {{ }, 0, 1}, etc.

For the rest of this article, we follow the standard construction described first above.

One can recursively define an addition on the natural numbers by setting a + 0 = a and a + S(b) = S(a + b) for all a, b. This turns the natural numbers (N, +) into a commutative monoid with identity element 0, the so-called free monoid with one generator. This monoid satisfies the cancellation property and can therefore be embedded in a group. The smallest group containing the natural numbers is the integers. If we define S(0) := 1, then S(b) = S(b + 0) = b + S(0) = b + 1; i.e., the successor of b is simply b + 1.

Analogously, given that addition has been defined, a multiplication * can be defined via a * 0 = 0 and a * (b + 1) = (a * b) + a. This turns (N, *) into a commutative monoid with identity element 1; a generator set for this monoid is the set of prime numbers. Addition and multiplication are compatible, which is expressed in the distributive law: a * (b + c) = (a * b) + (a * c). These properties of addition and multiplication make the natural numbers an instance of a commutative semiring. Semirings are an algebraic generalization of the natural numbers where multiplication is not necessarily commutative. For the remainder of the article, we write ab to indicate the product a * b, and we also assume the standard order of operations.

Furthermore, one defines a total order on the natural numbers by writing a ≤ b if and only if there exists another natural number c with a + c = b. This order is compatible with the arithmetical operations in the following sense: if a, b and c are natural numbers and a ≤ b, then a + c ≤ b + c and ac ≤ bc. An important property of the natural numbers is that they are well-ordered: every non-empty set of natural numbers has a least element.
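The recursive definitions of addition and multiplication above are directly executable. Here is a minimal sketch (an illustration, with zero as a tag and S wrapping its argument, rather than the set-theoretic encoding):

```python
# Peano naturals: Z is zero, ("S", n) is the successor S(n).
Z = "Z"
def S(n): return ("S", n)

def add(a, b):                 # a + 0 = a;  a + S(b) = S(a + b)
    return a if b == Z else S(add(a, b[1]))

def mul(a, b):                 # a * 0 = 0;  a * (b + 1) = (a * b) + a
    return Z if b == Z else add(mul(a, b[1]), a)

def to_int(n):                 # unwrap successors to an ordinary integer
    k = 0
    while n != Z:
        k, n = k + 1, n[1]
    return k

two, three = S(S(Z)), S(S(S(Z)))
print(to_int(add(two, three)), to_int(mul(two, three)))   # prints: 5 6
```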
While it is in general not possible to divide one natural number by another and get a natural number as result, the procedure of division with remainder is available as a substitute: for any two natural numbers a and b with b ≠ 0, we can find natural numbers q and r such that
a = bq + r and r < b.
The number q is called the quotient and r is called the remainder of division of a by b. The numbers q and r are uniquely determined by a and b. This, the division algorithm, is key to several other properties (divisibility), algorithms (such as the Euclidean algorithm), and ideas in number theory.

Two generalizations of natural numbers arise from the two uses: ordinal numbers are used to describe the position of an element in an ordered sequence, and cardinal numbers are used to specify the size of a given set. For finite sequences or finite sets, both of these properties are embodied in the natural numbers. Other generalizations are discussed in the article on numbers.

¹ "... a tablet found at Kish ... thought to date from around 700 BC, uses three hooks to denote an empty place in the positional notation. Other tablets dated from around the same time use a single hook for an empty place." [1]
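Returning to division with remainder: the defining property a = bq + r with r < b makes a convenient executable check. A small sketch (using ordinary machine integers rather than the set-theoretic naturals, just for illustration):

```python
def div_rem(a, b):
    """Return (q, r) with a = b*q + r and 0 <= r < b, by repeated
    subtraction, mirroring the defining property rather than using divmod."""
    assert b != 0
    q, r = 0, a
    while r >= b:
        q, r = q + 1, r - b
    return q, r

for a, b in [(17, 5), (9, 3), (4, 7)]:
    q, r = div_rem(a, b)
    assert a == b * q + r and 0 <= r < b   # the defining conditions
    print(f"{a} = {b}*{q} + {r}")
```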
{"url":"http://july.fixedreference.org/en/20040724/wikipedia/Natural_number","timestamp":"2014-04-18T16:45:51Z","content_type":null,"content_length":"18821","record_id":"<urn:uuid:604bc848-fed1-45f7-9662-fbc52fef2b47>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra/Number Theory/Combinatorics Seminar
Tuesdays 12:15 - 1:10 PM, Millikan 211
Pomona College, Department of Mathematics
610 N. College Ave. (Corner of 6th and College Ave.)
Claremont, CA 91711
For more information contact: Gizem Karaali, email: Gizem.Karaali@pomona.edu

Abstracts

• Interlocking Linkages
Julie Glass (California State University East Bay)
This talk will introduce the audience to some of the history and basic ideas used in the study of chains in the area of computational geometry. A chain is a collection of rigid bars connected at their vertices (also known as a linkage), which form a simple path (an open chain) or a simple cycle (a closed chain). A folding of a chain (or any linkage) is a certain reconfiguration obtained by moving the vertices. A collection of chains are said to be interlocked if they cannot be separated by foldings. This talk will explain some standard techniques using geometry and knot theory to address the problem of when linkages are interlocked. Finally, we will answer the question, "Can a 2-chain and a k-chain be interlocked?" This talk will be accessible to a broad audience.

• On Siegel's lemma
Lenny Fukshansky (Claremont McKenna College)
Siegel's lemma in its simplest form is a statement about the existence of small-size solutions to a system of linear equations with integer coefficients: such results were originally motivated by their applications in transcendence. A modern version of this classical theorem guarantees the existence of a whole basis of small "size" for a vector space over a global field (that is, a number field, a function field, or their algebraic closures). The role of size is played by a height function, an important tool from Diophantine geometry, which measures the "arithmetic complexity" of points. For many applications it is also important to have a version of Siegel's lemma with some additional algebraic conditions placed on the points in question. I will discuss the classical versions of Siegel's lemma, along with my recent results on the existence of points of bounded height in a vector space outside of a finite union of varieties over a global field.

• A very brief introduction to fields of norms
Ghassan Sarkis (Pomona College)
We will present the field-of-norms construction of Fontaine and Wintenberger, which associates certain totally ramified extensions of local fields with positive-characteristic fields in a way that relates the Galois group of the extension to a subgroup of automorphisms of the positive-characteristic field. Time permitting, we will discuss applications of the field-of-norms theory to p-adic dynamical systems.

• Inconsistencies with Nonparametric Procedures and Statistics: Combinatoric and Asymptotic Results
Anna E. Bargagliotti (University of Memphis)
Nonparametric statistical tests can be used to differentiate among alternatives. Each test is uniquely identified with a procedure that analyzes ranked data. Procedure results are then incorporated into a test statistic. Inconsistencies among tests occur at both the procedure level and the statistic level. In this talk, I will characterize symmetry structures of data that explain why different procedures can output different rankings when analyzing the same data.
In addition, I will quantify the number of ways that two ranked data sets can be aggregated and define a strict condition data must satisfy in order to ensure consistent procedure results. Finally, I will discuss how procedure inconsistencies affect the test statistics. Using the Kruskal-Wallis test as an example, I will outline how to asymptotically find the probability with which the null is rejected.

• More algebraic structures from knot theory
Sam Nelson (Claremont McKenna College)
Quandles are a type of non-associative algebraic structure defined from the combinatorics of knot diagrams. In this talk we will recall the basics of quandle theory and look at some generalizations of quandles, including biquandles, racks and biracks. If time permits, we will also look at tangle functors and a connection to Hopf algebras.

• The Applied Mathematics of the Nottingham Group
Jonathan Lubin (Brown University)
If k is a finite field, say with p^n elements, then we may form the group of all formal power series u(x) ∈ k[[x]] for which u(0) = 0, u'(0) = 1, the group operation being substitution (composition). This group is often called the Nottingham group over k. It is a pro-p-group, i.e. the projective limit of finite p-groups, simple enough in definition, but in many ways very mysterious in behavior. Camina has shown that every finite p-group can be embedded in Nottingham, and Klopsch has classified all the conjugacy classes of elements of order p. They remarked a while back that they did not know of any explicitly given elements of order even as low as p^2. In this talk I will apply old mathematics to give a description of how to construct all elements of the Nottingham group of p-power order, and to give a classification up to conjugacy. But a characterization of the conjugacy classes that is as satisfactory as Klopsch's seems elusive.

• An introduction to Gromov-Witten theory
Dagan Karp (Harvey Mudd College)
In this talk I hope to give an introduction to Gromov-Witten theory, touching on its string-theoretic origins, applications to enumerative geometry, and the perspective of geometric moduli. Recent theorems and conjectures may also be discussed.

• Groupoidification
Alex Hoffnung (University of California Riverside)
"Groupoidification" attempts to take familiar structures from linear algebra and enhance them to obtain structures involving groupoids. This process is not entirely systematic, however. The reverse process, "degroupoidification", is systematic and, combined with examples, sheds light on how to achieve the former. We describe the latter process and some examples, including the groupoidification of Hecke algebras.

• Bounds on self-dual codes and lattices
Eric Rains (Caltech)
A number of particularly interesting low-dimensional codes and lattices have the extra property of being equal to (or, for lattices, similar to) their duals; as a result, it is natural to wonder to what extent self-duality constrains the minimum distance of such a code or lattice. The first significant result in this direction was that of Mallows and Sloane, who showed that a doubly-even self-dual binary code of length n has minimum distance at most 4⌊n/24⌋+4, and, with Odlyzko, obtained an analogous result for lattices. Without the extra evenness assumption, they obtained a much weaker bound; in fact, as I will show, this gap between singly-even and doubly-even codes is illusory: the bound 4⌊n/24⌋+4 holds for essentially all self-dual binary codes.
For asymptotic bounds, the best result for doubly-even binary codes is that of Krasikov and Litsyn, who showed d ≤ Dn + o(n), where D = (1 − 5^(−1/4))/2 ≈ 0.165629. I'll discuss a different proof of their bound, applicable to other types of codes and lattices, in particular showing that for any positive constant c, there are only finitely many self-dual binary codes satisfying d ≥ Dn − c√n.

• Beyond Moonshine
Geoffrey Buhl (California State University Channel Islands)
Mathematically, "Moonshine" refers to the unexpected relationship between the largest sporadic simple group, the Monster, and the modular function j. One of the products of the study and proof of the Moonshine conjectures is a new class of algebraic objects called vertex operator algebras. Surprisingly, these objects are exactly the so-called chiral algebras of string theory. For certain vertex operator algebras, there is an associated modular function, generalizing one aspect of the Moonshine conjectures. In this talk I will describe the Moonshine conjectures, give a definition of vertex operator algebras, and describe which vertex operator algebras have modularity properties.

• Using Group Theory and Graph Theory to Build Fast Communications Networks: A Brief Introduction to Expanders and Ramanujan Graphs
Michael Krebs (California State University Los Angeles)
Think of a graph as a communications network. Putting in edges (e.g., fiber optic cables, telephone lines) is expensive, so we wish to limit the number of edges in the graph. At the same time, we would like messages in the graph to spread as rapidly as possible. We will see that the speed of communication is closely related to the eigenvalues of the graph's adjacency matrix. Essentially, the smaller the eigenvalues are, the faster messages spread. It turns out that there is a bound, due to Serre and others, on how small the eigenvalues can be. This gives us a rough sense of what it means for graphs to represent "optimal" communications networks; we call these Ramanujan graphs. Families of k-regular Ramanujan graphs have been constructed in this manner by Sarnak and others whenever k minus one equals a power of a prime number. No one knows whether families of k-regular Ramanujan graphs exist for all k.

• Minimal Triangulations of Cubes and Simplotopes
Francis Su (Harvey Mudd College)
In this talk, I will describe recent progress on the question of determining the smallest triangulation of a d-dimensional cube, and more generally, the smallest triangulation of a simplotope: the product of simplices. Some interesting combinatorial insights come out of the geometry.
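The eigenvalue bound in the last abstract is easy to test on a small example. A quick numerical sketch (my own illustration with numpy; the Petersen graph is chosen only as a convenient 3-regular graph): a connected k-regular graph is Ramanujan when every adjacency eigenvalue other than ±k has absolute value at most 2√(k − 1), the cutoff coming from the Serre/Alon-Boppana-type lower bound mentioned above.

```python
import numpy as np

# Petersen graph: 3-regular on 10 vertices (outer 5-cycle, spokes, inner pentagram).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),      # outer cycle
         (0, 5), (1, 6), (2, 7), (3, 8), (4, 9),      # spokes
         (5, 7), (7, 9), (9, 6), (6, 8), (8, 5)]      # inner pentagram
A = np.zeros((10, 10))
for i, j in edges:
    A[i, j] = A[j, i] = 1

k = int(A.sum(axis=1)[0])                             # regularity degree, here 3
eig = np.sort(np.linalg.eigvalsh(A))                  # symmetric, so real spectrum
nontrivial = [x for x in eig if abs(abs(x) - k) > 1e-9]
bound = 2 * np.sqrt(k - 1)
print("spectrum:", np.round(eig, 3))
print("Ramanujan:", all(abs(x) <= bound + 1e-9 for x in nontrivial))
```

The spectrum comes out as −2 (four times), 1 (five times) and 3, all within the 2√2 ≈ 2.83 window, so the Petersen graph is Ramanujan.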
{"url":"http://pages.pomona.edu/~sshahriari/AlgCombSeminar-Fall08.html","timestamp":"2014-04-16T13:09:09Z","content_type":null,"content_length":"20280","record_id":"<urn:uuid:bba38a64-9ad7-4809-a999-9a66ba97196e>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
Enlarging motivic constructible functions to exponentials

I shall present recent work with Raf Cluckers about adding exponential functions to constructible motivic functions. We show this enlarged class of functions is stable under integration and develop a Fourier transformation in this setting. We shall end the talk by stating a version of the Ax-Kochen-Ersov Theorem for these functions.
{"url":"http://www.newton.ac.uk/programmes/MAA/loeser.html","timestamp":"2014-04-19T17:14:23Z","content_type":null,"content_length":"2260","record_id":"<urn:uuid:1b821a36-0770-428d-b177-4d51976152da>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
Amenable groups not containing free semigroups

It is known that amenable groups do not contain free subgroups (of rank > 1). But there are amenable groups containing free semigroups. Which amenable groups cannot contain free semigroups?

This is the answer to the question asked by Henry. The wreath product $\mathbb Z_2 {\rm wr} G$, where $G$ is the Grigorchuk (torsion) group of subexponential growth, obviously has exponential growth and is amenable and torsion. In particular, it has no free subsemigroups.

For elementary amenable (in particular, solvable) groups, existence of non-cyclic free subsemigroups is equivalent to exponential growth [C. Chou, Elementary amenable groups, Illinois J. Math. 24 (1980), no. 3, 396-407].
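A concrete instance of Chou's dichotomy can even be checked by machine (a sketch; the affine maps below are a standard textbook example, not taken from this thread): the maps a(x) = 2x and b(x) = 2x + 1 lie in the solvable, hence amenable, group of affine transformations of the rationals, and the semigroup they generate is free, consistent with that group having exponential growth.

```python
from itertools import product

# Represent the affine map f(x) = m*x + c as the pair (m, c).
a = (2, 0)                       # x -> 2x
b = (2, 1)                       # x -> 2x + 1
def comp(f, g):                  # (f . g)(x) = f(g(x))
    return (f[0] * g[0], f[0] * g[1] + f[1])

for n in range(1, 8):
    maps = set()
    for word in product([a, b], repeat=n):
        m = (1, 0)               # the identity map
        for letter in word:
            m = comp(m, letter)
        maps.add(m)
    # Freeness up to length n: all 2^n words define distinct transformations.
    print(n, len(maps) == 2 ** n)
```

Every length prints True: distinct words act differently (the constant term reads off the word in binary), so no relation holds and the subsemigroup is free.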
{"url":"http://mathoverflow.net/questions/56778/amenable-groups-not-containing-free-semigroups","timestamp":"2014-04-18T01:08:31Z","content_type":null,"content_length":"51844","record_id":"<urn:uuid:6676d9c0-6a1e-4470-9f01-8b0adb256ba6>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00008-ip-10-147-4-33.ec2.internal.warc.gz"}
Computational chemistry
From Wikipedia, the free encyclopedia

Computational chemistry is a branch of chemistry that uses principles of computer science to assist in solving chemical problems. It uses the results of theoretical chemistry, incorporated into efficient computer programs, to calculate the structures and properties of molecules and solids. While its results normally complement the information obtained by chemical experiments, it can in some cases predict hitherto unobserved chemical phenomena. It is widely used in the design of new drugs and materials.

Examples of such properties are structure (i.e. the expected positions of the constituent atoms), absolute and relative (interaction) energies, electronic charge distributions, dipoles and higher multipole moments, vibrational frequencies, reactivity or other spectroscopic quantities, and cross sections for collision with other particles.

The methods employed cover both static and dynamic situations. In all cases the computer time and other resources (such as memory and disk space) increase rapidly with the size of the system being studied. That system can be a single molecule, a group of molecules, or a solid. Computational chemistry methods range from highly accurate to very approximate; highly accurate methods are typically feasible only for small systems. Ab initio methods are based entirely on theory from first principles. Other (typically less accurate) methods are called empirical or semi-empirical because they employ experimental results, often from acceptable models of atoms or related molecules, to approximate some elements of the underlying theory.

Both ab initio and semi-empirical approaches involve approximations. These range from simplified forms of the first-principles equations that are easier or faster to solve, to approximations limiting the size of the system (for example, periodic boundary conditions), to fundamental approximations to the underlying equations that are required to achieve any solution to them at all. For example, most ab initio calculations make the Born–Oppenheimer approximation, which greatly simplifies the underlying Schrödinger equation by freezing the nuclei in place during the calculation. In principle, ab initio methods eventually converge to the exact solution of the underlying equations as the number of approximations is reduced. In practice, however, it is impossible to eliminate all approximations, and residual error inevitably remains. The goal of computational chemistry is to minimize this residual error while keeping the calculations tractable.

In some cases, the details of electronic structure are less important than the long-time phase space behavior of molecules. This is the case in conformational studies of proteins and protein-ligand binding thermodynamics. Classical approximations to the potential energy surface are employed, as they are computationally less intensive than electronic calculations, to enable longer simulations of molecular dynamics. Furthermore, cheminformatics uses even more empirical (and computationally cheaper) methods like machine learning based on physicochemical properties. One typical problem in cheminformatics is to predict the binding affinity of drug molecules to a given target.

Building on the founding discoveries and theories in the history of quantum mechanics, the first theoretical calculations in chemistry were those of Walter Heitler and Fritz London in 1927.
The books that were influential in the early development of computational quantum chemistry include Linus Pauling and E. Bright Wilson's 1935 Introduction to Quantum Mechanics – with Applications to Chemistry, Eyring, Walter and Kimball's 1944 Quantum Chemistry, Heitler's 1945 Elementary Wave Mechanics – with Applications to Quantum Chemistry, and later Coulson's 1952 textbook Valence, each of which served as primary references for chemists in the decades to follow.

With the development of efficient computer technology in the 1940s, the solutions of elaborate wave equations for complex atomic systems began to be a realizable objective. In the early 1950s, the first semi-empirical atomic orbital calculations were carried out. Theoretical chemists became extensive users of the early digital computers. A very detailed account of such use in the United Kingdom is given by Smith and Sutcliffe.^[1] The first ab initio Hartree–Fock calculations on diatomic molecules were carried out in 1956 at MIT, using a basis set of Slater orbitals. For diatomic molecules, a systematic study using a minimum basis set and the first calculation with a larger basis set were published by Ransil and Nesbet respectively in 1960.^[2] The first polyatomic calculations using Gaussian orbitals were carried out in the late 1950s. The first configuration interaction calculations were carried out in Cambridge on the EDSAC computer in the 1950s using Gaussian orbitals by Boys and coworkers.^[3] By 1971, when a bibliography of ab initio calculations was published,^[4] the largest molecules included were naphthalene and azulene.^[5]^[6] Abstracts of many earlier developments in ab initio theory have been published by Schaefer.^[7]

In 1964, Hückel method calculations (using a simple linear combination of atomic orbitals (LCAO) method for the determination of electron energies of molecular orbitals of π electrons in conjugated hydrocarbon systems) of molecules ranging in complexity from butadiene and benzene to ovalene, were generated on computers at Berkeley and Oxford.^[8] These empirical methods were replaced in the 1960s by semi-empirical methods such as CNDO.^[9]

In the early 1970s, efficient ab initio computer programs such as ATMOL, GAUSSIAN, IBMOL, and POLYAYTOM, began to be used to speed up ab initio calculations of molecular orbitals. Of these four programs, only GAUSSIAN, now massively expanded, is still in use, alongside many other programs developed since. At the same time, the methods of molecular mechanics, such as MM2, were developed, primarily by Norman Allinger.^[10]

One of the first mentions of the term "computational chemistry" can be found in the 1970 book Computers and Their Role in the Physical Sciences by Sidney Fernbach and Abraham Haskell Taub, where they state "It seems, therefore, that 'computational chemistry' can finally be more and more of a reality."^[11] During the 1970s, widely different methods began to be seen as part of a new emerging discipline of computational chemistry.^[12] The Journal of Computational Chemistry was first published in 1980.

The term theoretical chemistry may be defined as a mathematical description of chemistry, whereas computational chemistry is usually used when a mathematical method is sufficiently well developed that it can be automated for implementation on a computer. Note that the words exact and perfect do not appear here, as very few aspects of chemistry can be computed exactly.
However, almost every aspect of chemistry can be described in a qualitative or approximate quantitative computational scheme. Molecules consist of nuclei and electrons, so the methods of quantum mechanics apply. Computational chemists often attempt to solve the non-relativistic Schrödinger equation, with relativistic corrections added, although some progress has been made in solving the fully relativistic Dirac equation. In principle, it is possible to solve the Schrödinger equation in either its time-dependent or time-independent form, as appropriate for the problem in hand; in practice, this is not possible except for very small systems. Therefore, a great number of approximate methods strive to achieve the best trade-off between accuracy and computational cost.

Accuracy can always be improved with greater computational cost. Significant errors can present themselves in ab initio models comprising many electrons, due to the computational expense of full relativistic-inclusive methods. This complicates the study of molecules interacting with high atomic mass unit atoms, such as transition metals and their catalytic properties. Present algorithms in computational chemistry can routinely calculate the properties of molecules that contain up to about 40 electrons with sufficient accuracy. Errors for energies can be less than a few kJ/mol. For geometries, bond lengths can be predicted within a few picometres and bond angles within 0.5 degrees. The treatment of larger molecules that contain a few dozen electrons is computationally tractable by approximate methods such as density functional theory (DFT). There is some dispute within the field about whether or not the latter methods are sufficient to describe complex chemical reactions, such as those in biochemistry. Large molecules can be studied by semi-empirical approximate methods. Even larger molecules are treated by classical mechanics methods that employ what are called molecular mechanics. In QM/MM methods, small portions of large complexes are treated quantum mechanically (QM), and the remainder is treated approximately (MM).

In theoretical chemistry, chemists, physicists and mathematicians develop algorithms and computer programs to predict atomic and molecular properties and reaction paths for chemical reactions. Computational chemists, in contrast, may simply apply existing computer programs and methodologies to specific chemical questions. There are two different aspects to computational chemistry:
• Computational studies can be carried out to find a starting point for a laboratory synthesis, or to assist in understanding experimental data, such as the position and source of spectroscopic peaks.
• Computational studies can be used to predict the possibility of so far entirely unknown molecules or to explore reaction mechanisms that are not readily studied by experimental means.

Thus, computational chemistry can assist the experimental chemist or it can challenge the experimental chemist to find entirely new chemical objects.

Several major areas may be distinguished within computational chemistry:
• The prediction of the molecular structure of molecules by the use of the simulation of forces, or more accurate quantum chemical methods, to find stationary points on the energy surface as the position of the nuclei is varied.
• Storing and searching for data on chemical entities (see chemical databases).
• Identifying correlations between chemical structures and properties (see QSPR and QSAR).
• Computational approaches to help in the efficient synthesis of compounds.
• Computational approaches to design molecules that interact in specific ways with other molecules (e.g. drug design and catalysis).

A single molecular formula can represent a number of molecular isomers. Each isomer is a local minimum on the energy surface (called the potential energy surface) created from the total energy (i.e., the electronic energy, plus the repulsion energy between the nuclei) as a function of the coordinates of all the nuclei. A stationary point is a geometry such that the derivative of the energy with respect to all displacements of the nuclei is zero. A local (energy) minimum is a stationary point where all such displacements lead to an increase in energy. The local minimum that is lowest is called the global minimum and corresponds to the most stable isomer. If there is one particular coordinate change that leads to a decrease in the total energy in both directions, the stationary point is a transition structure and the coordinate is the reaction coordinate. This process of determining stationary points is called geometry optimization.

The determination of molecular structure by geometry optimization became routine only after efficient methods for calculating the first derivatives of the energy with respect to all atomic coordinates became available. Evaluation of the related second derivatives allows the prediction of vibrational frequencies if harmonic motion is estimated. More importantly, it allows for the characterization of stationary points. The frequencies are related to the eigenvalues of the Hessian matrix, which contains second derivatives. If the eigenvalues are all positive, then the frequencies are all real and the stationary point is a local minimum. If one eigenvalue is negative (i.e., an imaginary frequency), then the stationary point is a transition structure. If more than one eigenvalue is negative, then the stationary point is a more complex one, and is usually of little interest. When one of these is found, it is necessary to move the search away from it if the experimenter is looking solely for local minima and transition structures.

The total energy is determined by approximate solutions of the time-independent Schrödinger equation, usually with no relativistic terms included, and by making use of the Born–Oppenheimer approximation, which allows for the separation of electronic and nuclear motions, thereby simplifying the Schrödinger equation. This leads to the evaluation of the total energy as a sum of the electronic energy at fixed nuclei positions and the repulsion energy of the nuclei. Notable exceptions are certain approaches called direct quantum chemistry, which treat electrons and nuclei on a common footing. Density functional methods and semi-empirical methods are variants on the major theme. For very large systems, the relative total energies can be compared using molecular mechanics. The ways of determining the total energy to predict molecular structures are:

Ab initio methods

The programs used in computational chemistry are based on many different quantum-chemical methods that solve the molecular Schrödinger equation associated with the molecular Hamiltonian. Methods that do not include any empirical or semi-empirical parameters in their equations – being derived directly from theoretical principles, with no inclusion of experimental data – are called ab initio methods.
This does not imply that the solution is an exact one; they are all approximate quantum mechanical calculations. It means that a particular approximation is rigorously defined on first principles (quantum theory) and then solved within an error margin that is qualitatively known beforehand. If numerical iterative methods have to be employed, the aim is to iterate until full machine accuracy is obtained (the best that is possible with a finite word length on the computer, and within the mathematical and/or physical approximations made).

The simplest type of ab initio electronic structure calculation is the Hartree–Fock (HF) scheme, an extension of molecular orbital theory, in which the correlated electron–electron repulsion is not specifically taken into account; only its average effect is included in the calculation. As the basis set size is increased, the energy and wave function tend towards a limit called the Hartree–Fock limit. Many types of calculations (known as post-Hartree–Fock methods) begin with a Hartree–Fock calculation and subsequently correct for electron–electron repulsion, also referred to as electronic correlation. As these methods are pushed to the limit, they approach the exact solution of the non-relativistic Schrödinger equation. In order to obtain exact agreement with experiment, it is necessary to include relativistic and spin–orbit terms, both of which are only really important for heavy atoms. In all of these approaches, in addition to the choice of method, it is necessary to choose a basis set. This is a set of functions, usually centered on the different atoms in the molecule, which are used to expand the molecular orbitals with the LCAO ansatz. Ab initio methods need to define a level of theory (the method) and a basis set.

The Hartree–Fock wave function is a single configuration or determinant. In some cases, particularly for bond-breaking processes, this is quite inadequate, and several configurations need to be used. Here, the coefficients of the configurations and the coefficients of the basis functions are optimized together. The total molecular energy can be evaluated as a function of the molecular geometry; in other words, the potential energy surface. Such a surface can be used for reaction dynamics. The stationary points of the surface lead to predictions of different isomers and the transition structures for conversion between isomers, but these can be determined without a full knowledge of the complete surface.

A particularly important objective, called computational thermochemistry, is to calculate thermochemical quantities such as the enthalpy of formation to chemical accuracy. Chemical accuracy is the accuracy required to make realistic chemical predictions and is generally considered to be 1 kcal/mol (about 4 kJ/mol). To reach that accuracy in an economic way, it is necessary to use a series of post-Hartree–Fock methods and combine the results. These methods are called quantum chemistry composite methods.

Density functional methods

Density functional theory (DFT) methods are often considered to be ab initio methods for determining the molecular electronic structure, even though many of the most common functionals use parameters derived from empirical data, or from more complex calculations. In DFT, the total energy is expressed in terms of the total one-electron density rather than the wave function. In this type of calculation, there is an approximate Hamiltonian and an approximate expression for the total electron density.
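To make the contrast with wave-function methods concrete, one standard Kohn–Sham decomposition of the DFT total energy can be sketched (schematically; notational conventions vary) as

$$E[\rho] \;=\; T_s[\rho] \;+\; \int v_{\mathrm{ext}}(\mathbf{r})\,\rho(\mathbf{r})\,d\mathbf{r} \;+\; J[\rho] \;+\; E_{xc}[\rho],$$

where $T_s$ is the kinetic energy of a non-interacting reference system, the integral is the interaction with the external (nuclear) potential, $J[\rho]$ is the classical Coulomb (Hartree) energy, and $E_{xc}[\rho]$ is the exchange–correlation functional, the piece that the various approximate functionals actually model.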
DFT methods can be very accurate for little computational cost. Some methods combine the density functional exchange functional with the Hartree–Fock exchange term and are known as hybrid functional methods.

Semi-empirical and empirical methods

Semi-empirical quantum chemistry methods are based on the Hartree–Fock formalism, but make many approximations and obtain some parameters from empirical data. They are very important in computational chemistry for treating large molecules where the full Hartree–Fock method without the approximations is too expensive. The use of empirical parameters appears to allow some inclusion of correlation effects into the methods. Semi-empirical methods follow what are often called empirical methods, where the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel, and for all valence electron systems, the extended Hückel method proposed by Roald Hoffmann.

Molecular mechanics

In many cases, large molecular systems can be modeled successfully while avoiding quantum mechanical calculations entirely. Molecular mechanics simulations, for example, use a single classical expression for the energy of a compound, for instance the harmonic oscillator. All constants appearing in the equations must be obtained beforehand from experimental data or ab initio calculations. The database of compounds used for parameterization is crucial to the success of molecular mechanics calculations (the resulting set of parameters and functions is called the force field). A force field parameterized against a specific class of molecules, for instance proteins, can be expected to have relevance only when describing other molecules of the same class. These methods can be applied to proteins and other large biological molecules, and allow studies of the approach and interaction (docking) of potential drug molecules (e.g. [1] and [2]).

Methods for solids

Computational chemical methods can be applied to solid state physics problems. The electronic structure of a crystal is in general described by a band structure, which defines the energies of electron orbitals for each point in the Brillouin zone. Ab initio and semi-empirical calculations yield orbital energies; therefore, they can be applied to band structure calculations. Since it is time-consuming to calculate the energy for a single molecule, it is even more time-consuming to calculate energies for the entire list of points in the Brillouin zone.

Chemical dynamics

Once the electronic and nuclear variables are separated (within the Born–Oppenheimer representation), in the time-dependent approach the wave packet corresponding to the nuclear degrees of freedom is propagated via the time-evolution operator associated with the time-dependent Schrödinger equation (for the full molecular Hamiltonian). In the complementary energy-dependent approach, the time-independent Schrödinger equation is solved using the scattering theory formalism. The potential representing the interatomic interaction is given by the potential energy surfaces; in general, the potential energy surfaces are coupled via the vibronic coupling terms. A number of methods are available for propagating the wave packet associated with the molecular geometry.

Molecular dynamics

Molecular dynamics (MD) uses Newton's laws of motion to examine the time-dependent behavior of systems, including vibrations or Brownian motion, using a classical mechanical description.
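As a concrete illustration of the classical propagation step at the heart of MD, the following minimal sketch integrates Newton's equations with the velocity-Verlet scheme for a single particle in a harmonic potential. The potential, mass and step size are illustrative choices only, not taken from any particular force field or package:

def velocity_verlet(x, v, force, mass, dt, steps):
    """Propagate one classical degree of freedom with velocity-Verlet."""
    a = force(x) / mass
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt**2   # position update
        a_new = force(x) / mass            # force at the new position
        v = v + 0.5 * (a + a_new) * dt     # velocity update
        a = a_new
    return x, v

# Toy example: harmonic "bond" with spring constant k = 1 and mass m = 1.
k, m = 1.0, 1.0
x, v = velocity_verlet(x=1.0, v=0.0, force=lambda q: -k * q,
                       mass=m, dt=0.01, steps=1000)
print(x, v)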
MD combined with density functional theory leads to the Car–Parrinello method.

Interpreting molecular wave functions

The atoms-in-molecules model of Richard Bader was developed to provide an effective link between the quantum mechanical picture of a molecule, as an electronic wavefunction, and chemically useful older models such as the theory of Lewis pairs and the valence bond model. Bader has demonstrated that these empirically useful models are connected with the topology of the quantum charge density. This method improves on the use of Mulliken population analysis.

Software packages

There are many self-sufficient software packages used by computational chemists. Some include many methods covering a wide range, while others concentrate on a very specific range or even a single method. Details of most of them can be found in the references below.

Other references

• Christopher J. Cramer, Essentials of Computational Chemistry, John Wiley & Sons (2002).
• T. Clark, A Handbook of Computational Chemistry, Wiley, New York (1985).
• R. Dronskowski, Computational Chemistry of Solid State Materials, Wiley-VCH (2005).
• F. Jensen, Introduction to Computational Chemistry, John Wiley & Sons (1999).
• D. Rogers, Computational Chemistry Using the PC, 3rd edition, John Wiley & Sons (2003).
• Paul von Ragué Schleyer (editor-in-chief), Encyclopedia of Computational Chemistry, Wiley (1998), ISBN 0-471-96588-X.
• A. Szabo and N. S. Ostlund, Modern Quantum Chemistry, McGraw-Hill (1982).
• D. Young, Computational Chemistry: A Practical Guide for Applying Techniques to Real World Problems, John Wiley & Sons (2001).
• David Young's Introduction to Computational Chemistry.
• K. I. Ramachandran, G. Deepa and K. Namboori, Computational Chemistry and Molecular Modeling: Principles and Applications, Springer-Verlag, ISBN 978-3-540-77302-3.
{"url":"http://www.thefullwiki.org/Computational_chemistry","timestamp":"2014-04-17T15:33:53Z","content_type":null,"content_length":"174594","record_id":"<urn:uuid:8822dc75-ddda-4a48-9af2-cc556a812fab>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
Proposition 96 If an area is contained by a rational straight line and a sixth apotome, then the side of the area is a straight line which produces with a medial area a medial whole. Let the area AB be contained by the rational straight line AC and the sixth apotome AD. I say that the side of the area AB is a straight line which produces with a medial area a medial whole. Let DG be the annex to AD. Then AG and GD are rational straight lines commensurable in square only, neither of them is commensurable in length with the rational straight line AC set out, and the square on the whole AG is greater than the square on the annex DG by the square on a straight line incommensurable in length with AG. Since the square on AG is greater than the square on GD by the square on a straight line incommensurable in length with AG, therefore, if there is applied to AG a parallelogram equal to the fourth part of the square on DG and deficient by a square figure, then it divides it into incommensurable parts. Bisect DG at E, apply to AG a parallelogram equal to the square on EG and deficient by a square figure, and let it be the rectangle AF by FG. Then AF is incommensurable in length with FG. But AF is to FG as AI is to FK, therefore AI is incommensurable with FK. Since AG and AC are rational straight lines commensurable in square only, therefore AK is medial. Again, since AC and DG are rational straight lines and incommensurable in length, DK is also medial. Now, since AG and GD are commensurable in square only, therefore AG is incommensurable in length with GD. But AG is to GD as AK is to KD, therefore AK is incommensurable with KD. Now construct the square LM equal to AI, and subtract NO, equal to FK, about the same angle. Then the squares LM and NO are about the same diameter. Let PR be their diameter, and draw the figure. Then in manner similar to the above we can prove that LN is the side of the area AB. I say that LN is a straight line which produces with a medial area a medial whole. Since AK was proved medial and equals the sum of the squares on LP and PN, therefore the sum of the squares on LP and PN is medial. Again, since DK was proved medial and equals twice the rectangle LP by PN, therefore twice the rectangle LP by PN is also medial. Since AK was proved incommensurable with DK, therefore the sum of the squares on LP and PN is also incommensurable with twice the rectangle LP by PN. And, since AI is incommensurable with FK, therefore the square on LP is also incommensurable with the square on PN. Therefore LP and PN are straight lines incommensurable in square which make the sum of the squares on them medial, twice the rectangle contained by them medial, and further, the sum of the squares on them incommensurable with twice the rectangle contained by them. Therefore LN is the irrational straight line called that which produces with a medial area a medial whole, and it is the side of the area AB. Therefore the side of the area is a straight line which produces with a medial area a medial whole. Therefore, if an area is contained by a rational straight line and a sixth apotome, then the side of the area is a straight line which produces with a medial area a medial whole.
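In modern notation, the endgame of the proof can be glossed as follows (a summary of the statement above, not part of Euclid's text). Writing $u = LP$ and $v = PN$, the side of the area $AB$ is exhibited as $u - v$, where

$$u^2 \not\sim v^2, \qquad u^2 + v^2 \ \text{is medial}, \qquad 2uv \ \text{is medial}, \qquad (u^2 + v^2) \not\sim 2uv,$$

with $\not\sim$ denoting incommensurability. These four conditions are exactly what Book X requires of the irrational line called "that which produces with a medial area a medial whole."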
{"url":"http://aleph0.clarku.edu/~djoyce/java/elements/bookX/propX96.html","timestamp":"2014-04-19T22:05:39Z","content_type":null,"content_length":"7902","record_id":"<urn:uuid:7da95db4-7731-4b6d-bc4c-45565c6803a7>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
Analysis of Thermal Stability in a Convecting and Radiating Two-Step Reactive Slab

Advances in Mechanical Engineering, Volume 2013 (2013), Article ID 294961, 9 pages. Research Article.

Faculty of Military Science, Stellenbosch University, Private Bag X2, Saldanha 7395, South Africa

Received 27 February 2013; Revised 20 July 2013; Accepted 26 September 2013

Academic Editor: Akhilendra Singh

Copyright © 2013 O. D. Makinde and M. S. Tshehla. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper investigates the combined effects of convective and radiative heat loss on the thermal stability of a rectangular slab of combustible materials with internal heat generation due to a two-step exothermic chemical reaction, taking the diffusion of the reactant into account and assuming a variable (temperature-dependent) preexponential factor. The nonlinear differential equation governing the transient reaction-diffusion problem is obtained and tackled numerically using a semidiscretization finite difference technique. A special type of Hermite-Padé approximants, coupled with a perturbation technique, is employed to analyze the effects of the various embedded thermophysical parameters on the steady state problem. Important properties of the temperature field, including thermal stability conditions, are presented graphically and discussed quantitatively.

1. Introduction

Analysis of thermal stability in a reactive slab of combustible materials undergoing an exothermic chemical reaction plays a significant role in improving the design and operation of many industrial and engineering devices, and finds applications in power production, jet and rocket propulsion, fire prevention and safety, pollution control, material processing industries, and so on [1]. For instance, solid propellants used in rocket vehicles are capable of experiencing exothermic reactions without the addition of any other reactants. The theory of thermal stability of reactive materials has long been a fundamental topic in the field of combustion [2]. It is directly related to the determination of critical regimes separating the regions of explosive and nonexplosive ways of chemical reactions (Frank-Kamenetskii [3]). The chemical reaction may be modelled by considering either single-step or multistep reaction kinetics. For instance, the catalytic converter used in an automobile's exhaust system provides a platform for a two-step exothermic chemical reaction in which unburned hydrocarbons combust completely. This helps to reduce the emissions of toxic car pollutants such as carbon monoxide (CO) into the environment. The main chemical reaction schemes in an autocatalytic converter are given in [4, 5]. Similarly, the combustion taking place within a k-fluid is treated as a two-step irreversible chemical reaction for methane oxidation, as described in [6]. The vast majority of studies on the thermal stability of chemically reactive materials have been concerned with homogeneous boundary conditions, ranging from the infinite Biot number case [7, 8] (Frank-Kamenetskii conditions) to a range of Biot numbers [9, 10] (Semenov conditions).
Previous investigations have included a variety of geometries and have been directed towards obtaining critical conditions for thermal ignition to occur, in the form of a critical value for the Frank-Kamenetskii parameter [11]. Mathematical models of problems involving exothermic reactions in a reactive slab may be extremely stiff owing to the temperature dependence of the chemical reactions. Moreover, the differential equation for the temperature distribution in a convective-radiative reactive slab with a temperature-dependent preexponential factor is highly nonlinear and does not admit an exact analytical solution. Consequently, the equation has been solved either numerically or using a variety of approximate semianalytical methods. The preceding literature clearly shows that work on the reacting slab has been confined to convective surface heat loss. No attempt has been made to study the combined effects of convective and radiative heat losses at the slab surface, despite their relevance in various technological applications such as aerothermodynamic heating of spaceships and satellites, nuclear reactor thermohydraulics, and glass manufacturing. Thermal radiation is characteristic of any material system at temperatures above absolute zero and becomes an important form of heat transfer in devices that operate at high temperatures. Radiation is the dominant form of heat transfer in applications such as furnaces, boilers, and other combustion systems.

The present investigation aims to extend the recent work of Makinde [12] to include the combined effects of convective and radiative heat losses on a slab of combustible materials with internal heat generation due to a two-step exothermic reaction. Although a combustion process consists of a series of chemical reactions, the choice of a two-step reaction process in this study enhances understanding of the thermal effects of the chemical kinetics of oxidation and reduction in exothermic reactions [6] in particular, as well as of multistep combustion processes in general. Both the transient and the steady state problems are tackled numerically, using a semidiscretization finite difference method [13] and a special type of Hermite-Padé approximants coupled with a perturbation technique [8, 9, 14]. The critical regime separating the regions of explosive and nonexplosive two-step exothermic chemical reactions is determined. It is hoped that the results obtained will not only provide useful information for applications but also serve as a complement to previous studies.

2. Mathematical Model

The dynamical thermal behaviour of a rectangular slab of combustible materials with internal heat generation due to a two-step exothermic chemical reaction, taking into account the diffusion of the reactant and the temperature-dependent (variable) preexponential factor, is considered. The geometry of the problem is depicted in Figure 1. It is assumed that the slab surface is subjected to both convective and radiative heat losses to the environment.
The one-dimensional heat balance equation in the original variables, together with the initial and boundary conditions, can be written in the form given in [1, 8–12]. The quantities appearing in the model are the absolute temperature, the initial temperature, the ambient temperature, time, the convective heat transfer coefficient, the thermal conductivity of the material, the slab surface emissivity, the Stefan-Boltzmann constant, the first-step and second-step heats of reaction, the first-step and second-step reaction rate constants, the first-step and second-step activation energies, the density, the universal gas constant, the initial concentrations of the first-step and second-step reactant species, Planck's number, Boltzmann's constant, the vibration frequency, the slab half-width, the distance measured in the direction normal to the plane, the specific heat at constant pressure, and the numerical exponent m, where m = -2, 0, 0.5 corresponds to sensitized, Arrhenius, and bimolecular kinetics, respectively [2, 3, 8]. Introducing dimensionless variables into (3), we obtain the dimensionless governing equation together with its initial and boundary conditions, (5)–(7), whose parameters are, respectively, the Frank-Kamenetskii parameter, the activation energy parameter, the two-step exothermic reaction parameter, the activation energy ratio parameter, the initial slab temperature parameter, the thermal radiation parameter, and the Biot numbers for the slab lower and upper surfaces. Initially, the slab temperature and that of the ambient are assumed to be the same. Thereafter, the exothermic reaction within the slab occurs, and the slab temperature increases above that of the ambient. In the following section, (5)–(7) are solved numerically using a semidiscretization finite difference method [13].

3. Numerical Procedure

Here a semidiscretization finite difference technique [13] is employed to tackle the model nonlinear initial boundary value problem (5)–(7). The discretization is based on a linear Cartesian mesh and a uniform grid on which finite differences are taken (see Figure 2). Firstly, a partition of the spatial interval is introduced: the interval is divided into equal parts, which fixes the grid size and the grid points. The first and second spatial derivatives in (5)–(7) are approximated with second-order central differences. Approximating the temperature at each grid point then yields the semidiscrete system (8) for the problem, with the corresponding initial conditions; the equations for the first and last grid points are modified to incorporate the boundary conditions. In (8) there is only one independent variable, so the equations are ordinary differential equations. Since they are first order and the initial conditions for all variables are known, the problem is an initial value problem. The MATLAB program ode45 is employed to integrate the resulting sets of differential equations using a fourth-order Runge-Kutta integration scheme.

4. Steady State Analysis

The thermal evolution of the convecting and radiating reactive slab attains a steady state for a given set of parameter values, and (5)–(7) then reduce to the steady boundary value problem (11) and (12). This is a nonlinear boundary value problem, and its nonlinear nature precludes an exact solution.
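For orientation, in the one-step limit (vanishing two-step reaction parameter) the steady problem reduces to a classical Frank-Kamenetskii form; a sketch in the standard notation of this literature (not the paper's exact two-step source term) is

$$\frac{d^{2}\theta}{dy^{2}} + \lambda\,(1+\varepsilon\theta)^{m}\exp\!\left(\frac{\theta}{1+\varepsilon\theta}\right) = 0,$$

where $\theta$ is the dimensionless temperature, $y$ the dimensionless transverse coordinate, $\lambda$ the Frank-Kamenetskii parameter, $\varepsilon$ the activation energy parameter, and $m$ the kinetics exponent. The two-step model augments this Arrhenius source with a second exponential term weighted by the two-step reaction parameter and the activation energy ratio.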
Using a regular perturbation technique, it is convenient to form a power series expansion in the Frank-Kamenetskii parameter. Substituting the solution series into (11) and (12) and collecting the coefficients of like powers of the Frank-Kamenetskii parameter, we obtained and solved the equations for the coefficients of the solution series iteratively. The solution for the temperature field is given in the appendix. Using a computer symbolic algebra package (MAPLE), we obtained the first twenty terms of the solution series, as well as the series for the slab surface heat transfer rate. We are aware that this power series solution is valid only for very small parameter values. In order to extend the usability of the solution series beyond small parameter values, a special type of Hermite-Padé approximant based on a series summation and improvement technique is employed, as illustrated in the following section (Hunter and Baker [14]; Makinde [9]).

5. Thermal Stability Analysis

The thermal stability properties and the onset of thermal runaway in the convecting and radiating reactive slab under consideration are characterized by the estimation of the critical regimes separating the regions of explosive and nonexplosive chemical reactions. In order to achieve this goal, we employ a special type of Hermite-Padé approximation technique [8, 9, 14]. Suppose that a partial sum of the solution series is given; it is important to note that this partial sum can be used to approximate any output of the solution of the problem under investigation (e.g., the series for the surface heat flux), since everything can be Taylor-expanded in the given small parameter. Assuming that the partial sum is a local representation of an algebraic function of the expansion parameter, as is natural in the context of nonlinear problems, we construct a polynomial expression in the partial sum and the parameter, of suitable degree, whose composition with the partial sum vanishes to the order of the truncation error. This requirement reduces the problem to a system of linear equations for the unknown coefficients of the polynomial. The entries of the underlying matrix depend only on the given series coefficients, and the degree is chosen so that the number of equations equals the number of unknowns. The resulting polynomial is a special type of Hermite-Padé approximant and is then investigated for bifurcation and thermal criticality conditions using the Newton diagram (Vainberg and Trenogin [15]).

6. Results and Discussion

We have assigned numerical values to the parameters encountered in the problem in order to get a clear insight into the thermal development of the system. At the initial stage, a uniform slab temperature is assumed. It is very important to note that a vanishing two-step reaction parameter corresponds to the one-step chemical reaction case; an increase in its value signifies an increase in the two-step chemical reaction activity in the system. The thermal stability procedure above is applied to the first 19 terms of the solution series, and the results are shown in Tables 1 and 2. Table 1 shows the rapid convergence of the dominant singularity, that is, the value of thermal criticality in the system, together with its corresponding slab surface heat transfer rate, as the number of series coefficients utilized in the approximants increases. It is noteworthy that the value of the critical regime for thermal stability in the one-step limit is in perfect agreement with the one reported in Makinde [8] for a one-step exothermic chemical reaction scenario. Table 2 illustrates the variation in the values of the thermal criticality conditions for different combinations of thermophysical parameters.
The magnitude of thermal criticality decreases with increasing values of the two-step reaction rate parameter and the activation energy ratio parameter. This implies that thermal instability, and the ensuing explosion, are enhanced by the two-step exothermic reaction and by a higher second-step activation energy. At very large activation energy, thermal explosion criticality is independent of the type of reaction, as shown by the governing equation (5). For moderate values of the activation energy, the criticality varies from one type of reaction to another. It is interesting to note from Table 2 that explosion in a bimolecular reaction occurs faster than in Arrhenius and sensitized reactions; this can be attributed to the lower thermal criticality value of the bimolecular reaction. Moreover, it is interesting to note that the magnitude of thermal criticality increases with an increase in convective and radiative heat loss, as well as with a decrease in the activation energy, thus delaying the development of thermal runaway and enhancing the thermal stability of the system, as expected.

6.1. Effects of Parameter Variation on Temperature Profiles

Figures 3(a), 3(b), and 4 illustrate the evolution of the temperature field in the reactive slab. For fixed values of the various thermophysical parameters, the slab temperature increases rapidly with time until it attains its steady state value, as shown in Figure 3(b). For the given set of parameter values, a steady state temperature is reached. Generally, the temperature is maximum along the slab centerline and minimum at the surface, due to convective and radiative heat loss, thus satisfying the prescribed boundary conditions. In Figures 5 and 6, it is observed that the slab temperature generally increases with increasing values of the Frank-Kamenetskii parameter and the two-step reaction parameter. This can be attributed to an increase in the rate of internal heat generation due to the chemical kinetics in the system. Figures 7 and 8 illustrate the effects of heat loss on the reactive slab due to radiation and asymmetric convective cooling. As expected, a general decrease in the slab temperature is observed with increasing values of these heat-loss parameters. A similar trend is observed in Figure 9 with a decrease in the slab activation energy: as the activation energy parameter increases, the slab temperature decreases. This can be attributed to an increase in the slab's thermal resistance. A power-law nonlinear regression expression for the slab temperature, particularly along the slab centreline, as a function of the parameter variation was obtained from the numerical data. It confirms that the slab temperature decreases with an increase in the activation energy parameter (i.e., a decrease in the slab activation energy) and with an increase in both radiative and convective heat losses, while the temperature increases with a rise in the exothermic reaction rate and the two-step reaction parameter. Figure 10 shows that the slab temperature is highest during a bimolecular reaction and lowest for a sensitized reaction, confirming the results highlighted in Table 2. In Figure 11, it is observed that the slab temperature increases with an increase in the two-step reaction activation energy ratio. Interestingly, an increase in this ratio indicates that the activation energy of the second-step reaction is higher than that of the first-step reaction, leading to an increase in the internal heat generated within the slab.
6.2. Effects of Parameter Variation on Slab Thermal Stability

Figures 12–16 illustrate the effects of parameter variation on the thermal stability of the slab under a two-step reaction scenario. They show the variation of the slab surface heat transfer rate with the Frank-Kamenetskii parameter. It is well known that thermal instability occurs whenever the rate of internal heat generation due to the exothermic reaction within a system is higher than the rate at which the system loses heat to the ambient. This invariably leads to an accumulation of heat within the system, and consequently thermal runaway develops. In particular, for every fixed set of thermophysical parameters at steady state, there is a critical value of the Frank-Kamenetskii parameter (see Table 2) below which the slab is thermally stable; above this critical value the system becomes thermally unstable, leading to thermal runaway. In Figures 12 and 13, it is observed that the thermal stability interval of the Frank-Kamenetskii parameter increases with an increase in convective and radiative heat loss to the ambient, as expected, since this reduces the accumulation of heat within the reacting slab. Figure 14 shows that the thermal stability interval decreases with an increase in the two-step reaction parameter, leading to an earlier occurrence of thermal runaway in the system. Interestingly, this shows clearly that a multistep exothermic reaction is more thermally unstable than a one-step reaction process. As the activation energy parameter increases, owing to a decrease in the slab activation energy, the thermal stability interval increases, as shown in Figure 15. This implies that less volatile reactive materials are more thermally stable than highly volatile reactive materials. In Figure 16, it is observed that thermal instability in the slab occurs faster during a bimolecular reaction than during Arrhenius and sensitized reactions. It is noteworthy that, although the chemical reaction is exothermic, the early development of thermal runaway in the system depends on the type of reaction process, through the temperature-dependent preexponential factor. Moreover, the thermal criticality results depicted in Figures 12–16 are in perfect agreement with those shown in Table 2.

7. Conclusion

The combined effects of asymmetric convective and radiative heat loss on a two-step exothermic reactive slab, taking the diffusion of the reactant into account and assuming a temperature-dependent (variable) preexponential factor, have been investigated. Both the transient and the steady state reactive slab scenarios are examined. The nonlinear governing equation is solved numerically using a semidiscretization finite difference scheme and a perturbation technique coupled with a special type of Hermite-Padé approximants. Our results can be summarized as follows. (i) The slab temperature profile increases with increases in the Frank-Kamenetskii parameter, the two-step reaction parameter, the activation energy ratio, and the kinetics exponent, while it decreases with increases in the Biot numbers, the radiation parameter, and the activation energy parameter. (ii) A critical value of the Frank-Kamenetskii parameter exists below which the slab is thermally stable and above which it is thermally unstable, leading to thermal runaway. (iii) Increases in the two-step reaction parameter, the activation energy ratio, and the kinetics exponent hasten the onset of thermal instability, while increases in the Biot numbers, the radiation parameter, and the activation energy parameter enhance slab thermal stability.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors would like to thank the National Research Foundation (NRF) of South Africa for their generous financial support.

References
1. J. Bebernes and D. Eberly, Mathematical Problems from Combustion Theory, Springer, New York, NY, USA, 1989.
2. F. A. Williams, Combustion Theory, Benjamin/Cummings Publishing, Menlo Park, Calif, USA, 2nd edition, 1985.
3. D. A. Frank-Kamenetskii, Diffusion and Heat Transfer in Chemical Kinetics, Plenum Press, New York, NY, USA, 1969.
4. O. D. Makinde, "Thermal stability of a reactive viscous flow through a porous-saturated channel with convective boundary conditions," Applied Thermal Engineering, vol. 29, no. 8-9, pp. 1773–1777, 2009.
5. F. S. Dainton, Chain Reaction: An Introduction, John Wiley & Sons, New York, NY, USA, 1960.
6. Z. G. Szabo, Advances in Kinetics of Homogeneous Gas Reactions, Methuen and Company, Great Britain, UK, 1964.
7. G. I. Barenblatt, J. B. Bell, and W. Y. Crutchfield, "The thermal explosion revisited," Proceedings of the National Academy of Sciences of the United States of America, vol. 95, no. 23, pp. 13384–13386, 1998.
8. O. D. Makinde, "Exothermic explosions in a slab: a case study of series summation technique," International Communications in Heat and Mass Transfer, vol. 31, no. 8, pp. 1227–1231, 2004.
9. O. D. Makinde, "Hermite-Padé approach to thermal stability of reacting masses in a slab with asymmetric convective cooling," Journal of the Franklin Institute, vol. 349, no. 3, pp. 957–965, 2012.
10. N. N. Semenov, Chemical Kinetics and Chain Reactions, The Clarendon Press, Oxford, UK, 1935.
11. E. Balakrishnan, A. Swift, and G. C. Wake, "Critical values for some non-class A geometries in thermal ignition theory," Mathematical and Computer Modelling, vol. 24, no. 8, pp. 1–10, 1996.
12. O. D. Makinde, "On the thermal decomposition of reactive materials of variable thermal conductivity and heat loss characteristics in a long pipe," Journal of Energetic Materials, vol. 30, no. 4, pp. 283–298, 2012.
13. K. W. Morton and D. F. Mayers, Numerical Solution of Partial Differential Equations: An Introduction, Cambridge University Press, 2005.
14. D. L. Hunter and G. A. Baker Jr., "Methods of series analysis. III. Integral approximant methods," Physical Review B, vol. 19, no. 7, pp. 3808–3821, 1979.
15. M. M. Vainberg and V. A. Trenogin, Theory of Branching of Solutions of Nonlinear Equations, Wolters-Noordhoff, Leyden, The Netherlands, 1974.
{"url":"http://www.hindawi.com/journals/ame/2013/294961/","timestamp":"2014-04-19T00:53:13Z","content_type":null,"content_length":"245028","record_id":"<urn:uuid:dbf806f0-5663-41e0-aff8-05e984d7b1e6>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
Lucav - Viewing Profile: Reputation

I agree. I've been a member of this forum community for 9 years, dating back to Guild Wars (1) Guru. I'm one of the top 10 most active posters of all time. I've basically stopped posting because I feel like my opinions have been drowned out. There was just a huge patch and there are basically no threads discussing it. There is not a single thread actually discussing the new levels of SAB. Just a thread bitching about the rewards for it. The forum has turned into a group of about 10 people who complain about everything and like each other's posts. There is no more logical discussion, or even discussion about the game at all. There is just paranoia and conspiracy theories. So what's the point of me even bothering to discuss things anymore? The negativity is actually so bad it has discouraged people like me from even bothering to post. That is why this forum is suffering a slow death.
{"url":"http://www.guildwars2guru.com/user/17680-lucav/page__tab__reputation","timestamp":"2014-04-16T04:18:14Z","content_type":null,"content_length":"91088","record_id":"<urn:uuid:afaa7545-c297-439a-968e-ae1021fffb17>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Holomorphic on C

Suppose [itex]f : \mathbb{C}\to \mathbb{C}[/itex] is continuous everywhere, and is holomorphic at every point except possibly the points in the interval [itex][2, 5][/itex] on the real axis. Prove that f must be holomorphic at every point of C.

How can I go from f being holomorphic everywhere except on that interval to showing it is holomorphic on that interval? I am assuming it has to be due to continuity. But there are continuous functions that aren't differentiable everywhere.
{"url":"http://www.physicsforums.com/showpost.php?p=3775607&postcount=1","timestamp":"2014-04-18T00:29:41Z","content_type":null,"content_length":"8757","record_id":"<urn:uuid:298a41c4-c63e-44fa-872b-83c200e5d915>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
need help with a java assignment.

For my java assignment I am supposed to modify some code to add other functions to it. I need to add sin, cos, tan, and x^y buttons to the calculator and make them work. The problem is I get this error (I know I probably got ^ wrong):

Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 23
        at Calculator.<init>(Calculator.java:124)
        at Calculator.main(Calculator.java:586)

original code: #1798262 - Pastie
where I am at so far: #1799334 - Pastie

As I'm still a beginner java programmer I probably got other things wrong in the code, so I'll appreciate any help I can get.

You're getting an index out of range error. You're trying to access an array slot that doesn't exist. Your program is hundreds of lines long so I'm not going to go through it all to find where, but you should investigate your arrays. I have a feeling it's with your jbnButtons array.
☆ Use [code][/code] tags when posting code. That way people don't want to stab their eyes out when trying to help you. ☆ +Rep people for helpful posts.

Sweet, fixed that! It was on line 97; thanks for pointing that one out. Now I see sin, cos, tan, and x^y aren't functioning the way they should. Any idea where in the code I should put that?
Last edited by Z-slasher; 04-16-2011 at 10:48 PM.

Please re-read this comment: Consider creating and posting an SSCCE to help us be better able to help you. You can find out what this entails by checking my link on this subject below.

My apologies, let me break the problem down. There are two areas I'm not so sure about. I'll start with the first switch statement in the program, specifically cases 20 to 23, because those are the ones to look at. (Find this on lines 240-368.)

for (int i=0; i<jbnButtons.length; i++)
    if (e.getSource() == jbnButtons[i])
        switch (i) {
            // ...
            case 20: // sin
            case 21: // cos
            case 22: // tan
            case 23: // x^y
            // ...
        }

and I modified the if statements. (This is found on lines 538 to 568 in the pastie.)

double processLastOperator() throws DivideByZeroException {
    double result = 0;
    double numberInDisplay = getNumberInDisplay();
    if (lastOperator.equals("/")) {
        if (numberInDisplay == 0)
            throw (new DivideByZeroException());
        result = lastNumber / numberInDisplay;
    }
    if (lastOperator.equals("*"))
        result = lastNumber * numberInDisplay;
    if (lastOperator.equals("-"))
        result = lastNumber - numberInDisplay;
    if (lastOperator.equals("+"))
        result = lastNumber + numberInDisplay;
    if (lastOperator.equals("sin"))
        result = Math.sin(numberInDisplay);
    if (lastOperator.equals("cos"))
        result = Math.cos(numberInDisplay);
    if (lastOperator.equals("tan"))
        result = Math.tan(numberInDisplay);
    return result;
}

These are the places where the +, -, * and divide operators work. What could I be doing wrong for sin, cos, tan, and x^y?

For the trig calculations: Are your numbers in radians or degrees?

It is radians. I actually figured this one out by looking at sqrt, since that uses the Math logic. I'll paste what I did with C-style indentation so it is easy to look at. Changing the switch statements to this gave me the correct number in radians. Here is the code for cases 20 to 22:

result = Math.sin(getNumberInDisplay());

I did that for cos and tan as well. Now to figure out x^y. How do I do this one using the Math.pow logic?
result = Math.pow(getNumberInDisplay(), getNumberInDisplay());

This gives x^x without the = sign input. I need x^y.
Last edited by Z-slasher; 04-17-2011 at 01:02 AM.
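For what it's worth, here is a minimal, self-contained sketch of the usual two-operand flow. The names lastNumber, lastOperator, and processLastOperator mirror the calculator code quoted above, but this class is only an illustration, not the assignment's actual code:

public class PowDemo {
    private static double lastNumber;
    private static String lastOperator;

    public static void main(String[] args) {
        // user types 2, presses the x^y button, types 10, presses =
        pressOperator(2.0, "x^y");
        System.out.println(processLastOperator(10.0)); // prints 1024.0
    }

    // the x^y button should behave like + or *: remember the base
    // and the pending operator, then wait for the second operand
    static void pressOperator(double displayed, String op) {
        lastNumber = displayed;
        lastOperator = op;
    }

    // only when = supplies the second operand do we call Math.pow
    static double processLastOperator(double numberInDisplay) {
        double result = 0;
        if (lastOperator.equals("x^y"))
            result = Math.pow(lastNumber, numberInDisplay);
        return result;
    }
}

In other words, unlike sin/cos/tan (which need only the number in the display), x^y is a binary operator: store the displayed value when the button is pressed, and compute Math.pow(lastNumber, numberInDisplay) inside processLastOperator() when = is pressed.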
{"url":"http://www.java-forums.org/new-java/42522-need-help-java-assignment.html","timestamp":"2014-04-19T08:04:59Z","content_type":null,"content_length":"94304","record_id":"<urn:uuid:9a6c03a8-ec91-4da3-8d89-e2d229a47b27>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
Grayson, GA Math Tutor

Find a Grayson, GA Math Tutor

...Then math can truly be elementary. I have received my B.S. in Elementary Education with a GPA of 3.27. I have also been educating students for over twenty years.
26 Subjects: including algebra 2, phonics, study skills, special needs

...I have helped students prepare for both the ACT and the SAT with great success. I have a degree in genetics, and have taught genetics as a teacher in 9th grade biology. Also I was an instructor at Emory University in an introductory genetics course.
15 Subjects: including algebra 1, algebra 2, biology, chemistry

...My score for the GRE under the new scale was 165 / 163 / 4.0 (V/Q/AW). I have also tutored the test for about 6 months. I have tutored ACT math questions to high school aged students for 3 years. I received vocal training at Emory University as part of the voice major program.
30 Subjects: including algebra 2, precalculus, photography, algebra 1

...My husband and I attend an American Sign Language congregation now and we're enjoying teaching the language to our young children. During my career as an accountant, I worked primarily with general ledger duties such as income statements and bank reconciliations, but I also did intensive payroll...
30 Subjects: including algebra 1, algebra 2, reading, SAT math

...If you need to know more, just let me know! I look forward to helping you! All the best, Aris

One of the most difficult classes, calculus can be a killer.
20 Subjects: including prealgebra, MATLAB, algebra 1, algebra 2
{"url":"http://www.purplemath.com/grayson_ga_math_tutors.php","timestamp":"2014-04-17T16:19:40Z","content_type":null,"content_length":"23390","record_id":"<urn:uuid:04d5cb04-3a38-4449-b039-c704f68c13a4>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
D. J. Bernstein
Fighting patents

US patent 7774607, Ferguson, signature verification

Text of the patent

No reported enforcement attempts. Filed 2006.

Basic claims:

• 1. One or more computer-readable media comprising computer-executable instructions for verifying a signature, the computer-executable instructions directed to steps comprising: receiving a signed message comprising message data and the signature, the signed message having been signed using a private key comprising a modulo value, the private key being paired with a public key comprising a public exponent and the modulo value; receiving a pre-computed value corresponding to a quotient of the signature raised by the public exponent then divided by the modulo value; and using the pre-computed value to verify the signature.
• 8. One or more computer-readable media comprising computer-executable instructions for aiding in the verification of a signature, the computer-executable instructions directed to steps comprising: transmitting a signed message comprising message data and the signature, the signed message having been signed using a private key comprising a modulo value, the private key being paired with a public key comprising a public exponent and the modulo value; generating a pre-computed value corresponding to a quotient of the signature raised by the public exponent then divided by the modulo value; and transmitting the pre-computed value.
• 14. A method of verifying a signature comprising: receiving a signed message comprising message data and the signature, the signed message having been signed using a private key comprising a modulo value, the private key being paired with a public key comprising a public exponent and the modulo value; receiving a pre-computed value corresponding to a quotient of the signature raised by the public exponent then divided by the modulo value; and using the pre-computed value to verify the signature.

The other claims include various trivial data-flow limitations (e.g., were the signature and quotient generated on the same computer, or on different computers?), and a few claims limited to a specific exponent.

Prior art:

• 1997.03.11, Bernstein, The world's fastest digital signature system: "Modification: Include (s^2 - h)/n in the signature. ... Gamblers may prefer to select one or two small random primes, then check s^2 = nk + h modulo those primes; the chance of a bad signature slipping through is very small."
• 2000.08.09, Bernstein, A secure public-key signature system with extremely fast verification: "In March 1997 on sci.crypt I suggested providing t = (s^2-fh)/pq as part of the signature. This makes verification much easier with no effect on security. The new signature equation s^2 = tn+fh is a ring equation; testing a ring equation by mapping it to a random quotient ring is a standard technique, as is proving a ring equation by mapping it to enough quotient rings." At another point: "A receiver can discard the extra information to save space, and regenerate the extra information later."
• 2000.08.18, Bernstein, Protecting communications against forgery (video also published by MSRI): "Modify signatures to save time: ... Verifier computes s^2-fh-tn modulo a secret 40-digit prime."
• 2000.10.20, Bernstein, Design and implementation of a public-key signature system (video also published by MSRI): "s^2 mod n = fA(r,m): The Rabin-Williams system. Unbroken. s^2 - tn = fA(r,m): The RWB system. Unbroken. ... Can compute s^2-tn-fh, check if result is 0.
Faster: Reduce s^2-tn-fh modulo a secret prime l with 2^114<l<2^115, l mod 5 in {2,3}; check if result is 0. Chance <2^-100 of error for uniform random l."
• 2002.06.24, Wagner, Re: Shortcut digital signature verification failure: "A signature on message m is a tuple (h,s,k) such that s^3 = kn + h, h = H(m), and 0 <= h,s,k < n."
• 2003.11.08, Bernstein, More news from the Rabin-Williams front: "Signature: (e,f,r,s) such that s^2 = blah (mod pq). Expanded: (e,f,r,s,t) such that s^2-blah-pqt = 0. Fast randomized verification: Check ((s mod n)^2 - (blah mod n) - (pq mod n)(t mod n)) mod n = 0 for secret random 100-bit prime n."
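To make the verification trick concrete, here is a small Python sketch of the idea common to these references: publish t = (s^e - h)/n alongside the signature, then check the integer identity s^e - h - t*n = 0 by reducing it modulo a (secretly chosen) small prime. The function names and the toy parameters are illustrative only; nothing here is taken from the patent's text:

def make_hint(h, s, n, e=2):
    """Publish t = (s**e - h) // n alongside the signature s."""
    assert (s**e - h) % n == 0, "not a valid signature"
    return (s**e - h) // n

def fast_verify(h, s, t, n, l, e=2):
    """Check s^e - h - t*n == 0 by reducing modulo a prime l.
    In practice l would be a secret random prime (e.g. ~100 bits);
    a forgery slips through only if l happens to divide the
    nonzero integer s^e - h - t*n."""
    return (pow(s, e, l) - h - t * n) % l == 0

# Toy Rabin-style example: n = 7*11, s = 10, h = s^2 mod n = 23.
n, s, h = 77, 10, 23
t = make_hint(h, s, n)                   # t = 1, since 100 - 23 = 77
print(fast_verify(h, s, t, n, l=1009))   # True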
{"url":"http://cr.yp.to/patents/us/7774607.html","timestamp":"2014-04-16T19:08:52Z","content_type":null,"content_length":"4892","record_id":"<urn:uuid:713cd6a0-680a-4ac2-8ca1-b470094e3b6a>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
Martins Add, MD SAT Math Tutor

Find a Martins Add, MD SAT Math Tutor

...I believe that working with students 1-on-1 and in small group settings can allow students to get their questions asked and not feel intimidated or worry about the pace of the class. I truly believe that my students' success is my success. I invest in each and every one of my students and try to be flexible, accommodating and available.
24 Subjects: including SAT math, reading, geometry, algebra 1

...Are you worried about how this will affect your chances of graduating or getting into college? Besides being necessary to graduate from high school, good math and science skills can help you expand your job opportunities once you enter the job market. I can assess your current level of understa...
31 Subjects: including SAT math, chemistry, calculus, geometry

...I was always a top math student in high school and college, and I am able to patiently show students more than one way to solve most problems. I get along well with younger people and am able to connect in order to explain math, which I know many students do not like. I am also effective with adult students who I know can feel awkward about material they have not seen in years.
28 Subjects: including SAT math, chemistry, calculus, physics

...I have taken various cooking classes for pleasure from high school all the way through graduate school. Recently, I have started taking classes from Sur La Table, a gourmet cooking class which makes classic recipes attainable at home. I am energetic, enthusiastic, and love food.
27 Subjects: including SAT math, reading, writing, English

...A lifelong learner with a Master's Degree in Education, I enjoy finding new ways to make challenging material make sense to my students. Having homeschooled my own children, and tutored others, I find particular delight in working with students individually. Because learning works best when the...
17 Subjects: including SAT math, reading, geometry, ESL/ESOL
{"url":"http://www.purplemath.com/Martins_Add_MD_SAT_Math_tutors.php","timestamp":"2014-04-17T13:19:56Z","content_type":null,"content_length":"24504","record_id":"<urn:uuid:034a3794-af04-41cd-a8f2-bbeebfbd852a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: August 2009 [00587] [Date Index] [Thread Index] [Author Index] Problems with Evaluate[Symbol[xxx]] • To: mathgroup at smc.vnet.net • Subject: [mg102759] Problems with Evaluate[Symbol[xxx]] • From: Keelin <keelinm at gmail.com> • Date: Wed, 26 Aug 2009 07:44:06 -0400 (EDT) I'm trying to use the Evaluate[Symbol[xxx]] syntax but I think I am misunderstanding something as it is not behaving as I expect. Below is an example to illustrate what is unexpected: The questions are: 1) Is there a problem with line 604 - it gives errors, but yet the output in the end is correct 2) Why are the output of abc and the output of Evaluate[Symbol [myListIdentifier]] not regarded as equal even though they appear identical (See line 608) 3) Why can I append correctly using the reference name abc, but not using the Evaluate[Symbol[myListIdentifier]] name? This is what I really want to do in the end - append to a list which has a dynamically assigned variable name. In[602]:= (*Make an identifier string to hold some data*) myListIdentifier = "abc" Out[602]= "abc" In[603]:= (*here's the data*) myList = {{7, 8}} Out[603]= {{7, 8}} In[604]:= (*Now put the data into a variable with name abc*) Evaluate[Symbol[myListIdentifier]] = myList During evaluation of In[604]:= Set::setraw: Cannot assign to raw \ object 7. >> During evaluation of In[604]:= Set::setraw: Cannot assign to raw \ object 8. >> Out[604]= {{7, 8}} (*Check abc holds the correct data*) Out[605]= {{7, 8}} In[606]:= (*Check the data can also be accessed using the \ In[607]:= Evaluate[Symbol[myListIdentifier]] Out[607]= {{7, 8}} In[608]:= (*But apparently the two items are not equal... why??*) abc == Evaluate[Symbol[listIdentifier]] Out[608]= False In[609]:= (*Even worse, I can append using the reference abc, but not using the myListIdentifier*) Append[abc, {3}] Out[609]= {{7, 8}, {3}} In[610]:= Append[Evaluate[Symbol[listIdentifier]], {3}] Out[610]= {{3}}
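A couple of observations that may explain the transcript above (a reading of the posted session, not a tested diagnosis):

(* 1. In[608] and In[610] use listIdentifier, which is never assigned;
   the string was stored in myListIdentifier. With the right name the
   comparison succeeds: *)
abc == Evaluate[Symbol[myListIdentifier]]
(* True *)

(* 2. The Set::setraw messages at In[604] suggest abc already held the
   value {{7, 8}} from an earlier evaluation, so Symbol["abc"] returned
   that value rather than an assignable symbol, and Set tried to thread
   over the raw list. The same issue breaks Append at In[610]:
   Evaluate[Symbol[...]] yields the value, not the symbol. One blunt but
   reliable way to assign through a dynamically built name is to go via
   ToExpression on a string: *)
ToExpression[myListIdentifier <> " = Append[" <> myListIdentifier <> ", {3}]"]
(* {{7, 8}, {3}} *)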
{"url":"http://forums.wolfram.com/mathgroup/archive/2009/Aug/msg00587.html","timestamp":"2014-04-16T10:13:54Z","content_type":null,"content_length":"26698","record_id":"<urn:uuid:6e7fcee7-2606-4e00-9225-9895fc6839be>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
Equational Reasoning Shopping List

Pure functional programming has the advantage that one can reason with it much as one does in mathematics. Specifically, if you know two expressions to be equal, they can be freely swapped. This underlies the use of inlining, sharing and the ability to replace any linear recursion with a fold, unfold, zip, etc. as appropriate.

As a first Haskell example, consider the following very-well-known optimization. Given code like

map f $ map g $ map h list

or in the more common style:

map f . map g . map h $ list

it appears naively that this will traverse the list three times: once to apply h, again to apply g, and finally a third time to apply f. But because map, f, g and h are all pure functions, they can be replaced with something equivalent without changing the result of the code. Consider an element of list, say x. After the three maps, it will have been replaced in the final list with f (g (h x)), or equivalently f . g . h $ x. So we should be able to replace the three maps with one that does the same thing:

map (f . g . h) list

And in fact GHC does precisely this.

Equational Rules

Following is a collection (and more are most welcome!) of rules for transforming your programs. I have written their equality with ==, though of course Haskell functions are not in Eq. I have written implication as --> and equivalence/double-implication as <-->.

map f . map g == map (f . g)

filter f . filter g == filter (\x -> f x && g x) == filter (liftM2 (&&) f g)

f . f == id --> filter (p . f) == map f . filter p . map f

filter (all p) . cartesianProduct == cartesianProduct . map (filter p)

filter p . concat == concat . map (filter p) == concatMap (filter p)

And now some concerning pairs and arrow combinators, Haskell translations of ones from Meijer et al.'s seminal paper:

fst . (f *** g) == f . fst
fst . (f &&& g) == f   -- and the obvious equivalents for snd and g.
fst . h &&& snd . h == id
fst &&& snd == id
(f *** g) . (h &&& j) == (f . h) &&& (g . j)
(f &&& g) . h == (f . h) &&& (g . h)
(f *** g) == (h *** j) <--> f == h && g == j
(f &&& g) == (h &&& j) <--> f == h && g == j

And more, translated from the same paper, for Either and the arrow combinators using it:

(f +++ g) . Left == Left . f
(f ||| g) . Left == f   -- and the obvious equivalents for g and Right
{- h strict -} --> (h . Left) ||| (h . Right) == h
Left ||| Right == id
(f ||| g) . (h +++ j) == (f . h) ||| (g . j)
{- f strict -} --> f . (g ||| h) == (f . g) ||| (f . h)
(f ||| g) == (h ||| j) <--> f == h && g == j
(f +++ g) == (h +++ j) <--> f == h && g == j

And finally the "Abides Law" tying the two together:

(f &&& g) ||| (h &&& j) == (f ||| h) &&& (g ||| j)

There are many more such rules, and if you have more to suggest I'd love to add them to the list.

2 Responses to Equational Reasoning Shopping List

1. I discovered your blog from HWN and it looks to be pretty dang useful. But I have a request: Would it be possible for you to disable automatic smileys in your blog template? WordPress has inconsiderately obscured some of your code with a bitmap smiley (on the line with filter and liftM2).

2. Thanks for pointing that out. I suppose I hadn't noticed that when I moved from Blogger.com. There doesn't appear to be an option to disable this (and in a pre block, even!) so I suppose I'll just have to work around it. If there is a way to disable that, I'd love to know.
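Addendum: a cheap way to gain confidence in rules like these (testing, not proving) is to throw QuickCheck at them at a concrete type; for instance, the map-fusion law:

import Test.QuickCheck

-- map fusion, instantiated at Int so the property is testable
prop_mapFusion :: [Int] -> Bool
prop_mapFusion xs =
    (map (+1) . map (*2)) xs == map ((+1) . (*2)) xs

main :: IO ()
main = quickCheck prop_mapFusion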
{"url":"http://braincrater.wordpress.com/2008/04/10/equational-reasoning-shopping-list/?like=1&source=post_flair&_wpnonce=e5382928c8","timestamp":"2014-04-19T09:25:18Z","content_type":null,"content_length":"48953","record_id":"<urn:uuid:ff9cb0f7-3f0a-440b-bebb-b89ab9aee4d5>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
Detecting Weak Instruments in R

September 23, 2013 By diffuseprior

Any instrumental variables (IV) estimator relies on two key assumptions in order to identify causal effects:

1. That the excluded instrument or instruments only affect the dependent variable through their effect on the endogenous explanatory variable or variables (the exclusion restriction),
2. That the correlation between the excluded instruments and the endogenous explanatory variables is strong enough to permit identification.

The first assumption is difficult or impossible to test, and sheer belief plays a big part in what can be perceived to be a good IV. An interesting paper was published last year in the Review of Economics and Statistics by Conley, Hansen, and Rossi (2012), wherein the authors provide a Bayesian framework that permits researchers to explore the consequences of relaxing exclusion restrictions in a linear IV estimator. It will be interesting to watch research on this topic expand in the coming years.

Fortunately, it is possible to quantitatively measure the strength of the relationship between the IVs and the endogenous variables. The so-called weak-IV problem was underlined in a paper by Bound, Jaeger, and Baker (1995). When the relationship between the IVs and the endogenous variable is not sufficiently strong, IV estimators do not correctly identify causal effects. The Bound, Jaeger, and Baker paper represented a very important contribution to the econometrics literature. As a result of this paper, empirical studies that use IV almost always report some measure of instrument strength. A secondary result of this paper was the establishment of a literature that evaluates different methods of testing for weak IVs. Staiger and Stock (1997) furthered this research agenda, formalizing the relevant asymptotic theory and recommending the now ubiquitous "rule-of-thumb" measure: a first-stage partial F-statistic of less than 10 indicates the presence of weak instruments.

In the code below, I have illustrated how one can perform these partial F-tests in R. The importance of clustered standard errors has been highlighted on this blog before, so I also show how the partial F-test can be performed in the presence of clustering (and heteroskedasticity too). To obtain the clustered variance-covariance matrix, I have adapted some code kindly provided by Ian Gow. For completeness, I have displayed the clustering function at the end of the blog post.

    # load packages
    library(AER); library(foreign); library(mvtnorm)

    # clear workspace and set seed

    # number of observations
    n = 1000

    # simple triangular model:
    # y2 = b1 + b2*x1 + b3*y1 + e
    # y1 = a1 + a2*x1 + a3*z1 + u

    # error terms (u and e) correlate
    Sigma = matrix(c(1, 0.5, 0.5, 1), 2, 2)
    ue = rmvnorm(n, rep(0, 2), Sigma)

    # iv variable
    z1 = rnorm(n)
    x1 = rnorm(n)
    y1 = 0.3 + 0.8*x1 - 0.5*z1 + ue[,1]
    y2 = -0.9 + 0.2*x1 + 0.75*y1 + ue[,2]

    # create data
    dat = data.frame(z1, x1, y1, y2)

    # biased OLS
    lm(y2 ~ x1 + y1, data = dat)
    # IV (2SLS)
    ivreg(y2 ~ x1 + y1 | x1 + z1, data = dat)

    # do regressions for partial F-tests
    # first-stage:
    fs = lm(y1 ~ x1 + z1, data = dat)
    # null first-stage (i.e. exclude IVs):
    fn = lm(y1 ~ x1, data = dat)

    # simple F-test
    waldtest(fs, fn)$F[2]
    # F-test robust to heteroskedasticity
    waldtest(fs, fn, vcov = vcovHC(fs, type = "HC0"))$F[2]

    # now let's get some F-tests robust to clustering
    # generate cluster variable
    dat$cluster = 1:n
    # repeat dataset 10 times to artificially reduce standard errors
    dat = dat[rep(seq_len(nrow(dat)), 10), ]
    # re-run first-stage regressions
    fs = lm(y1 ~ x1 + z1, data = dat)
    fn = lm(y1 ~ x1, data = dat)
    # simple F-test
    waldtest(fs, fn)$F[2]  # ~ 10 times higher!
    # F-test robust to clustering
    waldtest(fs, fn, vcov = clusterVCV(dat, fs, cluster1 = "cluster"))$F[2]  # ~ 10 times lower than above (good)

Further "rule-of-thumb" measures are provided in a paper by Stock and Yogo (2005), and it should be noted that a whole battery of weak-IV tests exists (for example, see the Kleibergen-Paap rank Wald F-statistic and the Anderson-Rubin Wald test); one should perform these tests if the presence of weak instruments represents a serious concern.

    # R function adapted from Ian Gow's webpage:
    # http://www.people.hbs.edu/igow/GOT/Code/cluster2.R.html
    clusterVCV <- function(data, fm, cluster1, cluster2 = NULL) {
      # Calculation shared by covariance estimates
      est.fun <- estfun(fm)
      inc.obs <- complete.cases(data[, names(fm$model)])

      # Shared data for degrees-of-freedom corrections
      N <- dim(fm$model)[1]
      NROW <- NROW(est.fun)
      K <- fm$rank

      # Calculate the sandwich covariance estimate
      cov <- function(cluster) {
        cluster <- factor(cluster)
        # Calculate the "meat" of the sandwich estimators
        u <- apply(est.fun, 2, function(x) tapply(x, cluster, sum))
        meat <- crossprod(u)/N
        # Calculations for degrees-of-freedom corrections, followed
        # by calculation of the variance-covariance estimate.
        # NOTE: NROW/N is a kluge to address the fact that sandwich uses the
        # wrong number of rows (includes rows omitted from the regression).
        M <- length(levels(cluster))
        dfc <- M/(M-1) * (N-1)/(N-K)
        dfc * NROW/N * sandwich(fm, meat = meat)
      }

      # Calculate the covariance matrix estimate for the first cluster.
      cluster1 <- data[inc.obs, cluster1]
      cov1 <- cov(cluster1)

      if (is.null(cluster2)) {
        # If only one cluster supplied, return single-cluster results
        return(cov1)
      } else {
        # Otherwise do the calculations for the second cluster
        # and the "intersection" cluster.
        cluster2 <- data[inc.obs, cluster2]
        cluster12 <- paste(cluster1, cluster2, sep = "")
        # Calculate the covariance matrices for cluster2 and the "intersection"
        # cluster, then put all the pieces together.
        cov2 <- cov(cluster2)
        cov12 <- cov(cluster12)
        covMCL <- (cov1 + cov2 - cov12)
        # Return the two-way cluster-robust covariance matrix
        # (suitable for use with coeftest).
        return(covMCL)
      }
    }
{"url":"http://www.r-bloggers.com/detecting-weak-instruments-in-r/","timestamp":"2014-04-20T10:55:42Z","content_type":null,"content_length":"42143","record_id":"<urn:uuid:b7b49add-54d7-4cc1-8934-2e047702f21b>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
1st Quarter Reflection I am now one quarter into my first year as a public high school teacher. The primary lesson of my experience thus far is that teaching requires a lot of time and effort. I work with students either as a class or individually almost non-stop from 7am until about 4pm on a daily basis (and later than that these last few weeks). I go home, eat, try to catch up on my personal life for a few hours, and then spend the remainder of my evening planning for the next day. The daily grind makes it a bit difficult to think big about the best way to teach any individual concept. In my curriculum class we have been working on a lesson plan for factoring trinomials with leading coefficients greater than 1. Our group of four has spent a few months thinking about the standards, placing the lesson in context, and developing the best way to approach it. We still don't have a lesson ready. A few days ago I had to send some materials to the other members of my department, so I cooked up a power-point lesson on the subject in a few hours and sent it along. The thing is, factoring trinomials is an algorithm. Almost everything we do in Algebra 2 seems like an algorithm. Will our intensive investigation into the lesson planning process yield any significantly better way to teach this algorithm than my rushed powerpoint presentation? I have been able to incorporate several experimental ideas into the geometry curriculum. I developed a triangle congruence unit based around my proof cards, and I think it has worked for a number of students. They aren't completely intimidated by proofs, in any case. Although I joke about powerpoint presentations, they have become pretty useful. Creating a detailed presentation makes me think a bit more about the flow of my lesson, and it is nice to be able to display a problem for the students to work on and have the result already worked out to show after I have given them a chance to work on it. The policy of making students keep all of their work in a binder has been working well so far. I think it streamlines grading and makes it possible for me to give them credit for participation in class (by checking whether they have filled in their notes). The students still don't really reference their notes, though. I had my first evaluation (called a JPAS evaluation) this quarter. So far I haven't seen very much useful feedback. Maybe it is forthcoming. All of my class materials are now available to students online. Some of them have made use of this feature during absences, but not enough to convince me that it is worthwhile. I am thinking of better ways to implement technology that will save me time. I am rethinking my quiz system. Partly because I dread making up new quizzes every day. Partly because the students don't take them seriously enough when they are graded based on participation. But I am not exactly sure what to replace them with. I don't really want to implement something that would involve a lot of grading at this point because I don't have a lot of extra time. There really isn't enough time to be an effective lesson planner, lesson teacher, and evaluator for 200 students. One of my focuses for this next quarter will be to try and develop stronger ties to those students in my classes who are struggling. Encourage them to come in for help. Make checking on their progress a higher priority during individual work time. 2 comments: 1. Most students prefer, and even need that algorithm to factor non-monic trinomials. 
The ironic thing is that students who learn to factor trinomials by feel rather than by the algorithm gain a much stronger number sense than those who just memorize the steps and follow them. However, so many teachers penalize those who can do it intuitively, forcing them to memorize and regurgitate the steps. I think the reason that these teachers force the algorithm is usually that it is easier to grade, and the teachers often don't understand the math as well as their best students. This is particularly true at the elementary level. So students are so used to just memorizing algorithms that they know no other way of learning math by the time they get to middle and high school (and beyond).

2. I agree. In fact, in my class we used some of these ideas to design a revised lesson plan that focuses more on building number sense and giving students an intuitive feel for the problems. Sometimes a little reflection does improve a lesson.
{"url":"http://www.ergoscribo.com/2011/11/1st-quarter-reflection.html","timestamp":"2014-04-16T13:23:00Z","content_type":null,"content_length":"68895","record_id":"<urn:uuid:8b07dc2a-618d-47f3-a8f0-2785ed12e114>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
APS RDA Math Performance Task Bank

NSO = Number Sense and Operations
GSSM = Geometry, Spatial Sense, and Measurement
DASP = Data Analysis, Statistics, and Probability
PFAC = Patterns, Functions, and Algebraic Concepts

Click on the name of the file to download the PDF for each category. You will need each of the files for any given task. Once you've opened the file, you may save it to your hard disk or print it.
{"url":"http://www.rda.aps.edu/MathTaskBank/fi_html/35tasks.htm","timestamp":"2014-04-19T14:43:08Z","content_type":null,"content_length":"54764","record_id":"<urn:uuid:02dcf2e6-5362-4fa6-9d70-ad7c9243c8bb>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
Invited Article: A unified evaluation of iterative projection algorithms for phase retrieval

Figure captions:

Examples of sets and projectors. (a) Support: the axes represent the values on 3 pixels of an image known to be 0 outside the support. The vertical axis represents a pixel outside the support, while the horizontal plane represents pixels inside it. The projection on this set is performed simply by setting to 0 all the pixels outside the support. (b) Modulus: a pixel (in Fourier space) with a given complex value is projected on the closest point on the circle defined by the measured radius. If there is some uncertainty in the value of the radius, the circle becomes a band. The circle is a non-convex set, since the linear combination of two points on the set does not lie on the set. Also represented in the figure is the projection on the real axis (reality projection). The reflector applies the same step as the projector twice: R = 2P - I.

Geometric representation of various algorithms using a simplified version of the constraints: two lines intersecting. (a) Error reduction algorithm: we start from a point on the modulus constraint by assigning a random phase to the diffraction pattern. The projection onto the modulus constraint finds the point on the set which is nearest to the current one. The arrows indicate the gradients of the error metric. (b) The speed of convergence is increased by replacing the projector on the support with the reflector. The algorithm jumps between the modulus constraint (solid diagonal line) and its mirror image with respect to the support constraint (dotted line). (c) Hybrid input-output, see text [Eq. (19)]. The space perpendicular to the support set is represented by the vertical dotted line. (d) Difference map, see text [Eq. (22)].

The basic features of the iterative projection algorithms can be understood from this simple model of two lines intersecting (a). The aim is to find the intersection. The ER and solvent-flipping algorithms converge in a gradient-type fashion (the distance to the two sets never increases), the solvent-flip method being slightly faster when the angle between the two lines is small. HIO and variants move along a spiral path. The Lagrangian is represented in grayscale, and the descent-ascent directions are indicated by arrows. When the two lines do not intersect (b), HIO and variants keep moving in the direction of the gap between the two lines, away from the local minimum. ER, SF, and RAAR converge at (or close to) the local minimum.

The horizontal line represents a support constraint, while the two circles represent a nonconvex constraint, i.e., the modulus constraint. The gradient-type (ER and SF) algorithms converge to the local minimum, while HIO and variants follow the descent-ascent direction indicated by the arrows.

Positivity constraint: the support constraint is represented by a horizontal line originating from 0. A barrier due to the positivity constraint changes the behavior of the algorithms, which no longer follow the descent-ascent direction. HIO bounces on the axis, while the other algorithms are smoother.

A simple 2D phase-retrieval problem: only two variables (pixel values) are unknown. The solution, the global minimum, is the top minimum in the figures. The colormap and contour lines represent the error metric, and the descent direction is indicated by the arrows. The error reduction algorithm (a) proceeds toward the local minimum without optimizing the step length and stagnates at the local minimum. The steepest descent method (b) moves toward the local minimum with a zigzag trajectory, while the conjugate gradient method reaches the solution faster (c). The HIO method generally converges to the global minimum; however, some rare starting points converge to a local minimum (d). The saddle-point optimization with optimized step length [Eq. (37)] stagnates in the same local minimum as HIO (e). The conjugate gradient version avoids stagnation (f). The saddle-point optimization using a two-dimensional search of the saddle point reaches the global minimum from a larger range of starting points than HIO (g). The conjugate gradient version (h), (i) reaches the solution faster if the conjugate directions are obtained independently (i), rather than from their sum.

Test figure used for benchmarking. The object is surrounded by empty space.

Percentage of successful reconstructions over many tests starting from random phases, as a function of the number of iterations. The support is the only constraint. Positivity and reality are not enforced, and the support is loose: it is larger than the object by one additional row and column.
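The projection and update steps described in these captions are compact enough to sketch in code. Below is a minimal NumPy sketch of the error-reduction and HIO iterations; the object is assumed 2D and real-valued, `support` is a binary mask, `measured_mag` holds the measured Fourier magnitudes, and `beta` is the usual HIO feedback parameter (all names are mine, not the article's):

    import numpy as np

    def project_modulus(x, measured_mag):
        # Modulus projection: keep the Fourier phase of x, impose the
        # measured magnitudes. Taking .real also folds in the reality
        # projection mentioned above (a simplification).
        X = np.fft.fft2(x)
        return np.fft.ifft2(measured_mag * np.exp(1j * np.angle(X))).real

    def er_step(x, measured_mag, support):
        # Error reduction: modulus projection followed by the support
        # projection (zero every pixel outside the support).
        y = project_modulus(x, measured_mag)
        return np.where(support, y, 0.0)

    def hio_step(x, measured_mag, support, beta=0.9):
        # Hybrid input-output: keep the modulus-projected value inside
        # the support; apply negative feedback outside it.
        y = project_modulus(x, measured_mag)
        return np.where(support, y, x - beta * y)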
{"url":"http://scitation.aip.org/content/aip/journal/rsi/78/1/10.1063/1.2403783","timestamp":"2014-04-19T15:11:58Z","content_type":null,"content_length":"99324","record_id":"<urn:uuid:56152b18-b988-475c-8a80-a0a21930a7f6>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
Formal Methods for Computer Security

Semantics, specification language and verification of security protocols: Security protocols are almost everywhere. They have the particular feature of being concise and yet subtle and error-prone. Just consider that the Needham-Schroeder protocol had been around for almost 17 years before an attack was found. Today, we still lack semantically clean and widely accepted languages for describing security protocols. The situation concerning their properties is worse. Indeed, confidentiality and authentication are more or less well understood (this is actually not completely true of the latter), but anonymity, opacity, fairness, denial of service, etc., are far from being well studied. The main topics of this area are:

• Verification of security protocols within the Dolev-Yao model: the group has developed a general verification method for security protocols that can handle unbounded sessions. The following papers underpin its theoretical background: [TACAS'03], [TR-2004-1]. This work culminates in a powerful and efficient tool, Hermes, that is described in [CAV03] and can be tested at the following url: Hermes. This method is further refined in [FOSSACS04], where the authors provide a sound and complete method for the verification of a rich class of security properties in the case of a bounded number of sessions.

• Beyond secrecy and authentication: the group has also studied anonymity properties and developed new results. In [WITS04] it is proved that opacity (the inability of the intruder to decide, for an execution of a security protocol, whether a property is satisfied or not) is decidable in the case of a passive intruder. This work is further extended in [FAST04], where the hypothesis of encryption with only atomic keys is dropped, and in [ARSPA04], where it is proved that a restricted version of opacity is still decidable for active intruders. In [TR-2004-25], opacity is extended to labelled transition systems and generalised in order to better represent concepts from the work on information flow. In particular, in this work, links between opacity and the information-flow concepts of anonymity and non-interference are established. Abstraction-based methods of verifying opacity when working with Petri nets are also studied.

• More realistic models for security protocols: most work done in the verification of security protocols uses a formal model, called the Dolev-Yao model. Important hypotheses of the Dolev-Yao model are: 1) the encryption algorithms are perfect (the intruder must know the inverse key to obtain the plaintext from a ciphertext) and 2) nonces (random numbers used to ensure the freshness of a message) are ideally generated (they do not collide, and cannot be "guessed"). One direction of research of the group deals with developing more realistic models. In [CONCUR04] the hypothesis of "perfect nonces" is weakened, and the authors extend the method of [FOSSACS04] to provide a sound and complete method for the verification of security protocols that use timestamps.

• Soundness of the symbolic model: [TR-2004-19] represents a significant step towards reconciling the computational model (where cryptographic functions operate on strings of bits and can be attacked using Turing machines) and the symbolic, also called Dolev-Yao, model. The main result of the paper is that the formal (Dolev-Yao) model is a sound abstraction of the computational model, provided that the encryption schemes are IND-CCA. This work extends previous ones in several ways: 1) it generalizes to multi-party protocols, 2) it allows symmetric/asymmetric encryption as well as digital signatures, 3) encoding of secret keys is allowed, and 4) it applies to secrecy properties as well. In [TR-2005-3], this result is extended to symmetric and asymmetric encryption, signature and hashing. The same result is extended to include Diffie-Hellman exponentiation in [TR-2005-7]. The soundness of the symbolic model when probabilistic opacity is considered is proved in [TR-2005-4]. A predicate is opaque for a given system if an adversary will never be able to establish the truth or falsehood of the predicate for any observed computation. In the Dolev-Yao model, even if an adversary is 99% sure of the truth of the predicate, it remains opaque, as the adversary cannot conclude for sure. In this paper, a computational version of opacity, called cryptographic opacity, is introduced in the case of passive adversaries. The main result is a composition theorem: if a system is secure in an abstract formalism and the cryptographic primitives used to implement it are secure, then this system is secure in a computational formalism. Security of the abstract system is the usual opacity, and security of the cryptographic primitives is IND-CPA security. To illustrate the results, two applications are given: a short and elegant proof of the classical Abadi-Rogaway result and the first computational proof of Chaum's visual electronic voting scheme.

This work is conducted in tight connection with the following projects:

• EVA: Explication et Vérification Automatique de protocoles cryptographiques.
• ACI Rossignol: Verification of Cryptographic Protocols.
• RNTL PROUVE: Verification of Cryptographic Protocols.
• AS Sécurité logicielle: Modèles et vérification.

Certification of security properties with respect to the Common Criteria: The Common Criteria is an international standard for IT security evaluation. These criteria define 7 levels of evaluation: EAL 1 to EAL 7. The last two, EAL 6 and EAL 7, require formal descriptions and proofs. More generally, this standard advocates a sort of top-down development through a number of description levels such as SPM (Security Policy Model), FSP (Functional Specification), HLD (High-Level Design), etc. Currently, a quick look at the site of the DCSSI (Direction centrale de la sécurité des systèmes d'information), the French official body in charge of certification, shows that evaluation at the two highest levels has not yet been achieved. This is due to the lack of an efficient tool-assisted methodology. We are participating in the national project EDEN, whose aim is to develop such a methodology.
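As a concrete illustration of the perfect-encryption hypothesis of the Dolev-Yao model discussed above, here is a minimal sketch of the intruder's message-analysis closure. The message encoding and names are illustrative only; they are not taken from the group's tools:

    def analysis_closure(knowledge):
        # Messages are atoms (strings) or tuples:
        #   ('pair', a, b) -- pairing;  ('enc', m, k) -- encryption of m under k.
        # Perfect encryption: ('enc', m, k) yields m only if the key k is known.
        # Only decomposition is computed; composition is omitted to keep
        # the resulting set finite.
        known = set(knowledge)
        changed = True
        while changed:
            changed = False
            for m in list(known):
                if isinstance(m, tuple) and m[0] == 'pair':
                    for part in m[1:]:
                        if part not in known:
                            known.add(part)
                            changed = True
                elif isinstance(m, tuple) and m[0] == 'enc' and m[2] in known:
                    if m[1] not in known:
                        known.add(m[1])
                        changed = True
        return known

    # Knowing {enc(secret, k), pair(k, nonce)} reveals k, nonce and then secret:
    # analysis_closure({('enc', 'secret', 'k'), ('pair', 'k', 'nonce')})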
{"url":"http://www-verimag.imag.fr/Formal-Methods-for-Computer.html","timestamp":"2014-04-16T19:00:59Z","content_type":null,"content_length":"24543","record_id":"<urn:uuid:3c3c2e34-ba89-4e08-9f65-4efe7dab68b7>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
Getting Old

Despite enjoying smoking a pipe, my grandfather lived to be 101 years old. Perhaps for this reason the emphasis placed on the increase in the number of very old people as a factor shaping population health has never seemed particularly interesting to me, although I must own up to a passion for passive pipe smoking. Mellow Virginia, in particular.

Counting the number of people who reach 100 years of age is more difficult than you might think. The older people get, the more difficult it is to know how old they really are, but the Office for National Statistics (ONS) estimates that there were 12,640 centenarians in the UK in 2010. The number of centenarians is likely to rise in future, and several newspapers (The Telegraph and The Guardian) reported last week that a third of people born today will live to be 100 years old.

The calculation of life expectancy is an alternative to age-standardisation as a way of summarising the overall level of population health. For example, over the last 20 years (1990 to 2010) life expectancy at birth has increased from 72.9 years to 78.5 years for men, while for women the increase has been from 78.5 years to 82.4 years.

Life expectancy is calculated from what is termed a 'lifetable'. The lifetable is one of the oldest techniques in statistics, having been first used in the 17th century for the purpose of detecting outbreaks of epidemics, mainly the plague. There are two types of lifetable, a cohort and a period lifetable, and both can be used to calculate life expectancy. In this article we will look briefly at the difference between a cohort and a period lifetable and assess how much confidence we should have in predictions of the future number of centenarians.

The data on which a lifetable is based consist of the number of deaths in a year and an estimate of the mid-year population, both by single year of age. The key quantity used in calculating life expectancy from either type of lifetable is the probability of dying between successive ages. Once we know this, we can calculate the probability of a person of any age today being alive at any future age as the product of the probabilities of not dying over the intervening one-year age intervals. To calculate life expectancy at any age we simply add up the total lifetime left, in person-years, that we expect for individuals of that age and then divide by the number of people of that age in the population. For example, if there were 100,000 people aged 50 this year and we calculate that they will live for a total of 3,500,000 person-years in the future, then life expectancy at age 50 would be 35 years (or 3,500,000/100,000).

Period and cohort lifetables make different assumptions about future death rates. The period lifetable is constructed using the mortality rate at each age in a single year. That is, the life expectancy of someone who is age 45 in 2012 is calculated using the death rates at ages of 46 years and over that were observed in 2012. The period lifetable therefore tells you what life expectancy would be if death rates remained unchanged. The cohort lifetable, in contrast, is constructed using a forecast of future death rates. That is, life expectancy for someone who is 45 years old in 2012 is calculated from forecasts of the death rate at age 46 in 2013, age 47 in 2014, age 48 in 2015, and so on. In a previous article we have seen that death rates have fallen notably over the last 40 years.
Because the cohort lifetable takes into account future changes in death rates, life expectancy calculated from a cohort lifetable is considered more accurate than that calculated from a period lifetable. It will also be higher if it is assumed that death rates will continue to fall in the future. The report by the ONS on which last week's stories were based used results from cohort lifetables. Because how death rates will change in the future is uncertain, the report produced three different estimates of life expectancy. The 'high life expectancy' and 'principal' forecasts both assumed that death rates would continue to fall, but at different rates, while the 'low life expectancy' forecast assumed that the fall in death rates would slow down and then stabilise sometime around 2035.

The figure above shows the number of people who are forecast to reach 100 years of age, by their age in 2012. The 'principal' forecast predicts that slightly more than 150,000 women who are born this year (or around 1/3 of births) will reach 100 years of age. On the other hand, the 'low life expectancy' forecast predicts that only around 35,000 women born this year will reach 100 years of age, while the 'high life expectancy' forecast predicts that nearly 280,000 women born this year will still be alive in 100 years' time.

In short, the calculation of life expectancy is very dependent on the accuracy of the assumptions concerning how death rates will change in the future. Predictions are hard to make, especially when they are about the future.
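The lifetable arithmetic described above is short enough to sketch in code. A minimal period-lifetable calculation follows; the input qx, the probability of dying between ages a and a+1, is assumed given, and the mid-interval convention for deaths is a common textbook simplification rather than anything taken from the ONS report:

    import numpy as np

    def period_life_expectancy(qx):
        # qx[a]: probability of dying between ages a and a+1.
        qx = np.asarray(qx, dtype=float)
        lx = np.concatenate([[1.0], np.cumprod(1.0 - qx)])  # survivors to each age
        Lx = 0.5 * (lx[:-1] + lx[1:])   # person-years lived in each interval
        Tx = np.cumsum(Lx[::-1])[::-1]  # person-years remaining from each age
        return Tx / lx[:-1]             # e_a = T_a / l_a

    # period_life_expectancy(qx)[0] is life expectancy at birth. A cohort
    # lifetable runs the same arithmetic, but feeds in forecast qx values
    # along each cohort's diagonal instead of a single year's rates.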
{"url":"http://www.significancemagazine.org/details/webexclusive/1722053/Getting-Old.html","timestamp":"2014-04-21T04:54:11Z","content_type":null,"content_length":"22700","record_id":"<urn:uuid:d878d150-ff11-4314-a204-7f71115cb6b6>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics - By Category

Math Formulas and Math Tables: Mathematical formulas, tables and facts for solving mathematical problems.
Math Help and Tutorials by Subject and/or Topic: Whether you're looking for help in Advanced Algebra, Calculus, Geometry or Arithmetic, you'll find many supports here.
Calculus: Calculus help and resources.
Math Lesson Plans: Math lesson plans for grades K-12. A variety of algebra, prealgebra, calculus, arithmetic, geometry and measurement lesson plans.
Arithmetic: Basic arithmetic addressing the four operations with integers, rational and real numbers and including measurement, geometry and base ten.
Math Glossary of Terms
Calculators & Online Tools: Calculators, converters and tools to find solutions to mathematical problems. Calculator tutorials.
Math Stumpers: Problem solving questions. Math related problem solving.
Math Worksheets, Printables & Black Line Masters: Math worksheets and printables. Fraction worksheets, addition worksheets, algebra worksheets, subtraction worksheets, multiplication worksheets and much more.
Books, Software, Resources, DVDs to Learn Math: Your guide's picks on math resources including books, software, DVDs, etc. Before you buy, check here first!
Mathematicians: Everything you wanted to know about mathematicians. Biographies, information, famous theorems and women mathematicians.
Recreational Mathematics: Math puzzles, games, tricks, squares, and magic to stimulate and challenge your right brain.
{"url":"http://math.about.com/od/","timestamp":"2014-04-19T11:56:06Z","content_type":null,"content_length":"33778","record_id":"<urn:uuid:b93d9a00-d15e-4c6e-93f3-61875f59bec9>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
Martins Add, MD SAT Math Tutor

Find a Martins Add, MD SAT Math Tutor

...I believe that working with students 1-on-1 and in small group settings can allow students to get their questions asked and not feel intimidated or worry about the pace of the class. I truly believe that my students' success is my success. I invest in each and every one of my students and try to be flexible, accommodating and available.
24 Subjects: including SAT math, reading, geometry, algebra 1

...Are you worried about how this will affect your chances of graduating or getting into college? Besides being necessary to graduate from high school, good math and science skills can help you expand your job opportunities once you enter the job market. I can assess your current level of understa...
31 Subjects: including SAT math, chemistry, calculus, geometry

...I was always a top math student in high school and college, and I am able to patiently show students more than one way to solve most problems. I get along well with younger people and am able to connect in order to explain math, which I know many students do not like. I am also effective with adult students who I know can feel awkward about material they have not seen in years.
28 Subjects: including SAT math, chemistry, calculus, physics

...I have taken various cooking classes for pleasure from high school all the way through graduate school. Recently, I have started taking classes from Sur La Table, a gourmet cooking class which makes classic recipes attainable at home. I am energetic, enthusiastic, and love food.
27 Subjects: including SAT math, reading, writing, English

...A lifelong learner with a Master's Degree in Education, I enjoy finding new ways to make challenging material make sense to my students. Having homeschooled my own children, and tutored others, I find particular delight in working with students individually. Because learning works best when the...
17 Subjects: including SAT math, reading, geometry, ESL/ESOL
{"url":"http://www.purplemath.com/Martins_Add_MD_SAT_Math_tutors.php","timestamp":"2014-04-17T13:19:56Z","content_type":null,"content_length":"24504","record_id":"<urn:uuid:034a3794-af04-41cd-a8f2-bbeebfbd852a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
INSIDE LEMIEUX, THE 1994 NORTHRIDGE EARTHQUAKE SHAKES ALMOST LIKE THE REAL THING.

People in greater Los Angeles won't soon forget January 17, 1994, 4:30 a.m. For eight seconds that may have seemed like eight hours, as if Mother Earth herself was restless in bed, nearly everyone across a 2,500 square-mile area jerked awake, adrenaline pumping. Striking the densely populated San Fernando Valley of northern LA, the Northridge earthquake was the second time in 60 years that the earth ruptured directly beneath a major U.S. urban area. By the time the dust settled and officials counted the toll, 57 people were dead and more than 1,500 seriously injured. Collapsed freeways choked traffic for days. Over 12,000 buildings and 170 bridges sustained moderate to severe damage. Total economic loss was estimated at $20 billion. Had it not been for building codes that require earthquake-resistant structures, fatalities and damage would have been much worse. Still, the loss was enormous, and one of the main lessons of Northridge, as well as of other urban earthquakes of recent decades, is the need for better information about where and how much the ground will shake.

"We've learned that the severity of ground shaking and consequent damage patterns vary significantly within relatively small areas," says Jacobo Bielak, professor of civil and environmental engineering at Carnegie Mellon University. "Even from one block to the next, the level of shaking can change dramatically due to types of subsurface soil and rock, other geological characteristics, and the nature of the seismic waves."

Bielak and his Carnegie Mellon colleagues Omar Ghattas and David O'Hallaron lead the Quake Group, a large collaborative research team. Using sophisticated computational methods, they work to create realistic 3D models of earthquakes in geologically complex basins. This work is supported by the National Science Foundation and has been performed in collaboration with, and with additional support from, the Southern California Earthquake Center (SCEC). Their objective is to provide accurate forecasts of earthquake ground motion as a necessary step toward creating building codes that provide for the safest possible structures at reasonable cost.

They have used LeMieux, PSC's terascale system, to great advantage, taking big steps forward in their work. "We've benefited enormously from having this powerful system at PSC," says Ghattas, "and we've developed algorithms that maximize our ability to use it well." Using as many as all 3,000 LeMieux processors at one time with high efficiency, they have carried out the most detailed, accurate simulations yet of the Northridge quake, at twice the frequency of prior models. They've also made major inroads on an important problem called "the inverse problem," the goal of which, magical as it may seem, is to determine subsurface geology by working backward from seismic measurements on the surface.

Wavelength Tailoring

Ancient advice says it's better to build your house on rock than sand. If you live in an earthquake basin, that ancient advice, generally speaking, still holds. From soft soils near the surface to hard rock deeper down and in the mountains, subsurface material varies tremendously in stiffness, the property that dictates seismic wavelength. For a given frequency, the softer the material, the shorter the seismic waves, which means finer resolution, and more computing, to accurately model the shaking.
“We’ve found,” says Bielak, “that even within a few hundred meters, the variability in soil properties — and therefore ground motion — can be very substantial. Because of this, similar buildings located near each other can experience significantly different levels of damage.” To accurately capture this wide range of ground vibration in a large earthquake basin like Los Angeles poses enormous challenges for earthquake modeling. One of the Quake Group’s key strategies has been to tailor their computational mesh - which divides the basin into millions of subvolumes - to soil stiffness. They generate their LA Basin computational model from a geological model created at the SCEC. Where the SCEC model indicates softer soils, therefore shorter wavelengths, the mesh-generating software creates a denser mesh. “By using disk space instead of computer memory,” says O’Hallaron, “our out-of-core algorithms can generate an extremely large mesh.” For the recent simulations, they represented the basin as 80 kilometers on each side by 30 kilometers deep. Within this volume, their irregular mesh maps more than 100 million subvolumes, making their computations with LeMieux the largest unstructured mesh simulations ever done. “These are the most highly resolved LA Basin earthquake simulations to date,” says Ghattas, “and they are made possible by our adaptive meshes and their low memory requirements. To achieve similar accuracy with a uniform mesh would require 1,000 times more computing power.” Realistic Frequencies & The Inverse Problem Using 2,048 LeMieux processors, they simulated the Northridge quake for more than 30 seconds of shaking. Their “wave-propagation” software sustained exceptional performance - nearly a teraflop (a trillion calculations a second) over six hours of computing time. And it ran at nearly 90 percent parallel efficiency, a measure of how well the software uses many processors at the same time. Ground-motion frequency is a key factor for building design, and this simulation accounted for shaking up to one vibration cycle per second (1 Hz) - double the previous high (.5 Hz). Earthquake modeling has been limited in its ability to simulate higher frequencies, from 1 to 5 Hz, that present the greatest danger to “low-rise” structures - which include most city buildings, predominantly two-to-five stories - because each doubling of frequency means a 16-fold increase in computing. “Our challenge is to attain realistic frequencies,” says Bielak. “Now, for the first time, we’re in the range that engineers need to know about. Typically, they want to see results up to 4 Hz, which points to the need for more computational power.” The simulation reproduced ground motion of the Northridge quake more accurately than possible until now, but — not surprisingly — at some locations it failed to reproduce significant shaking. These discrepancies, notes Ghattas, are inevitable considering that the geological model is inherently incomplete. “Because of uncertainties in what we know about earthquake source and basin material properties, a critical challenge facing us is to obtain these properties by source inversion from records of past earthquakes.” This problem - the inverse problem - is one of the important challenges of computational science and engineering, with potential applications in many fields, and it is key to the Quake Group’s goals. 
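The two scalings quoted here, mesh density tied to soil stiffness and the 16-fold cost of doubling frequency, both follow from the wavelength relation (wavelength = shear-wave speed / frequency). A back-of-the-envelope sketch; the wave speeds and the ten-points-per-wavelength rule are illustrative assumptions, not the Quake Group's figures:

    def element_size(vs_m_per_s, f_max_hz, points_per_wavelength=10):
        # Shortest wavelength to resolve is vs / f_max; a common rule of
        # thumb puts roughly 10 mesh points across it.
        return vs_m_per_s / (f_max_hz * points_per_wavelength)

    print(element_size(200, 1.0))   # soft soil at 1 Hz -> 20 m elements
    print(element_size(3000, 1.0))  # hard rock at 1 Hz -> 300 m elements

    def cost_ratio(f_new, f_old):
        # Halving the wavelength refines three spatial dimensions and the
        # time step, so cost grows roughly as f**4: doubling f gives 16x.
        return (f_new / f_old) ** 4

    print(cost_ratio(2.0, 1.0))     # 16.0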
Ghattas and his former students Volkan Akcelik and George Biros won the best paper award last year at Supercomputing 2002 for their inverse wave-propagation algorithm that exploits parallel systems like LeMieux. Using a sophisticated mathematical approach, their algorithm makes it possible to ascertain deep geological features based on seismic recordings on the surface. The deep geology is often not well understood and is known to play an important role in surface shaking. With LeMieux, for the first time, the Quake team solved a test case in two dimensions that proves the feasibility of this inverse “The inverse problem is orders of magnitude more difficult than the forward problem,” says Ghattas. “Large parallel systems and powerful algorithms are crucial.” One of the Quake Group’s near-future plans for LeMieux is to further test their inverse approach with the added difficulty of three dimensions. Volkan Akcelik, Jacobo Bielak, George Biros, Ioannis Epanomeritakis, Antonio Fernandez, Omar Ghattas, David O'Hallaron, Eui Joong Kim, Julio Lopez and Tiankai Tu, Carnegie Mellon University Terascale Computing System User-developed code. Related Material on the Web: The Quake Project at Carnegie Mellon University. Getting Ready for the Big One, Projects in Scientific Computing, 1997. Supercomputers let scientists break down problems in reverse for better quake models, Pittsburgh Post-Gazette. V. Akcelik, J. Bielak, G. Biros, I. Epanomeritakis, A. Fernandez, O. Ghattas, E. J. Kim, J Lopez, D. O'Hallaron, T Tu & J. Urbanic, "High Resolution Forward and Inverse Earthquake Modeling on Terascale Computers," preprint (2003). J. Xu, J. Bielak, O. Ghattas, and J. Wang, " Three-dimensional nonlinear seismic ground motion modeling in inelastic basins", Physics of the Earth and Planetary Interiors, 137 (1-4), 81-95 (2003). Michael Schneider, Pittsburgh Supercomputing Center Web Design: Sean Fulton, Pittsburgh Supercomputing Center Revised: November 10, 2003 URL: http://www.psc.edu/science/2003/earthquake/big_city_shakedown.html
{"url":"http://psc.edu/science/2003/earthquake/","timestamp":"2014-04-18T11:16:37Z","content_type":null,"content_length":"23233","record_id":"<urn:uuid:efb7924d-58d1-4e1c-b339-8daffa28bbf4>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
Question: \[\cos x \lim_{h \rightarrow 0}\frac{ \sin h }{ h }\]

- Have you learned l'Hospital's rule?
- No.
- What are you doing in class? The squeeze theorem?
- (Asker) This is one of the last steps in a limit problem from the first unit (limits). It's supposed to evaluate to cos x * 1 = cos x. I don't understand how sin h / h comes out to be 1.
- Well, there is a thing called l'Hospital's rule, which says that when you run a limit and get 0/0, you can take the derivative of the top and the derivative of the bottom and then run the limit again. So your limit: lim of sin(h)/h = lim of cos(h)/1 = lim cos(h), and at 0 that is 1.
- (Asker) lol, we are on the first unit. We haven't learned derivatives, or any of the theorems or rules. Can someone just explain how the limit of sin h / h is 1?
- Do you know the squeeze theorem?
- There is a geometric proof: http://www.youtube.com/watch?v=Ve99biD1KtA. I don't know of an elementary way of showing this limit without geometry and quite a bit of explaining... watch that video.
- (Asker) All right, thanks.
- It appears that you are beginning the study of calculus with the topic of limits. We often will use a table of values as they approach the limit from the left and from the right. This is the first method you may want to use. A second method is to look at the graph of the function. You can easily see that the limit as h approaches 0 from the left and right is 1. I hope this helps.
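For the record, the squeeze-theorem argument the responders point to fits in a few lines (the standard derivation from comparing areas on the unit circle):

\[ \sin h \le h \le \tan h \qquad \text{for } 0 < h < \tfrac{\pi}{2}. \]

Dividing through by \(\sin h > 0\) and taking reciprocals gives

\[ \cos h \le \frac{\sin h}{h} \le 1, \]

and since \(\frac{\sin h}{h}\) is an even function, the same bounds hold for negative \(h\) near 0. As \(h \to 0\), \(\cos h \to 1\), so the squeeze theorem forces \(\lim_{h \to 0} \frac{\sin h}{h} = 1\), and the original expression reduces to \(\cos x \cdot 1 = \cos x\).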
{"url":"http://openstudy.com/updates/51c39581e4b055e613b8850c","timestamp":"2014-04-16T10:25:41Z","content_type":null,"content_length":"66141","record_id":"<urn:uuid:c2b530d7-58cd-4ba4-b8d0-8077401eae2c>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
Asymptotic analysis of wall modes in a flexible tube

Kumaran, V (1998) Asymptotic analysis of wall modes in a flexible tube. In: European Physical Journal B, 4 (4). pp. 519-527.

The stability of wall modes in a flexible tube of radius $R$, surrounded by a viscoelastic material in the region $R < r < HR$, is studied in the high Reynolds number limit using asymptotic techniques. The fluid is a Newtonian fluid, while the wall material is modeled as an incompressible viscoelastic solid. In the limit of high Reynolds number, the vorticity of the wall modes is confined to a region of thickness $O(\epsilon^{1/3})$ in the fluid near the wall of the tube, where the small parameter is $\epsilon = Re^{-1}$ and the Reynolds number is $Re = \rho V R / \eta$; here $\rho$ and $\eta$ are the fluid density and viscosity, and $V$ is the maximum fluid velocity. The regime $A = \epsilon^{-1/3} (G / \rho V^2) \sim 1$ is considered in the asymptotic analysis, where $G$ is the shear modulus of the wall material. In this limit, the ratio of the normal stress and normal displacement in the wall, $-A\,C(k^\ast, H)$, is a function only of $H$ and the scaled wave number $k^\ast = kR$. There are multiple solutions for the growth rate, which depend on the parameter $A^\ast = k^{\ast\,1/3} C(k^\ast, H)\,A$. In the limit $A^\ast \ll 1$, which is equivalent to using a zero normal stress boundary condition for the fluid, all the roots have negative real parts, indicating that the wall modes are stable. In the limit $A^\ast \gg 1$, which corresponds to the flow in a rigid tube, the stable roots of previous studies on the flow in a rigid tube are recovered. In addition, there is one root in the limit $A^\ast \gg 1$ which does not reduce to any of the rigid-tube solutions determined previously. The decay rate of this solution decreases proportional to $(A^\ast)^{-1/2}$ in the limit $A^\ast \gg 1$, and its frequency increases proportional to $A^\ast$.

Item Type: Journal Article
Additional Information: Copyright for this article belongs to Springer-Verlag.
Keywords: Deformation; Stability of laminar flows; Flows in ducts, channels, nozzles, conduits
Department/Centre: Division of Mechanical Sciences > Chemical Engineering
Date Deposited: 25 Jan 2005
Last Modified: 19 Sep 2010 04:15
URI: http://eprints.iisc.ernet.in/id/eprint/1456
{"url":"http://eprints.iisc.ernet.in/1456/","timestamp":"2014-04-19T02:03:23Z","content_type":null,"content_length":"23522","record_id":"<urn:uuid:62b4366c-ed9a-45bb-afbd-9079fa0f4aa1>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
Importance of zero then infinity?

There are infinitely many points in any line... does that make infinity a number? If 0 is nothing, infinity is sort of everything... they both have a mathematical value, but neither of them, in my opinion, is a number.

Reply: Give us how you define a number, then. In mathematics, numbers are constructed from sets. 1 is the multiplicative identity, as 1*a = a; 0 is the additive identity, as 0 + a = a. Both are highly useful.
{"url":"http://www.physicsforums.com/showthread.php?t=69652","timestamp":"2014-04-19T22:45:56Z","content_type":null,"content_length":"66703","record_id":"<urn:uuid:554a00bf-51e5-4d6b-9d5c-df8a188b4dc7>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
Gauss mapping in finite characteristic

Suppose that $X\subset\mathbb P^n$ is a $d$-dimensional smooth projective variety (not a linear subspace) over an algebraically closed field. If $\gamma\colon X\to\mathrm{Gr}(d,\mathbb P^n)$ is the Gauss mapping that attaches to each point $x\in X$ the embedded Zariski tangent space to $X$ at $x$, then it is known that $\gamma$ is finite. If the characteristic is zero, it is known that $\gamma$ is not just finite but birational onto its image. My question is whether $\gamma$ is generically one to one in finite characteristic.

Edit: removed the question about birationality in finite characteristic, thanks to the example given by Felipe Voloch.

Thanks in advance.

Comments:
- You can start reading here: link.springer.com/article/10.1007%2Fs10711-008-9334-1#page-1 – M P
- Great! Thanks for the reference. – Serge Lvovski
- There are papers by Kleiman-Piene that discuss this question. My best recollection is that they tend to be inseparable, but finite. – aginensky
- MP's reference provides counter-examples that are smooth space curves. On the other hand, all smooth plane curves have a generically one-to-one Gauss map, by [Hajime Kaji: On the Gauss maps of space curves in characteristic p, Corollary 4.5]. – Olivier Benoist

Answer (Felipe Voloch): No. The plane curve $x^{p+1}+y^{p+1}=1$ has an inseparable Gauss map.

- Thank you for the example (I will edit the question accordingly). However, this Gauss mapping is 1-1. Does there exist an example where it is not generically one to one? – Serge Lvovski
- See MP's reference. – Felipe Voloch
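To spell out why Voloch's example works (a standard computation, added for the reader): in characteristic $p$ we have $p+1 \equiv 1 \pmod p$, so the tangent line to $C : x^{p+1}+y^{p+1}=1$ at a point $(x_0, y_0)$ is

$$x_0^p (x - x_0) + y_0^p (y - y_0) = 0, \qquad \text{i.e.} \qquad x_0^p\, x + y_0^p\, y = x_0^{p+1} + y_0^{p+1} = 1.$$

The Gauss map therefore sends $(x_0, y_0)$ to the line with coefficients $(x_0^p : y_0^p : -1)$, so it factors through Frobenius and is inseparable; but Frobenius is a bijection on points, which is why, as noted in the comments, this particular Gauss map is nevertheless one-to-one.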
{"url":"http://mathoverflow.net/questions/128688/gauss-mapping-in-finite-characteristic","timestamp":"2014-04-18T03:40:13Z","content_type":null,"content_length":"57484","record_id":"<urn:uuid:3aeea47c-5591-437b-9c75-cb3121d03fe0>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00171-ip-10-147-4-33.ec2.internal.warc.gz"}
Lower bounds for testing computability by small width OBDDs, 2011

"We develop a new technique for proving lower bounds in property testing, by showing a strong connection between testing and communication complexity. We give a simple scheme for reducing communication problems to testing problems, thus allowing us to use known lower bounds in communication complexity to prove lower bounds in testing. This scheme is general and implies a number of new testing bounds, as well as simpler proofs of several known bounds. For the problem of testing whether a boolean function is k-linear (a parity function on k variables), we achieve a lower bound of Ω(k) queries, even for adaptive algorithms with two-sided error, thus confirming a conjecture of Goldreich [25]. The same argument behind this lower bound also implies a new proof of known lower bounds for testing related classes such as k-juntas. For some classes, such as the class of monotone functions and the class of s-sparse GF(2) polynomials, we significantly strengthen the best known …"
Cited by 12 (3 self)

"Property testing is concerned with deciding whether an object (e.g. a graph or a function) has a certain property or is 'far' (for some definition of far) from every object with that property. In this paper we give lower and upper bounds for testing functions for the property of being computable by a read-once width-2 Ordered Binary Decision Diagram (OBDD), also known as a branching program, where the order of the variables is known. Width-2 OBDDs generalize two classes of functions that have been studied in the context of property testing: linear functions (over GF(2)) and monomials. In both these cases membership can be tested in time that is linear in 1/ɛ. Interestingly, unlike either of these classes, in which the query complexity of the testing algorithm does not depend on the number, n, of variables in the tested function, we show that (one-sided error) testing for computability by a width-2 OBDD requires Ω(log(n)) queries, and give an algorithm (with one-sided error) that tests for this property and performs …"
Cited by 1 (1 self)

"Abstract. We introduce strong, and in many cases optimal, lower bounds for the number of queries required to nonadaptively test three fundamental properties of functions f: [n]^d → R on the hypergrid: monotonicity, convexity, and the Lipschitz property. Our lower bounds also apply to the more restricted setting of functions f: [n] → R on the line (i.e., to hypergrids with d = 1), where they give optimal lower bounds for all three properties. The lower bound for testing convexity is the first lower bound for that property, and the lower bound for the Lipschitz property is new for tests with 2-sided error. We obtain our lower bounds via the connection to communication complexity established by Blais, Brody, and Matulef (2012). Our results are the first to apply this method to functions with non-hypercube domains. A key ingredient in this generalization is the set of Walsh functions, an orthonormal basis of the set of functions f: [n]^d → R."
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=13177870","timestamp":"2014-04-20T20:21:22Z","content_type":null,"content_length":"19817","record_id":"<urn:uuid:38863512-41aa-45f7-b49f-022c4028d4be>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Goodies is a free math help portal for students, teachers, and parents.

Solving More Decimal Word Problems (Unit 13, Lesson 8 of 11)

Example 1: School lunches cost $14.50 per week. About how much would 15.5 weeks of lunches cost? Analysis: We need to estimate the product of $14.50 and 15.5. To do this, we will round one factor up and one factor down. Answer: The cost of 15.5 weeks of school lunches would be about $200.

Example 2: A student earns $11.75 per hour for gardening. If she worked 21 hours this month, then how much did she earn? Analysis: To solve this problem, we will multiply $11.75 by 21. Answer: The student will earn $246.75 for gardening this month.

Example 3: Rick's car gets 29.7 miles per gallon on the highway. If his fuel tank holds 10.45 gallons, then how far can he travel on one full tank of gas? Analysis: To solve this problem, we will multiply 29.7 by 10.45. Answer: Rick can travel 310.365 miles with one full tank of gas.

Example 4: A member of the school track team ran for a total of 179.3 miles in practice over 61.5 days. About how many miles did he average per day? Analysis: We need to estimate the quotient of 179.3 and 61.5. Answer: He averaged about 3 miles per day.

Example 5: A store owner has 7.11 lbs. of candy. If she puts the candy into 9 jars, how much candy will each jar contain? Analysis: We will divide 7.11 lbs. by 9 to solve this problem. Answer: Each jar will contain 0.79 lbs. of candy.

Example 6: Paul will pay for his new car in 36 monthly payments. If his car loan is for $19,061, then how much will Paul pay each month? Round your answer to the nearest cent. Analysis: To solve this problem, we will divide $19,061.00 by 36, then round the quotient to the nearest cent (hundredth). Answer: Paul will make 36 monthly payments of $529.47 each.

Example 7: What is the average speed in miles per hour of a car that travels 956.4 miles in 15.9 hours? Round your answer to the nearest tenth. Analysis: We will divide 956.4 by 15.9, then round the quotient to the nearest tenth. Step 1: 956.4 ÷ 15.9 = 60.15... Step 2: Rounding 60.15... to the nearest tenth gives 60.2. Answer: Rounded to the nearest tenth, the average speed of the car is 60.2 miles per hour.

Summary: In this lesson we learned how to solve word problems involving decimals. We used the following skills to solve these problems: 1. Estimating decimal products 2. Multiplying decimals by whole numbers 3. Multiplying decimals by decimals 4. Estimating decimal quotients 5. Dividing decimals by whole numbers 6. Rounding decimal quotients 7. Dividing decimals by decimals

Directions: Read each question below. You may use paper and pencil to help you solve these problems. Click once in an ANSWER BOX and type in your answer; then click ENTER. After you click ENTER, a message will appear in the RESULTS BOX to indicate whether your answer is correct or incorrect. To start over, click CLEAR.

1. Estimate the amount of money you need to pay for a tank of gas if one gallon costs $3.04 and the tank holds 11.9 gallons.
2. The sticker on Dean's new car states that the car averages 32.6 miles per gallon. If the fuel tank holds 12.3 gallons, then how far can Dean travel on one full tank of gas?
3. Larry worked 15 days for a total of 116.25 hours. How many hours did he average per day?
4. Six cases of paper cost $159.98. How much does one case cost? Round your answer to the nearest cent.
5.
There are 2.54 centimeters in one inch. How many inches are there in 51.78 centimeters? Round your answer to the nearest thousandth.

This lesson is by Gisele Glosser. You can find me on Google.
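For Example 4 above, the estimation step can be made explicit; this is one plausible choice of compatible numbers (the particular rounding is not spelled out in the lesson, but it matches the stated answer):

$$179.3 \div 61.5 \;\approx\; 180 \div 60 \;=\; 3 \ \text{miles per day.}$$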
{"url":"http://mathgoodies.com/lessons/decimals_part2/solve_more_problems.html","timestamp":"2014-04-21T07:17:19Z","content_type":null,"content_length":"41967","record_id":"<urn:uuid:4751a871-3f98-4fb2-be62-27d73f274a8e>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
Yonkers Algebra Tutor

...I am currently a Spanish major with a concentration in Linguistics. I have tutored students in both ESL and Spanish using phonetics. I had the students create sounds; with the younger students I had them make faces, and with the older students I showed them the position of the tongue and the shape of the lips. 13 Subjects: including algebra 2, algebra 1, Spanish, English

...I work with students to develop a custom study plan that attacks their weaknesses and enhances their strengths. Students who hone these techniques over considerable practice have had great success! I have developed techniques that have helped students raise their scores several hundred points! 34 Subjects: including algebra 1, algebra 2, calculus, writing

...Although daunting sometimes, I think you can beat them simply by taking on a positive attitude and by practicing. I have tutored math for a couple of semesters back in college, and I consider myself qualified to tutor math, computer science, and the logic portion of the LSAT (a favorite). I al... 9 Subjects: including algebra 2, algebra 1, precalculus, logic

...Working with each student, I use an individual approach based on the student's personality and background. The progress is guaranteed! I am a native Russian speaker. My overall grade in Russian and Russian literature in high school was 5 (the highest possible grade), and I received a Gold Medal for outstanding academic performance. 24 Subjects: including algebra 1, algebra 2, physics, calculus

...Prior to teaching I was a financial analyst with a major corporation. I have an MS in Education and an MBA in Accounting and Finance. As a tutor, I take a personal approach when working with my students. 8 Subjects: including algebra 2, algebra 1, geometry, SAT math
{"url":"http://www.purplemath.com/yonkers_algebra_tutors.php","timestamp":"2014-04-21T12:44:19Z","content_type":null,"content_length":"23682","record_id":"<urn:uuid:988fbceb-b21e-421e-9065-226759b37d6e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
[Haskell-cafe] phantom types
TP paratribulations at free.fr
Fri Aug 17 18:27:02 CEST 2012

First, thanks for your answer.

On Friday, August 17, 2012 15:31:32 you wrote:
> So if we define eval the way it is defined in the example, the
> compiler cannot infer that the type of (I n) is Expr Int, even though
> it knows that n's type is Int.

I think that my problem came from the fact that I have misunderstood type inference. We have seen that the function eval:

eval :: Expr a -> a
eval (I n) = n

yields a compilation error:

Couldn't match type `a' with `Int'
  `a' is a rigid type variable bound by
  the type signature for eval :: Expr a -> a

A somewhat similar error is found in a Stack Overflow question:

test :: Show s => s
test = "asdasd"

yields a compilation error:

Could not deduce (s ~ [Char])
  from the context (Show s)
  bound by the type signature for test :: Show s => s
  at Phantom.hs:40:1-15
  `s' is a rigid type variable bound by
  the type signature for test :: Show s => s

Both errors contain the expression "rigid type variable". The explanation in the Stack Overflow page made me understand my error: test :: Show s => s means "for any type s which is an instance of Show, test is a value of that type s". Something like test :: Num a => a; test = 42 works because 42 can be a value of type Int or Integer or Float or anything else that is an instance of Num. However, "asdasd" can't be an Int or anything else that is an instance of Show - it can only ever be a String. As a consequence, it does not match the type Show s => s. The compiler does not say: «s is of type String because the return type of test is a String».

Similarly, in our case, «eval :: Expr a -> a» means «for any type a, eval takes a value of type «Expr a» as input, and outputs a value of type a». Analogously to the above case, the compiler does not say «a is of type Int because n is of type Int». The problem here is that (I n) does not allow us to know the type of a. It may be of type Expr String, as you have shown:

*Main> let expr = I 5 :: Expr String
*Main> expr
I 5
*Main> :t expr
expr :: Expr String

So we may have anything for «a» in the «Expr a» input type of eval. This multiplicity of values for «a» cannot match the output type of the equation «eval (I n) = n», which is always an Int. Thus we get an error. Am I correct?

More information about the Haskell-Cafe mailing list
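For completeness, the standard way to make this very equation typecheck is to replace the phantom type by a GADT, so that matching on a constructor refines the type variable. The thread only shows the constructor I, so the constructors B and Add below are illustrative assumptions; this is a minimal sketch, not the original poster's definition:

{-# LANGUAGE GADTs #-}

-- With a GADT, the constructor I carries the evidence that a ~ Int,
-- so the equation "eval (I n) = n" is accepted.
data Expr a where
  I   :: Int  -> Expr Int
  B   :: Bool -> Expr Bool
  Add :: Expr Int -> Expr Int -> Expr Int

eval :: Expr a -> a
eval (I n)     = n                -- matching I refines a to Int
eval (B b)     = b                -- matching B refines a to Bool
eval (Add x y) = eval x + eval y  -- both operands are known to be Expr Int

Under this definition, «I 5 :: Expr String» no longer typechecks, which rules out exactly the problematic case discussed above.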
{"url":"http://www.haskell.org/pipermail/haskell-cafe/2012-August/102948.html","timestamp":"2014-04-19T09:28:08Z","content_type":null,"content_length":"5162","record_id":"<urn:uuid:92a179a8-680b-4ae2-9fdd-91ddafb0494f>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
Venn Diagram Problem August 4th 2011, 10:45 AM #1 Aug 2011 Venn Diagram Problem Hi Everyone, Sorry if this is in the wrong place, I'm not very techno-savvy. Basically I have a small question in some maths work that has absolutely stumped me: "A survey of 80 ICT users established that: all of them used at least one of Word, Excel and Access. 10 used all three packages, 34 used Word and Access, 25 used Word and Excel, 59 used Word, 35 used Excel, 52 used Access. Using a Venn diagram, or otherwise, calculate the number of ICT users in the survey who: used Word and Access but not Excel; used Word and Excel but not Access; used only Word; used only Excel; used only Access; used Word or Access." Every single time I create the Venn Diagram, I end up with a total of 77 people. As it says everyone uses at least one of the packages so this can't be right. Any help is much appreciated! re: Venn Diagram Problem Hi Everyone, Sorry if this is in the wrong place, I'm not very techno-savvy. Basically I have a small question in some maths work that has absolutely stumped me: "A survey of 80 ICT users established that: all of them used at least one of Word, Excel and Access. 10 used all three packages, 34 used Word and Access, 25 used Word and Excel, 59 used Word, 35 used Excel, 52 used Access. Using a Venn diagram, or otherwise, calculate the number of ICT users in the survey who: used Word and Access but not Excel; used Word and Excel but not Access; used only Word; used only Excel; used only Access; used Word or Access." Every single time I create the Venn Diagram, I end up with a total of 77 people. As it says everyone uses at least one of the packages so this can't be right. Any help is much appreciated! Use the second formula here, re: Venn Diagram Problem Hello there, thank you for your quick response! I am using the sieve principle but still coming up with 77 :-( I'm only in my first year of Sixth Form so I have only briefly studied this and am finding it extremely frustrating! I don't understand where I'm going wrong, do you think the question is incorrect? Thanks again. August 4th 2011, 10:49 AM #2 August 4th 2011, 11:26 AM #3 Aug 2011
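For reference, a worked sketch of the computation the second post points at (all numbers come from the problem statement; the Excel-and-Access overlap is the one figure that must be solved for first):

$$|W\cup E\cup A| = |W|+|E|+|A| - |W\cap E| - |W\cap A| - |E\cap A| + |W\cap E\cap A|$$
$$80 \;=\; 59+35+52-25-34-|E\cap A|+10 \;=\; 97-|E\cap A| \quad\Longrightarrow\quad |E\cap A| = 17.$$

The seven regions then come out as: Word only 10, Excel only 3, Access only 11, Word-and-Excel only 15, Word-and-Access only 24, Excel-and-Access only 7, all three 10 — totalling 80. Note that "Word or Access" is $59+52-34=77$, and a diagram that omits the three Excel-only users (recoverable only via $|E\cap A|$) also sums to 77, which may be the source of the discrepancy.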
{"url":"http://mathhelpforum.com/advanced-math-topics/185603-venn-diagram-problem.html","timestamp":"2014-04-17T08:04:35Z","content_type":null,"content_length":"36855","record_id":"<urn:uuid:fef3143f-ead8-4f39-ae2d-1a80a42252fc>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
Generalized function
From Encyclopedia of Mathematics

A mathematical concept generalizing the classical concept of a function. The need for such a generalization arises in many problems in engineering, physics and mathematics. The concept of a generalized function makes it possible to express in a mathematically-correct form such idealized concepts as the density of a material point, a point charge or a point dipole, the (space) density of a simple or double layer, the intensity of an instantaneous source, etc. On the other hand, the concept of a generalized function reflects the fact that in reality a physical quantity cannot be measured at a point; only its mean values over sufficiently small neighbourhoods of a given point can be measured. Thus, the technique of generalized functions serves as a convenient and adequate apparatus for describing the distributions of various physical quantities. Hence generalized functions are also called distributions.

Generalized functions were first introduced at the end of the 1920s by P.A.M. Dirac (see [1]) in his research on quantum mechanics, in which he made systematic use of the concept of the delta-function (cf. Delta-function). The foundations of the mathematical theory of generalized functions were laid by S.L. Sobolev [2] in 1936 by solving the Cauchy problem for hyperbolic equations, while in the 1950s L. Schwartz (see [3]) gave a systematic account of the theory of generalized functions and indicated many applications. The theory was then intensively developed by many mathematicians and theoretical physicists, mainly in connection with the needs of theoretical and mathematical physics and the theory of differential equations (see [4]–[7]). The theory of generalized functions has made great advances, has numerous applications, and is extensively used in mathematics, physics and engineering.

Formally, a generalized function is a continuous linear functional on some vector space of sufficiently "good" (test) functions; a typical choice is $D(\mathbb{R}^n)$, the space of infinitely-differentiable functions of compact support. An example of a test function in $D(\mathbb{R}^n)$ is the "bump" function $\varphi(x)=\exp\left(-1/(1-|x|^2)\right)$ for $|x|<1$ and $\varphi(x)=0$ for $|x|\ge 1$. The space of generalized functions $D'$ is the space of continuous linear functionals on $D$; the value of $f\in D'$ on a test function $\varphi$ is written $(f,\varphi)$.

The simplest examples of generalized functions are those generated by locally integrable functions on $\mathbb{R}^n$:

$$(f,\varphi)=\int f(x)\,\varphi(x)\,dx,\qquad \varphi\in D. \tag{2}$$

Generalized functions definable by (2) in terms of locally integrable functions are called regular; all other generalized functions are called singular. An example of a singular generalized function is the Dirac delta-function, $(\delta,\varphi)=\varphi(0)$. It describes the density of a unit mass concentrated at the point $x=0$ (see [8]).

In general, a generalized function need not have a value at an individual point. Nonetheless, one speaks of a generalized function coinciding with a locally integrable function on an open set: $f$ coincides with $g$ on an open set $O$ if $(f,\varphi)=\int g(x)\varphi(x)\,dx$ for all test functions $\varphi$ with support in $O$; in particular, $f$ vanishes on $O$ if $(f,\varphi)=0$ for all such $\varphi$ (cf. Support of a generalized function). The following theorem on piecewise glueing of generalized functions holds: if generalized functions are given on the sets of an open cover and agree on the pairwise intersections, then there is a unique generalized function on the union coinciding with each of them on its own set.

Examples of generalized functions. 1) The Dirac delta-function described above. 2) The generalized function $P\frac{1}{x}$, defined by $\left(P\frac{1}{x},\varphi\right)=\mathrm{v.p.}\int \frac{\varphi(x)}{x}\,dx$, is called the finite part, or principal value, of the integral of $1/x$. 3) The surface delta-function $\delta_S$ (the density of a simple layer on a surface $S$): $(\delta_S,\varphi)=\int_S \varphi\,dS$.

Linear operations on generalized functions are introduced as extensions of the corresponding operations on the test functions.

Change of variables. Since a non-singular linear change of variables acts linearly and continuously from $D$ into $D$, one may set

$$(f(Ay+b),\varphi)=\frac{1}{|\det A|}\,\big(f,\varphi(A^{-1}(x-b))\big). \tag{3}$$

Formula (3) enables one to define generalized functions that are translation invariant, spherically symmetric, centrally symmetric, homogeneous, periodic, Lorentz invariant, etc.

Multiplication. Let the function $a$ be infinitely differentiable; the product $af$ is defined by $(af,\varphi)=(f,a\varphi)$. It turns out that this extends the ordinary multiplication of functions (cf. Generalized functions, product of). However, this product operation cannot be extended to arbitrary generalized functions in such a way that it is associative and commutative.
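As a short worked illustration of how the delta-function arises as a weak limit of regular generalized functions (a standard computation, added here for concreteness): let $f_\varepsilon$ be the regular generalized function generated by $\frac{1}{2\varepsilon}\mathbf{1}_{[-\varepsilon,\varepsilon]}$. Then for every test function $\varphi$,

$$(f_\varepsilon,\varphi)=\frac{1}{2\varepsilon}\int_{-\varepsilon}^{\varepsilon}\varphi(x)\,dx\;\longrightarrow\;\varphi(0)=(\delta,\varphi)\qquad(\varepsilon\to 0)$$

by continuity of $\varphi$, so $f_\varepsilon\to\delta$ in $D'$ even though $\delta$ itself is singular.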
In fact, if this could be done, then one obtains a contradiction already from the classical example

$$0=\left(\delta(x)\cdot x\right)\cdot P\frac{1}{x}\;\neq\;\delta(x)\cdot\left(x\cdot P\frac{1}{x}\right)=\delta(x).$$

Such a product can be defined for certain classes of generalized functions, but it may fail to be uniquely defined.

Differentiation. Since the operation of differentiation $\varphi\mapsto\partial^\alpha\varphi$ of order $|\alpha|$ is linear and continuous from $D$ into $D$, the derivative of a generalized function is defined by

$$(\partial^\alpha f,\varphi)=(-1)^{|\alpha|}\,(f,\partial^\alpha\varphi).$$

The following properties hold: the operation of differentiation is linear and continuous from $D'$ into $D'$, and every generalized function is infinitely differentiable.

12) The normal derivative of the density of a simple layer on an orientable surface yields the density of a double layer on that surface. 16) Every trigonometric series $\sum_k a_k e^{ikx}$ whose coefficients grow at most polynomially converges in $D'$. Cf. also Generalized function, derivative of a.

Direct products. The direct product of generalized functions $f\in D'(\mathbb{R}^n)$ and $g\in D'(\mathbb{R}^m)$ is the generalized function $f\times g\in D'(\mathbb{R}^{n+m})$ defined by $(f\times g,\varphi)=\big(f(x),(g(y),\varphi(x,y))\big)$; the direct product is commutative and associative.

Convolution. If at least one of the generalized functions $f,g$ has compact support, their convolution $f\ast g$ is defined by $(f\ast g,\varphi)=\big(f(x)\times g(y),\varphi(x+y)\big)$; it is commutative, and $\delta$ acts as the unit: $f\ast\delta=f$. The classical example

$$(1\ast\delta')\ast\theta=0\ast\theta=0,\qquad 1\ast(\delta'\ast\theta)=1\ast\delta=1$$

(where $\theta$ is the Heaviside function) shows that convolution is a non-associative operation. However, associative (and commutative) convolution algebras exist, for instance the algebra of generalized functions with supports in a fixed closed convex acute cone.

A generalized function $E$ is called a fundamental solution of a differential operator $L$ if $LE=\delta$. If a fundamental solution $E$ exists, then the equation $Lu=f$ has the solution $u=E\ast f$ whenever this convolution is defined. 20) The kernels of the operators of fractional differentiation and integration furnish further examples of such convolutions.

Fourier transformation. It is defined on the class $S'$ of tempered generalized functions (the continuous linear functionals on the space $S$ of rapidly decreasing test functions) by $(F[f],\varphi)=(f,F[\varphi])$, $\varphi\in S$. Every generalized function in $S'$ has a Fourier transform, and on integrable functions $F$ coincides with the classical Fourier transform. Since the operation $F$ is linear and continuous from $S$ into $S$, the transform $F$ is a linear and continuous one-to-one mapping of $S'$ onto itself, under which differentiation and multiplication by polynomials are exchanged. Cf. also Fourier transform of a generalized function.

Laplace transformation. For generalized functions whose supports lie in a fixed cone a Laplace transform can be defined; it establishes a one-to-one correspondence between such generalized functions and certain algebras of functions holomorphic in a tube domain, in which multiplication corresponds to convolution.

[1] P.A.M. Dirac, "The principles of quantum mechanics", Clarendon Press (1947)
[2] S.L. Sobolev, "Méthode nouvelle à résoudre le problème de Cauchy pour les équations linéaires hyperboliques normales", Mat. Sb., 1 (1936) pp. 39–72
[3] L. Schwartz, "Théorie des distributions", 1–2, Hermann (1950–1951)
[4] N.N. Bogolyubov, A.A. Logunov, I.T. Todorov, "Introduction to axiomatic quantum field theory", Benjamin (1975) (Translated from Russian)
[5] I.M. Gel'fand, G.E. Shilov, "Generalized functions", 1–5, Acad. Press (1966–1968) (Translated from Russian)
[6] V.S. Vladimirov, "Equations of mathematical physics", MIR (1984) (Translated from Russian)
[7] V.S. Vladimirov, "Generalized functions in mathematical physics", MIR (1979) (Translated from Russian)
[8] P. Antosik, J. Mikusiński, R. Sikorski, "Theory of distributions. The sequential approach", Elsevier (1973)
[a1] K. Yosida, "Functional analysis", Springer (1980) pp. Chapt. 8, Sect. 4; 5
[a2] D.S. Jones, "The theory of generalized functions", Cambridge Univ. Press (1982)
[a3] W. Rudin, "Functional analysis", McGraw-Hill (1974)
[a4] L.V. Hörmander, "The analysis of linear partial differential operators", 1, Springer (1983)

How to Cite This Entry: Generalized function. Encyclopedia of Mathematics.
URL: http://www.encyclopediaofmath.org/index.php?title=Generalized_function&oldid=28200 This article was adapted from an original article by V.S. Vladimirov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
{"url":"http://www.encyclopediaofmath.org/index.php/Generalized_function","timestamp":"2014-04-19T04:21:42Z","content_type":null,"content_length":"86970","record_id":"<urn:uuid:e5db843b-7d18-49d2-b29f-c903b7127dc2>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: Calculating a simple integral
Replies: 10    Last Post: Jun 14, 2013 4:50 AM

Re: Calculating a simple integral
Posted: Jun 14, 2013 4:48 AM

Very nice. By the way, this method (differentiating with respect to a parameter) was often used by the physicist Richard Feynman. In fact, he once made a bet (see Needham's book "Visual Complex Analysis") that he could use this method to solve any integral that other people can do by contour integration. He lost his bet but this seems to be an example of a situation where contour integration does not work (or at least I can't see how to make it work). In fact, I myself thought about using Feynman's method to obtain an elementary expression for this integral, but I forgot to use partial fractions first, so never got anywhere. However, I doubt that this is how the "other program" does it. I would be interested in seeing what expression the "other program" gives for the indefinite integral. In my opinion, one of the weaknesses of Mathematica's integration is that it does not allow one to choose the method of indefinite integration. Mathematica has an implementation of the Risch algorithm (which always returns an indefinite integral in terms of elementary functions, if such an answer exists) but it often returns answers in terms of special functions. If properly implemented these can have advantages over "elementary" solutions, but the fact that Mathematica does not allow us to choose which method to use means that we can't tell if an elementary antiderivative exists or not. This is also the situation in this case. So, concerning how the "other program" gets the answer, there seem to be two most likely possibilities. One is that it also computes a primitive function (indefinite integral) in terms of special functions (MeijerG?) but gets the limits right. The other possibility is that it uses the Risch algorithm to get an elementary anti-derivative.

Andrzej Kozlowski

On 13 Jun 2013, at 08:38, Dr. Wolfgang Hintze <weh@snafu.de> wrote:
> On 11 Jun., 08:23, Andrzej Kozlowski <akozlow...@gmail.com> wrote:
>> No, it's similar to:
>> Integrate[(1 - Cos[x])/(x^2*(x^2 - 4*Pi^2)^2), {x, -Infinity, Infinity}]
>> 3/(32*Pi^3)
>> On 10 Jun 2013, at 10:11, djmpark <djmp...@comcast.net> wrote:
>>> Doesn't this have a singularity at 2 Pi that produces non-convergence? It's
>>> similar to:
>>> Integrate[1/x^2, {x, \[Epsilon], \[Infinity]}, Assumptions -> \[Epsilon] > 0]
>>> 1/\[Epsilon]
>>> That diverges as epsilon -> 0.
>>> Are you sure you copied the integral correctly?
>>> David Park
>>> djmp...@comcast.net
>>> http://home.comcast.net/~djmpark/index.html
>>> From: dsmirno...@gmail.com [mailto:dsmirno...@gmail.com]
>>> If there is a way to calculate with Mathematica the following integral:
>>> in = -((-1 + Cos[kz])/(kz^2 (kr^2 + kz^2)^2 (kz^2 - 4 \[Pi]^2)^2))
>>> Integrate[in, {kz, -Infinity, Infinity}, Assumptions -> kr > 0]
>>> Another system calculates the same integral instantly. :)
>>> Thanks for any suggestions.
> Sorry, but I made indeed a calculation error!
> Correcting it, the partial fraction decomposition leads to Dmitry's
> result.
> Furthermore, calculating first the indefinite integral and then taking
> limits leads to a false result.
> Direct calculation of the integral leads to MeijerG functions which are
> useless because we cannot enter any numerical value.
> So, rather than providing the correct result, Mathematica comes up with
> different false results depending on the method used, and we cannot tell
> which one is correct without "research" work.
> Summarizing, I need to restate my criticism of Mathematica with
> respect to integration (I'm using version 8).
> Regards,
> Wolfgang
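A quick check of the convergence worry raised earlier in the thread (a verification sketch, not taken from the posts themselves): the apparent singularities of the integrand at $k_z=0$ and $k_z=\pm 2\pi$ are removable, because the numerator $1-\cos k_z$ has double zeros exactly where the denominator does. Near $k_z=2\pi$, for instance,

$$1-\cos k_z \;=\; 1-\cos(k_z-2\pi)\;\sim\;\tfrac12\,(k_z-2\pi)^2,
\qquad
(k_z^2-4\pi^2)^2 \;=\; (k_z-2\pi)^2\,(k_z+2\pi)^2,$$

so the ratio stays bounded, and the same expansion at $k_z=0$ cancels the $k_z^2$ factor. This is why the integral converges even though the $1/x^2$ comparison suggested otherwise.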
{"url":"http://mathforum.org/kb/message.jspa?messageID=9135988","timestamp":"2014-04-20T12:08:49Z","content_type":null,"content_length":"29199","record_id":"<urn:uuid:bac728f0-dfe7-408b-8cf9-efc774c5a0fc>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
The Order of Operations Explained: Exponents

This is the 3rd in the series The Order of Operations Explained. For the other articles in this series, click here to visit the introduction.

Exponents are the second in the list for the Order of Operations (OoO). When we want to find the result of 3^2 x (2 + 7), we have no problem. We know to do parentheses and then exponents, then multiplication. When you teach algebra, you'll have to teach some distributing of exponents. But that's still okay. And the rules of exponents are pretty straight up. So why a whole article on exponents? In the order of operations, the "Exponents" rule represents a bunch more than just superscripts or tiny numbers flying up and to the right of things.

Roots are exponents, too! Not the ones from trees, but things like square roots and cube roots. Consider $\sqrt{9} + 2$. You do the square root first because it qualifies as an "exponent." But if you had $\sqrt{9+2}$, the 9 + 2 is under the radical sign (the square root sign) so it's bound together in the "Parentheses" rule. This one isn't that hard with arithmetic, but when you come to algebra and start "undoing" these things - it's important to remember that roots fall into this category.

Fractional exponents are exponents. This one seems pretty "duh" so it's easy to see how they fall into the "E" of the order of operations. But what are fractional exponents really? $x^{1/2} = \sqrt{x}$, and more generally $x^{m/n} = \sqrt[n]{x^m}$. So fractional exponents are the same as roots. Note that some fractional exponents are roots and "plain" exponents all mixed up. Like this one: $x^{3/2} = (\sqrt{x})^3$. This is a big fat full concept that needs a little more explaining. So I'll write more on these in another article.

Logs fall under the E. As my algebra and computer math teacher in high school, Mrs. Kelley, used to tell us - logarithms are exponents. It took me a long time to figure out what the heck she meant. But when I did, I thought it was brilliant. $\log_3 9 = 2$ is a true statement. Let's analyze it. Based on the definition of logarithms, this means that 3^2 = 9. Which we know is true. Notice who the exponent is in this: 3^2 = 9. 2 is the exponent. And 2 is the same as $\log_3 9$ because the equals sign in $\log_3 9 = 2$ means "is the same as." So the logarithm is the exponent 2. Still with me? Either way, it's okay. It's a weird concept that I can go into detail on in a video soon. The thing to remember here is that logarithms fall into the "Exponents" rule of the order of operations. So if you have $\log_3 9 + 7$ you have to do the $\log_3 9$ first and then add the 7 after.

Want more on exponents? In the meantime, you can check out more than everything you always wanted to know about exponents on the Wikipedia Exponents page. Rebecca Zook created a great video on logarithms. And check out this explanation and problems to work on fractional exponents. And let me know what you think. Did I miss something?

Related articles

This post may contain affiliate links. When you use them, you support us so we can continue to provide free content!

2 Responses to The Order of Operations Explained: Exponents

1. Thanks so much for linking to me and for the shoutout! Kate Nowak also had a great post up about how to introduce logs without freaking students out, if you're interested: http:// It's a great approach and I've used it successfully with my own tutoring students!
□ Thanks for the link, Rebecca! Kate's suggestion is awesome. I'll start using it. I even tweeted it! Another thing I sometimes do is teach the International Log Song - totally unrelated to math logs, but fun to sing. Totally loosens the students up!
http://www.youtube.com/watch?v=zkNj1WXqDmw Leave a reply
{"url":"http://mathfour.com/algebra/the-order-of-operations-explained-exponents","timestamp":"2014-04-17T00:54:23Z","content_type":null,"content_length":"35011","record_id":"<urn:uuid:3e039d81-b316-41ca-a713-ffbf2f05b004>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
Completeness of Borel measure

Let $X$ be a compact Hausdorff space and $\mu$ a finite Borel measure without atoms which is outer regular with respect to open sets and inner regular with respect to compact sets. Can such a measure be complete?

fa.functional-analysis gn.general-topology measure-theory

No, it is not possible for $\mu$ to be complete.

1. There exists a closed subset $K$ of $X$ with $\mu(K)=0$ and a continuous onto map $f\colon K\to2^\omega$.

2. With $K,f$ as above, if $A\subseteq 2^\omega$ is any set not in the universal completion of the Borel sigma-algebra on $2^\omega$, then $f^{-1}(A)$ is not Borel measurable.

In particular, taking $A$ to be something like a Vitali set (e.g., let $\sim$ be the equivalence relation on Cantor space $2^\omega$ where $x\sim y$ iff $x_i=y_i$ for all but finitely many $i$ and choose $A$ such that it contains one element from each equivalence class) then $f^{-1}(A)$ is a $\mu$-null set which is not Borel. This does require the Axiom of Choice.

Proof of 1: Start by letting $S$ be the support of $\mu$. That is, $S$ is the smallest closed subset of $X$ for which $\mu(S)=\mu(X)$. The existence of $S$ follows from compactness of $X$ and regularity of $\mu$; if $S$ is taken to be the intersection of all closed sets of full measure then, for any open $U$ containing $S$, compactness implies that $X$ is the union of $U$ and finitely many open $\mu$-null sets, so $\mu(U)=\mu(X)$. Outer regularity gives $\mu(S)=\mu(X)$ as required.

As $X$ is not atomic, there exist distinct points $x\not=y$ in $S$ and, choosing disjoint closed neighbourhoods $K_0,K_1$ of $x,y$, we have $\mu(K_0)\gt0$ and $\mu(K_1)\gt0$. Furthermore, we have $\mu(\{x\})=\mu(\{y\})=0$ so, by outer regularity, $\mu(K_0)$ and $\mu(K_1)$ can be made as small as possible. Applying this process inductively gives nonempty compact sets $K_{i_1,i_2,\ldots,i_n}$ for $(i_1,\ldots,i_n)\in2^n$ of positive measure such that $K_{i_1,\ldots,i_n}\cap K_{j_1,\ldots,j_n}=\emptyset$ whenever $(i_1,\ldots,i_n)\not=(j_1,\ldots,j_n)$ and $K_{i_1,\ldots,i_n}\subset K_{i_1,\ldots,i_{n-1}}$. Furthermore, they can be chosen such that $\mu(K_{i_1,\ldots,i_n})\lt4^{-n}$. We can now define $K_x=\bigcap_{n\ge1}K_{x_1,\ldots,x_n}$ for each $x\in2^\omega$, which is nonempty by compactness of $X$ with zero measure (by countable additivity of $\mu$), and $K\equiv\bigcup_{x\in2^\omega}K_x$ is closed. Defining $f\colon K\to2^\omega$ by setting $f(a)=x$ for $a\in K_x$ satisfies the required properties. QED

Proof of 2: Suppose that $A\subseteq2^\omega$ is not in the universal completion of the Borel sigma-algebra. Then, there exists a finite Borel measure $\nu$ on $2^\omega$ such that $A$ is not in the completion of the Borel sigma-algebra with respect to $\nu$. The Hahn-Banach theorem gives a regular finite measure $\lambda$ on $X$ such that $\nu=f^\ast\circ\lambda$. If $f^{-1}(A)$ was in the Borel sigma-algebra on $X$ then, by regularity, there would exist sequences of compact sets $B_n\subseteq f^{-1}(A)$, $C_n\subseteq f^{-1}(A^c)$ with $\lambda(B_n)\to\lambda(f^{-1}(A))$ and $\lambda(C_n)\to\lambda(f^{-1}(A^c))$.
It follows that $f(B_n)\subseteq A$ and $f(C_n)\subseteq A^c$ are compact sets with $\nu(f(B_n))\to\nu(A)$ and $\nu(f(C_n))\to\nu(A^c)$, from which it follows that $A$ is in the completion of the Borel sigma-algebra with respect to $\nu$, contradicting the assumption. QED
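One step the proof of 1 leaves implicit is why the closed set $K$ itself is $\mu$-null (which is what makes $f^{-1}(A)$ a null set). Since $K\subseteq\bigcup_{(i_1,\ldots,i_n)\in 2^n}K_{i_1,\ldots,i_n}$ for every $n$, the chosen measure bounds give

$$\mu(K)\;\le\;\sum_{(i_1,\ldots,i_n)\in 2^{n}}\mu\!\left(K_{i_1,\ldots,i_n}\right)\;<\;2^{n}\cdot 4^{-n}\;=\;2^{-n}\;\longrightarrow\;0,$$

so $\mu(K)=0$.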
{"url":"http://mathoverflow.net/questions/87983/completeness-of-borel-measure?answertab=oldest","timestamp":"2014-04-18T03:04:31Z","content_type":null,"content_length":"64801","record_id":"<urn:uuid:3ab8045a-1ae1-49b2-87a4-161866da3f81>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
European Physical Journal C

European Physical Journal C (EUR PHYS J C). Publisher: Springer Verlag.

The European Physical Journal C – Particles and Fields. A merger of Il Nuovo Cimento A and Zeitschrift für Physik C. The physics of elementary particles lies at the frontier of our understanding of nature. The journal EPJ C publishes the most recent experimental and theoretical results obtained in this field. Experimental results come from the high energy physics laboratories such as CERN (Switzerland), DESY (Germany), SLAC and Fermilab (both USA) and KEK (Japan), with their accelerators and experimental facilities, and also from non-accelerator laboratories such as Kamioka (Japan), Gran Sasso (Italy) and others. Theoretical topics include studies and tests of the Standard Model, computer simulations of Quantum Chromodynamics, the search for the Higgs particle and supersymmetry, and the investigation of new ideas beyond the standard model.

Experimental Physics: e+e- experiments; lepton-nucleon scattering; hadron-hadron scattering; B physics; neutrino physics; non-accelerator experiments; high-energy nuclear reactions.

Theoretical Physics: the standard model (electroweak interactions and QCD); nonperturbative strong interactions; soft hadron physics; lattice field theory; high temperature QCD and heavy ion physics; beyond the standard model; astroparticle physics and cosmology; quantum field theory.

Other titles: European physical journal, Particles and fields; EPJ; Eur. phys. j. Material type: Document, Periodical, Internet resource. Document type: Internet Resource, Computer File, Journal / Magazine / Newspaper.

Publisher details:
• Pre-print: author can archive a pre-print version.
• Post-print: author can archive a post-print version.
• Conditions: author's own final version only can be archived; publisher's version/PDF cannot be used; on author's website or institutional repository; on funder's designated website/repository after 12 months at the funder's request or as a result of legal obligation; published source must be acknowledged; must link to publisher version; set phrase to accompany link to published version ("The original publication is available at www.springerlink.com"); articles in some journals can be made Open Access on payment of additional charge.

Publications in this journal:

ABSTRACT: We present a set of formulas to extract two second-order independent differential equations for the gluon and singlet distribution functions. Our results extend from the LO up to NNLO DGLAP evolution equations with respect to the hard-Pomeron behavior at low-x. In this approach, both singlet quarks and gluons have the same high-energy behavior at low-x. We solve the independent DGLAP evolution equations for the functions $F_{2}^{s}(x,Q^{2})$ and $G(x,Q^{2})$ as a function of their initial parameterization at the starting scale $Q_{0}^{2}$. The results not only give striking support to the hard-Pomeron description of the low-x behavior, but give a rather clean test of perturbative QCD, showing an increase of the gluon distribution and singlet structure functions as x decreases. We compared our numerical results with the published BDM (Block et al. Phys. Rev. D 77:094003 (2008)) gluon and singlet distributions, starting from their initial values at $Q_{0}^{2}=1\ \mathrm{GeV}^{2}$. European Physical Journal C 02/2014; 73(5).
ABSTRACT: We describe the electromagnetic field by the massless limit of a massive vector field in the presence of a Coulomb gauge fixing term. The gauge fixing term ensures that, in the massless limit, the longitudinal mode is removed from the spectrum and only the two transverse modes survive. The system, coupled to a classical conserved current, is quantized in the canonical formalism. The classical field configurations due to time-independent electric charges and currents are represented by coherent states of longitudinal and transverse photons, respectively. The occupation number in these states is finite. In particular, the number of longitudinal photons bound by an electric charge q is given by N=q^2/(16\pi\hbar). European Physical Journal C 10/2013; 73(12).

ABSTRACT: Numerical Stochastic Perturbation Theory was able to get three- (and even four-) loop results for finite Lattice QCD renormalization constants. More recently, a conceptual and technical framework has been devised to tame finite size effects, which had been reported to be significant for (logarithmically) divergent renormalization constants. In this work we present three-loop results for fermion bilinears in the Lattice QCD regularization defined by tree-level Symanzik improved gauge action and n_f=2 Wilson fermions. We discuss both finite and divergent renormalization constants in the RI'-MOM scheme. Since renormalization conditions are defined in the chiral limit, our results also apply to Twisted Mass QCD, for which non-perturbative computations of the same quantities are available. We emphasize the importance of carefully accounting for both finite lattice space and finite volume effects. In our opinion the latter have in general not attracted the attention they would deserve. European Physical Journal C 10/2013; 73(12).

ABSTRACT: We discuss the high temperature behavior of retarded thermal loops in static external fields. We employ an analytic continuation of the imaginary time formalism and use a spectral representation of the thermal amplitudes. We show that, to all orders, the leading contributions of static hard thermal loops can be directly obtained by evaluating them at zero external energies and momenta. European Physical Journal C 10/2013; 73(10).

ABSTRACT: We study the quasinormal modes of the massless scalar field of the Park black hole in Ho\v{r}ava gravity using the third order WKB approximation method, and found that the black hole is stable against these perturbations. We compare and discuss the results with those of the Schwarzschild-de Sitter black hole. Thermodynamic properties of the Park black hole are investigated and the thermodynamic behavior of the upper mass bound is also studied. European Physical Journal C 10/2013; 73(10).

ABSTRACT: A Cellular Automaton algorithm has been implemented in three dimensions for automated track reconstruction of neutrino interactions in a Liquid Argon Time Projection Chamber. We present details of the algorithm and characterise its performance on simulated data sets. European Physical Journal C 10/2013; 73(10).

ABSTRACT: Pilgrim dark energy is an interesting proposal which is based on the conjecture that phantom-like dark energy with strong enough repulsive force can prevent the formation of a black hole.
We investigate this conjecture by assuming the apparent and event horizons in a non-flat universe and we develop different cosmological parameters. We construct the corresponding equation of state parameter, which indicates that its present values lie in the phantom era of the universe for different ranges of μ (the pilgrim dark energy parameter) as well as ξ^2 (the interacting parameter). It is interesting to mention here that the pilgrim dark energy with event horizon yields a phantom region for all cases of ξ^2 with μ<0. We also develop the $\omega_{\varLambda}$–$\omega'_{\varLambda}$ plane and explore the thawing as well as freezing region and the ΛCDM limit for these models. The statefinders plane is also constructed, which shows the correspondence with different models such as quintessence and phantom dark energy, ΛCDM and Chaplygin gas. Finally, we investigate the validity of the generalized second law of thermodynamics with event horizon in a flat as well as non-flat universe. European Physical Journal C 10/2013; 73(10).

ABSTRACT: The scintillation light of liquid argon has been recorded wavelength and time resolved with very good statistics in a wavelength interval ranging from 118 nm through 970 nm. Three different ion beams, protons, sulfur ions and gold ions, were used to excite liquid argon. Only minor differences were observed in the wavelength-spectra obtained with the different incident particles. Light emission in the wavelength range of the third excimer continuum was found to be strongly suppressed in the liquid phase. In time-resolved measurements, the time structure of the scintillation light can be directly attributed to wavelength in our studies, as no wavelength shifter has been used. These measurements confirm that the singlet-to-triplet intensity ratio in the second excimer continuum range is a useful parameter for particle discrimination, which can also be employed in wavelength-integrated measurements as long as the sensitivity of the detector system does not rise steeply for wavelengths longer than 190 nm. Using our values for the singlet-to-triplet ratio down to low deposited energies, a discrimination threshold between incident protons and sulfur ions as low as ∼2.5 keV seems possible, which represents the principal limit for the discrimination of these two species in liquid argon. European Physical Journal C 10/2013; 73(10).

ABSTRACT: We investigate the generalized second law (GSL) and the constraints imposed by it for two types of Friedmann universes. The first one is the Friedmann universe with radiation and a positive cosmological constant, and the second one consists of non-relativistic matter and a positive cosmological constant. The time evolution of the event horizon entropy and the entropy of the contents within the horizon are studied by obtaining the Hubble parameter. It is shown that the GSL constrains the temperature of both the radiation and matter of the Friedmann universe. It is also shown that, even though the net entropy of the radiation (or matter) is decreasing at sufficiently large times as the universe expands, it exhibits an increase during the early times when the universe is decelerating. That is, the entropy of the radiation within the comoving volume is decreasing only when the universe is undergoing an accelerated expansion. European Physical Journal C 10/2013; 73(10).
ABSTRACT: By employing some modification to the normal NJL model, we discuss the Wigner solution of the quark gap equation at finite temperature and chemical potential when the current quark mass m is nonzero. The discovery of the coexistence of the Nambu solution and the Wigner solution at finite temperature and chemical potential beyond the chiral limit is of great importance in the study of the chiral phase transition of QCD. Using the pressure difference between the Nambu phase and the Wigner phase (or in other words, the bag constant) as an order parameter for the chiral phase transition, we draw a possible phase diagram based on our calculations. European Physical Journal C 10/2013; 73(10).

ABSTRACT: We analyze new-physics contributions to e^+e^- → W^+W^- at the TeV energy scale, employing an effective field theory framework. A complete basis of next-to-leading-order operators in the standard-model effective Lagrangian is used, both for the nonlinear and the linear realization of the electroweak sector. The elimination of redundant operators via equations-of-motion constraints is discussed in detail. Polarized cross sections for e^+e^- → W^+W^- (on-shell) are computed and the corrections to the standard-model results are given in an expansion for large $s/M^{2}_{W}$. The dominant relative corrections grow with s and can be fully expressed in terms of modified gauge-fermion couplings. These corrections are interpreted in the context of the Goldstone-boson equivalence theorem. Explicit new-physics models are considered to illustrate the generation and the potential size of the coefficients in the effective Lagrangian. Brief comments are made on the production of W^+W^- pairs at the LHC. European Physical Journal C 10/2013; 73(10).

ABSTRACT: We find exact energy eigenvalues and eigenfunctions of the quantum bouncer in the presence of the minimal length uncertainty and the maximal momentum. This form of Generalized (Gravitational) Uncertainty Principle (GUP) agrees with various theories of quantum gravity and predicts a minimal length uncertainty proportional to $\hbar\sqrt{\beta}$ and a maximal momentum proportional to $1/\sqrt{\beta}$, where $\beta$ is the deformation parameter. We also find the semiclassical energy spectrum and discuss the effects of this GUP on the transition rate of the ultra cold neutrons in gravitational spectrometers. Then, based on Nesvizhevsky's famous experiment, we obtain an upper bound on the dimensionless GUP parameter. European Physical Journal C 09/2013; 73(10).
{"url":"http://www.researchgate.net/journal/1434-6044_European_Physical_Journal_C4","timestamp":"2014-04-19T06:15:41Z","content_type":null,"content_length":"203082","record_id":"<urn:uuid:4d6ec4e9-6144-4935-a455-9e9330c134fb>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
Question: 3 lines intersect within a circle. What is the greatest number of separate, non-overlapping regions that can be formed inside the circle by the intersection of the lines? • one year ago
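For reference, a sketch of the standard counting argument (assuming the three chords pairwise intersect at distinct points inside the circle): each new chord in general position crosses all earlier chords, and every crossing adds one extra region, so $n$ lines give at most

$$R(n)\;=\;1+n+\binom{n}{2},\qquad R(3)\;=\;1+3+3\;=\;7$$

regions inside the circle.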
{"url":"http://openstudy.com/updates/5020fadfe4b0f2897dbf2777","timestamp":"2014-04-19T04:19:14Z","content_type":null,"content_length":"135982","record_id":"<urn:uuid:86400b9a-b178-45c8-9a36-7cc94122f3b9>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
How many years is 60 million seconds?
An order of magnitude is a scale of numbers with a fixed ratio, usually ten, with quantities often rounded to the nearest power of ten. For example: the United States has the world's highest incarceration rate; it has an order of magnitude more imprisoned human beings than Norway.
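The page never actually answers its own question, so here is the arithmetic (using a 365.25-day year): one year is 60 × 60 × 24 × 365.25 = 31,557,600 seconds, so 60,000,000 s ÷ 31,557,600 s/yr ≈ 1.9 years.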
{"url":"http://answerparty.com/question/answer/how-many-years-is-60-million-seconds","timestamp":"2014-04-21T02:35:03Z","content_type":null,"content_length":"25566","record_id":"<urn:uuid:e82c968e-b65c-46da-a179-cf282bd6ebdb>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
Interview with Prof. Lynn Arthur Steen - Part II
This is the second and final part of my online interview with Prof. Steen. Part I is posted here. It includes useful background information about the new Algebra II End-of-Course Exam, its purposes, its content and its impact on districts that use a 3-year integrated math sequence. Prof. Steen also courageously tackles issues as diverse as proficiency with fractions, the role of factoring in the 21st century, AP Calculus as a model for a national curriculum, the linear mastery model of learning mathematics, gifted education, the critical factors needed to elevate mathematics education in our country, and attempting to resolve the Math Wars. He ends with advice for mathematics educators, restating the core message of the NCTM Standards.
Again I want to express my gratitude to Prof. Steen for taking the time to reply thoughtfully to some difficult and controversial questions regarding mathematics education. I'm hoping that this forum serves as a springboard for other bloggers to have further conversations with educational leaders and, perhaps, bring opposing parties together at an 'online roundtable.' Regardless of personal ideologies, I hope those who have visited or will visit will find this interview as thought-provoking as I did. One thing is for certain. Both Prof. Steen and I have a new-found appreciation for how difficult it will be to resolve the major problems in education, mathematics education in particular.
Again, I invite readers to post comments and keep the discussion alive. I would also be interested in reactions to the format of this interview. Suggestions for improvement? Perhaps make it more give-and-take?
Math Notations Interview (continued) Lynn Arthur Steen, St. Olaf College, September, 2007
6. Many secondary teachers decry the lack of proficiency with fraction skills and fraction concepts demonstrated by their students. It's always easy for each group of teachers from graduate school on down to place blame on prior grades. Do you believe that Achieve has addressed this problem adequately with their enumeration of K-8 mathematics expectations in their 2002 publication, Foundations for Success?
The expectations summarized in Foundations for Success certainly subsume the arithmetic of fractions and the relationships among fractions, decimals, proportions, and percents, but they do so quite concisely. Details are unfolded in Achieve's K-8 Number Benchmarks, especially throughout grades 4-6. However, no one associated with this project was so naïve as to imagine that the mere inclusion of an extensive discussion of fractions in a report will adequately address the problem of students entering high school—or college, for that matter—without understanding fractions. Setting out clear expectations is only a first step.
7. What is your position on the role of technology, calculators in particular, in K-4, 5-8 and 9-12 mathematics classrooms?
My view is that students should learn to use technology wisely, carefully, and powerfully. By wisely, I mean that they make conscious and appropriate decisions about when to use calculators or computers, and when not to. By carefully, I mean that they think enough about the problem they are working on to recognize when a calculator or computer result is beyond the realm of plausibility. By powerfully, I mean that they make full use of the most powerful tools available in order to prepare rich and accurate analyses.
In this age, mathematical competence requires competence to use computer tools, so the use of technology must be an explicit goal of mathematics education. It no more follows from students' widespread misuse of calculators that calculators should be banned than from students' widespread misunderstanding of fractions that fractions should be avoided. Use of technology is as important as use of fractions, and both need to be taught and tested.
8. I have stated repeatedly on this blog that the Advanced Placement Calculus syllabus from which I taught for over 30 years is essentially a national curriculum for calculus and that I strongly endorse it as such. Do you agree with this characterization? Do you see projects such as ADP moving in a similar direction, working closely with states to achieve a common set of mathematics topics K-12 that must be covered at each grade level?
As AP courses go, AP calculus is one of the best. By intent of its sponsor (the College Board), it follows rather than leads national trends. For example, the most recent revision took place a few years after (not before) implementation of pilot projects supported by NSF's calculus reform program. The momentum for change was led by college faculty, not by the College Board. ADP has a more ambitious goal, namely to lead the nation's K-12 schools to higher standards. In contrast to AP calculus, whose syllabus is in the mainstream of college calculus courses, the expectations produced by MAP and ADP are on (and sometimes beyond) the leading edge of K-12 mathematics programs.
9. The types of problems Singaporean children, for example, are tackling seem more complex than those tackled by their grade counterparts in the U.S. Do you believe that most mathematics curricula in the US, particularly in the area of problem-solving, are as challenging as those in other high-performing nations?
U.S. education clearly lags behind many other nations. This is not just a matter of curriculum but of teacher preparation, time in school, parental expectations, community environment, and perhaps funding. Some other nations (e.g., Japan) decided that their curricular expectations were too high and have reduced them. Others (e.g., England) have seen student performance fall. As I implied in my answer to the previous question, the MAP and ADP expectations, being calibrated to international standards, are well beyond what can be achieved at this time by most districts for most students. Their purpose is to set a target, but to reach that target we will need to change much more than curriculum.
10. The End-of-Course Algebra II exam will have a central core and 7 optional modules. Why were traditional topics such as log functions, matrices, conics and sequences/series pulled out of the core? Also, were the standards influenced by the Algebra II topics currently included on the SATs?
The traditional Algebra II course was developed as a stepping stone to calculus for the minority of students who felt they might want to study further mathematics. Two decades ago fewer than half of the age cohort took Algebra II. Today's course is intended for all students; it is a requirement for high school graduation in more than half the states. So it is natural that the "core" of Algebra II be rethought, with more specialized topics set aside into optional units. The new Algebra II may well be the last mathematics course ever taken by many of today's high school students, so I hope that the topics included in the new syllabus and test are well suited to the needs of all students.
I say "hope" because I actually know very little about the details of the test development process. In particular, I do not know if anyone has made any effort to coordinate topics with the revised SAT. 11. I’m assuming that school districts are already or soon will be receiving more detailed information concerning the new End-of-Course Algebra II exam. Will there be a full sample practice test made available? The Achieve web site will be helpful to Algebra II teachers, but could you suggest some additional resources they could use? I am even more ignorant of these implementation issues than I am about the course goals. While it is helpful to see sample tests, the best way to prepare for an Algebra II test is to study a wide variety interesting and challenging problems. The internet is full of sites that offer enrichment and challenge problems for different high school courses. I'd suggest exploring the Math Forum in the United States and the Millennium Mathematics Project in the United Kingdom. 12. In your opinion, how will the End-of-Course Algebra II exam impact on those districts that use a 3-year integrated math sequence? This is a very important question, and relates directly to the issue you raised earlier about what constitutes the core of the course. In my view, since passing the new end-of-course Algebra II exam will be a requirement for high school graduation for many students, it should be thought of more as an exam covering the third year of high school mathematics than as an exam covering algebra topics that are needed for calculus. Clearly there is much overlap in these two perspectives, but there are also some differences. I understand that the strategy of a core test with optional modules is intended precisely to reflect these two options. I remain concerned that the older calculus-focused view remains too dominant, at the expense of many newly-important topics that serve to introduce combinatorics, finance, probability, statistics, computer science, etc. 13. I still have a hard time when a student reaches for the graphing calculator to analyze the signs of the quadratic function f(x) = x^2-2x-8. Most textbook publishers have deemphasized factoring, relegating it to the back of the book. Educators have generally followed suit, although not all. How do you view the role of factoring in Algebra II and the secondary curriculum in Factoring is one of the topics on the borderline of the two perspectives on Algebra II—preparation for life vs. preparation for higher mathematics. For life (e.g., citizenship and personal living) factoring is a relatively useless skill. For higher mathematics, the conceptual role of factors is crucial, but all real problems that may require factors are solved using computer tools (e.g., Mathematica). The only place where actual factoring of factorable polynomials is required on a regular basis is in mathematics courses. My advice is to be honest with students about this skill (and others like it). It is important for certain purposes, but not a life skill. 14. A recent article in Time magazine as well as a recently published book by Alec Klein make a strong case for gifted education and developing the talents of our brightest math and science students. Do you believe that our most talented math students are being adequately served? In particular, do you believe they can they flourish and develop equally well in heterogeneous classes as in fast-track accelerated classes? This too is a very important and difficult question. 
14. A recent article in Time magazine as well as a recently published book by Alec Klein make a strong case for gifted education and developing the talents of our brightest math and science students. Do you believe that our most talented math students are being adequately served? In particular, do you believe they can flourish and develop equally well in heterogeneous classes as in fast-track accelerated classes?
This too is a very important and difficult question. Research and experience confirm that the presence of bright and intellectually aggressive students in a class helps propel all students to higher levels of achievement, so pulling these students out will in most cases make it less likely that the average students will reach their full potential. On the other hand, bright students whose mind has moved beyond the class syllabus—which is very common in mathematics—will be bored, resentful, and rebellious. Neither option is good; each short-changes far too many students. Taking a clue from game theory, it seems to me that a mixed strategy is the best compromise: some work together, some work separately. In addition to raising the bar for average students, mixed groups help accelerated students learn to communicate mathematics—a skill that every client of secondary education—employers and professors alike—report is in very short supply. Separate groups help teachers and students focus on problems that are calibrated to match students' current skills.
However, even when students are separated by skill level, acceleration is not the only option. Mathematically able students should be challenged as much as possible by opportunities for horizontal exploration of optional topics that are not part of the mainstream curriculum. For many students, excessive acceleration is a great disservice. Except for the tiny minority (beyond three sigma) who need to take college mathematics while still in high school, most students who finish the school mathematics curriculum early wind up with a gap between high school and college mathematics, with rushed rather than deep mastery of high school topics, and with little or no opportunity to employ the mathematics they learned in parallel natural or social science courses. It is appalling how often students who receive a passing grade on AP calculus discover upon entering college that they need to take remedial algebra since they have forgotten whatever little they learned in their pre-calculus rush. Far better to slow down, spread horizontally, and dig deeper into the hidden corners of the regular curriculum.
15. Many mathematics educators I've spoken to and worked with believe that the learning of mathematics is essentially linear, i.e., one cannot be successful at level D unless one can demonstrate proficiency with levels A, B and C. What is your view on this model of learning mathematics? In particular, do you believe that students need to demonstrate proficiency in arithmetic skills and numeration before moving on to algebra?
The linear model of mathematics learning is wrong in almost every respect. Cognitive scientists remind us that the human brain learns by association, not logic. The history of science is full of examples of researchers who came to parts of advanced mathematics via some phenomenon or theory, not by a logical ladder of mathematical steps. Science students frequently encounter and use parts of mathematics in a physics or biology course well before they encounter it systematically in a mathematics course. Fields medalist mathematician William Thurston once described mathematics as like a banyan tree with branches that take root in different places, providing nourishment and growth along multiple pathways (Notices of the AMS 37 (1990), 844–850).
It is also extraordinarily counterproductive to our national goals. Dozens of reports have raised alarms about shortages of mathematically trained graduates from schools and colleges.
Curricula and requirements based on the assumption that there is just one proper path to mathematics artificially and unnecessarily restrict potential mathematics graduates to those who find an intellectual kinship with that preferred approach. It cuts out those who might approach mathematics from other directions, be it from biology, or statistics, or computers, or finance, or construction, or energy, or environment, or any of a dozen other things that may interest students more than mathematics but which share a side door to mathematics.
16. Many states 'talk the talk' about higher standards and expectations, but translating these goals into reality in the classroom has proved difficult. Could you rank order the most important factors that are needed to accomplish these goals? For example, would you place teacher preparation above textbook quality?
Enthusiastic and imaginative teachers who are both mathematically and pedagogically competent are more important by far than anything else in the educational system. In particular, competent teachers need to be free to teach in whatever way is effective for them—which implies minimum constraints from state- or district-imposed curricula and tests. Imaginative teachers with minimum constraints would produce a lot of innovation; required standards and high-stakes tests tend to stifle innovation. Clearly, some common expectations and assessments are important, but they should focus on the broad goals of education, not on narrow particulars.
Why do we get narrow particulars (that is, "standards") instead of imaginative teachers? The answer is obvious: money and political commitment. It is cheaper by several orders of magnitude to convene a consensus process to write standards than to attract, educate, and retain people with the interests and skills needed to teach mathematics well to all our nation's students. When you don't have enough teachers with the required competence, then the way politicians "make do" is to lay out specific standards and assessments for everyone to follow. I don't think we have much evidence that this strategy will work.
17. Hindsight is always 20-20, but if you could go back in time to the development of the original NCTM standards, what are some changes you would make, in light of what has transpired over the past two decades?
It is important to remember that at the time NCTM published its 1989 Standards, the very concept of standards was a subversive idea. Even the definition was in dispute: some viewed a standard as a banner to march behind, others as a hurdle that must be cleared. In this context, it was proper for NCTM to be somewhat cautious. Certainly there were places in the Standards where intentions were not adequately communicated, but nothing can ever prevent critics from selective reading. It is only human to read into a text what you want to find. Consequently, different readers read the Standards differently.
I read them as a clarion call for eliminating the tradition, most evident in mathematics, to select and educate only the most able students and to provide others, disproportionately poor and minority, with only the illusion of education. For the first time a powerful national voice said that all students deserve a mathematics education. How this can be done, and how long it should take, are details that are still being worked out (as your earlier questions about MAP, ADP, and Algebra II attest).
This commitment, that every student deserves an equally good education, is the one unequivocally positive aspect of the No Child Left Behind (NCLB) law. If I were able to go back and make any change, I would highlight that central message more, and make clear that the suggested particulars were to be worked out through traditional American strategies of local innovation. The mistake NCTM made, if it can be called a mistake, was to let its critics define its message as the particulars rather than to keep the nation's attention on the central goal of providing all students with a meaningful mathematics education.
18. Here's an innocent little question, Prof. Steen! The current conflicts in mathematics education are usually referred to as the Math Wars. In your opinion, what were the major contributing factors in spawning this conflict and how would you resolve it?
There are many factors involved. I think I can identify a few, but I have no confidence that I could resolve any of them.
One is the natural tendency of parents to want their children to go through the same education that they received—even when, as often is the case with mathematics, they admit that it was a painful and unsuccessful ordeal. This makes many parents critical of any change, most especially if it introduces approaches that they do not understand and which therefore leave them unable to help their children with homework.
Another source was scientists and mathematicians who pretty much breezed through school mathematics and who were increasingly frustrated with graduates (often their own children) who did not seem to know what these scientists knew (or thought they knew) when they had graduated from high school. Our weak performance on international tests appeared to provide objective confirmation of these concerns, and they came to public notice just as the NCTM standards became widely known in the early to mid-1990s. Even though very few students had gone through an education influenced by these standards, the confluence of events led many to believe that the standards contributed to the decline.
A third source can be traced to the way in which the NCTM Standards upset the caste system in mathematics education. Mathematicians are accustomed to a hierarchy of status and influence with internationally recognized researchers at the top, ordinary college teachers in the middle, below them high school teachers, and at the very bottom teachers in elementary grades. The gradient is determined by level of mathematics education and research. So it came as somewhat of a shock to research mathematicians when the organization representing elementary and secondary school teachers, seemingly without notice or permission, deigned to issue "standards" for mathematics. Mathematicians would say, and did say, "we define mathematics, not you."
I could go on, but won't. But I do want to add that, as with any contentious issue, face-to-face dialog helps bridge differences. With some exceptions, I believe that has happened with protagonists of the math wars. Achieve was one of the first organizations to bring to one table people from all these different perspectives. Subsequently, other groups have made similar efforts, generally with good results. As mathematicians and educators roll up their sleeves to work together on common projects, each learns from the other and the frictions that led to the math wars begin to reduce.
19. Finally, I've observed considerable frustration among K-12 mathematics educators for the past 20 years.
Each wants to do what she/he perceives is the best for her/his students, but they are often mandated to follow new curricula and programs that come and go every few years and for which they often receive inadequate training. What message would you like to convey to these dedicated teachers?
I said above that teachers are the key to success in mathematics education, but that outsiders impose standards and assessments as a means of protecting students against soft spots in the system. This is not unreasonable, since in the K-12 sector the state is responsible for guaranteeing that children receive a proper education. It seems to me that the only way that teachers can regain control over their own affairs is for them to convincingly take on the role of ensuring quality education for all children. That will require much higher standards for initial licensure, for tenure, for professional development, and a commitment to post-tenure reviews. This is the regimen followed by most good colleges and, with suitable modification, by hospitals. Self-imposed quality control is the sign of a true profession.
The problem teachers face is a severe mismatch between the needs of K-12 education, especially in mathematics and science, and available resources. But here teachers have an asset that they need to make better use of, namely, regular access to parents and school boards. What they need to do with that access is help the public understand the changing nature of mathematics and science, the unique value it offers their children, the challenges involved in keeping up with a rapidly changing discipline while at the same time teaching students of quite varied skills and preparation, and the concrete steps that teachers have taken to ensure that all students receive a sound education. Focusing on quality for all—the core message of the NCTM Standards—should gradually elevate the respect in which teachers are held and, with it, the support they receive from the public.
42 comments:
Mr. Person said... Excellent interview, Dave! I'll post some reactions next week. (I think.)
mr. person-- Thanks! It has been tiring to say the least - I need a break for a couple of days to get ready for next week's carnival! Actually, I hope Prof. Steen's astute comments have shed some light on the current state of math education. They have for me. I've always believed that some dialog is better than none! We need to keep the lines of communication flowing freely in both directions to move forward.
Thank you for doing this interview. I think there is a real audience for this sort of stuff, and I appreciate that Steen took time out to answer questions from a retired math teacher/supervisor. Also, he knows that the blogs will have people who disagree with him, so it took some courage to participate. I take issue with some of the points raised. In particular, I am concerned about the role that teachers have played and will play in moving math forward. I'll blog some of my ideas in the next few days. I think you are going to find that you have started some genuinely constructive conversations.
I'd like to address Prof. Steen's comments on ability grouping of gifted kids, where he said: "Research and experience confirm that the presence of bright and intellectually aggressive students in a class helps propel all students to higher levels of achievement, so pulling these students out will in most cases make it less likely that the average students will reach their full potential.
On the other hand, bright students whose mind has moved beyond the class syllabus—which is very common in mathematics—will be bored, resentful, and rebellious. Neither option is good; each short-changes far too many students."
Substantial research shows that ability grouping, or at least grouping the highest achievers separately, is beneficial to high-achieving children without harming the rest of the students. I'd direct readers to the Hoagies Gifted page on Grouping for pointers to research and articles on this topic.
Steen's comment about how to use calculators: "By powerfully, I mean that they make full use of the most powerful tools available in order to prepare rich and accurate analyses. In this age, mathematical competence requires competence to use computer tools, so the use of technology must be an explicit goal of mathematics education."
This is just the usual generic blather. I see calculators and spreadsheets used as "avoiders", not "enhancers", in grades K-8. I was in college when the transition was made from slide rules to calculators. Calculators allowed courses to tackle more difficult theories and applications. Shorter and simpler hand-calculation assignments gave way to 20-30 page analyses that were made possible with the calculator. This doesn't happen in grade school. They use calculators to avoid mastery of the basics. This would be understandable if they replaced that time with more complex analyses using calculators. It doesn't happen. They could have students see how the average of a set of 10 numbers is greatly affected by the change in one number as compared to a set of 100 numbers (see the sketch just below). The underlying issue is hard work, not understanding.
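[Editor's aside: to make the averaging demo just described concrete, here is a minimal Python sketch; the particular numbers (lists of 50s with one value bumped to 500) are an invented example, not Steve's.]

# Compare how one outlier moves the mean of a small list vs. a large list.
def mean(xs):
    return sum(xs) / len(xs)

small = [50] * 10     # ten numbers, all equal to 50
large = [50] * 100    # one hundred numbers, all equal to 50

small[0] = 500        # change a single value in each list
large[0] = 500

print(mean(small))    # 95.0  -> one outlier shifts the small mean by 45
print(mean(large))    # 54.5  -> the same outlier barely moves the large mean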
"In this age, mathematical competence requires competence to use computer tools, so the use of technology must be an explicit goal of mathematics education."
Mathematical competence in ANY age requires the skills to CREATE the tools rather than just use the tools. In grades K-8, no assumptions should be made about whether any one child will or will not be a creator of tools, rather than just a user. I find it astonishing that many educators base their teaching approach on what is commonly needed by the average adult. "Math brains" might survive this approach, but for many others, I can hear the doors slamming shut by 8th grade.
"...students' widespread misuse of calculators ..."
Students don't misuse calculators. Schools do.
"Use of technology is as important as use of fractions, and both need to be taught and tested."
They aren't equal. Fractions are much more important. They are a foundation. "Technology" is not.
"The new Algebra II may well be the last mathematics course ever taken by many of today's high school students, so I hope that the topics included in the new syllabus and test are well suited to the needs of all students."
"..needs of all students"? All? You mean the needs of those students who will never take a math class again. What about all of the students who may not need calculus in high school, but need a math background that prepares them for almost anything they want to do in college? Students don't fall into just the AP calc group and the no-more-math group. The common approach to math in K-12 starts separating kids by ability in 7th grade. Many of these kids, who might do well with almost any crappy math curriculum because of natural ability, parental support or tutoring, will get on the AP calculus track. You cannot assume that many others couldn't do it, or that they need a terminal course in math in high school. High schools shouldn't be closing doors on students. Many more kids want to go to college and most college degrees require some level of math courses. High schools need to categorize these different college levels and provide clear paths to those requirements. Many students avoid careers they might be very good at because they can't deal with the math requirements. The wrong assumption here is that K-12 schools do a good job teaching math; all that is needed is different content. That isn't true. Most kids need rigor and high expectations, but at a slower and more deliberate pace. Why not define a curriculum path that leads to a rigorous, non-terminal Algebra II course in high school, one that prepares kids for almost anything they want to do in college?
"The only place where actual factoring of factorable polynomials is required on a regular basis is in mathematics courses. My advice is to be honest with students about this skill (and others like it). It is important for certain purposes, but not a life skill."
Aaaaaaarrrrrggggghhhhh! Why not be really honest and show kids exactly what kind of math is required for each degree path they might want to take in college? Do a survey of colleges and list the terminal math course required for each degree. Show it to freshmen in high school. There is "life" after high school and for many, it includes college, and colleges require lots of math "skills". The world isn't broken only into math majors and those who need "life skills".
"Research and experience confirm that the presence of bright and intellectually aggressive students in a class helps propel all students to higher levels of achievement, .."
Only when the ability levels are fairly close. This is not happening in schools. Full-inclusion means that kids are tracked by age and that borderline autistic kids are mixed in with the brightest kids. This is the primary requirement or assumption upon which all else is built. Everything else they do is an attempt to mitigate the associated problems. One method is dual teaching, where one teacher is geared towards special-needs students. Another method they use to mitigate problems is "differentiated instruction". This is only useful in a limited sense because many students are ready for new material. Many schools will not do separate acceleration of content, so the differentiation becomes horizontal.
"In addition to raising the bar for average students, mixed groups help accelerated students learn to communicate mathematics—a skill that every client of secondary education—employers and professors alike—report is in very short supply. Separate groups help teachers and students focus on problems that are calibrated to match students' current skills."
These are just rationalizations to justify their governing desire for age-tracking.
"However, even when students are separated by skill level, acceleration is not the only option."
But acceleration can't be done. You can't have it both ways. You can't have very mixed-ability, child-centered group learning and acceleration of material for the more able kids. Something has to give. What gives is acceleration of material. It is replaced with enrichment. They try to make it sound really good, but it's only the best they can do given the primary assumption of age-tracking. For more able kids, schools try to make enrichment sound good with talk of "depth" and "understanding".
When the school sent home a form this year (with the usual Multiple Intelligences junk) asking how my son learns, I wrote back and said "fast".
"It is appalling how often students who receive a passing grade on AP calculus discover upon entering college that they need to take remedial algebra since they have forgotten whatever little they learned in their pre-calculus rush. Far better to slow down, spread horizontally, and dig deeper into the hidden corners of the regular curriculum."
OK, I'll bite. What percentage of passing AP calculus students (What did they get on the AP test?) have to take REMEDIAL algebra in college? Is this even possible? Normally these kids would start college by retaking calculus because many colleges don't accept advanced placement. Second, how bad would high school math teaching have to be to produce calculus students who couldn't do algebra? If this happens, which I seriously doubt, it's NOT a problem with acceleration. This comment is just a way to bad-mouth acceleration and justify the lack of it in the early grades. They can't have full-inclusion and acceleration, so acceleration, ability grouping, and talented students take the rap. When my son was in Kindergarten and first grade I got the sense that the teachers didn't like smart kids. At the very least, they wanted to maintain the idea that all kids are equal; it's just that they learn differently. This is done by reducing the emphasis on content and mastery of skills. By sixth or seventh grade they can't play this game anymore and are forced to start splitting kids by ability. Unfortunately for many kids, the damage has been done.
"The linear model of mathematics learning is wrong in almost every respect."
It's also a strawman. Almost all learning is a spiral. What matters is exactly how the spiral is done, exactly what level of mastery is required before students are allowed to move on to new material, and where the finish line is. This thinking is used to justify all sorts of low expectations. There may be lots of paths to go from point A to point B, but you darn well better get to point B. This can't be used as justification for changing the location of point B.
"The history of science is full of examples of researchers who came to parts of advanced mathematics via some phenomenon or theory, not by a logical ladder of mathematical steps."
But I bet they knew how to divide fractions. Besides, you don't want to build a curriculum around this concept.
"It is also extraordinarily counterproductive to our national goals. Dozens of reports have raised alarms about shortages of mathematically trained graduates from schools and colleges."
This isn't going to be solved with a terminal course in Algebra II in high school.
"Enthusiastic and imaginative teachers who are both mathematically and pedagogically competent are more important by far than anything else in the educational system."
Baloney. Curricula matter. Expectations matter. Mastery matters. This is the old "All we need are good teachers" approach. It isn't true and it won't happen, so deal with reality. Besides, your idea of mathematical and pedagogical competency is probably not the same as mine, especially if you don't agree with my point B.
"Certainly there were places in the Standards where intentions were not adequately communicated, but nothing can ever prevent critics from selective reading."
People weren't looking at the standards. They were looking at what was happening in the classroom. They were looking at what was coming home in backpacks.
My son had MathLand in first grade. I wasn't looking at the standards, I was looking at the implementation. MathLand was the sorriest excuse for a math curriculum. Maybe the standards really meant something else, but it sure took them long enough to make their case more clear (backtrack).
"I read them as a clarion call for eliminating the tradition, most evident in mathematics, to select and educate only the most able students and to provide others, disproportionately poor and minority, with only the illusion of education."
Don't pull out the poor and minority cards. So, traditional math was bad because of the curriculum, but the problem with reform math is the implementation. Do we now have a lot more poor and minority kids doing well in math? Math requires effort. It requires a clearly-defined path of content and mastery. There were problems before and there are problems now. You can't have it both ways. You can't demonize "traditional" math on one hand and then say that all we need are properly-educated teachers on the other. The illusion is that you are solving the problem. You are really just implementing your own opinions about mathematics and pedagogy.
"This commitment, that every student deserves an equally good education, is the one unequivocally positive aspect of the No Child Left Behind (NCLB) law."
The illusion is that the low cutoff standards of NCLB define an equally good education. Our public schools are all "High Achieving" schools. Almost all kids get above the low NCLB cutoff. Everyone is pleased as punch. In the meantime, affluent and educated parents know better. They set higher expectations for their kids. They work with them at home and provide tutors, if necessary. This is not to make them into super students. This is to provide the content, skills, and acceleration they will not get at public schools - by definition.
"One is the natural tendency of parents to want their children to go through the same education that they received—even when, as often is the case with mathematics, they admit that it was a painful and unsuccessful ordeal."
Baloney! We're not stupid. When my son was in pre-school I thought of all of the things I didn't like about my traditional math education. There were many. Then, the teacher told me that they used MathLand at our school! They're going in the wrong direction! MathLand has now disappeared from even the web, it was so bad. What parents want are high expectations for their kids. They want their kids to get to point B. Parents are smart enough to know that little Suzie won't get there by writing a report about her favorite number. The problem is not just how the school gets to the finish line, but where the finish line is.
"Even though very few students had gone through an education influenced by these standards, the confluence of events led many to believe that the standards contributed to the decline."
Lower expectations started long ago. The standards just codified them and gave them a pedagogical foundation.
"Mathematicians would say, and did say, 'we define mathematics, not you.'"
Here it is. The justification that K-12 educators can redefine mathematics. Because they want to. Actually, it's no justification at all. It's a philosophical and pedagogical power grab based only on opinion and poor research. Unfortunately, students who want to go to college have to deal with this blatant power grab. K-12 educators do what they want and students have to pick up the pieces when they get to college. Is this better for the poor and minorities?
They don't have parents who can see through the conceit and academic turf battles to get their kids prepared for college. The arrogance of K-12 educators is astonishing.
"As mathematicians and educators roll up their sleeves to work together on common projects, each learns from the other and the frictions that led to the math wars begin to reduce."
This is astonishing. You justify a unilateral decision to redefine K-12 mathematics, you laugh at college professors, and then say that the solution is to work together. Utterly astonishing. And parents see homework assignments asking their kids to write about their favorite number.
"What they [teachers] need to do with that access is help the public understand the changing nature of mathematics and science, the unique value it offers their children, the challenges involved in keeping up with a rapidly changing discipline..."
K-12 educators "understand the changing nature of mathematics and science" better than college professors and trained parents who have been working in those fields for decades? Go ahead and redefine math and science all you want. Just give parents their money to go someplace else. There are big reasons why public schools are so afraid of choice. They think that parents and kids will leave en masse; especially the poor and minorities.
"One is the natural tendency of parents to want their children to go through the same education that they received—even when, as often is the case with mathematics, they admit that it was a painful and unsuccessful ordeal. This makes many parents critical of any change, most especially if it introduces approaches that they do not understand and which therefore leave them unable to help their children with homework."
Steve H addressed this nicely, but let me add to it. I did not find my mathematical experience (in the 50's and 60's) to be painful nor unsuccessful. I constantly hear about how the "traditional approach doesn't work". It depends on your definition of "traditional". The traditional texts in the 1800's were not all that good, and the traditional texts in the late 1970's through the 80's that came about because of the "back to basics" movement were also not very good. But some of the texts that came about in the 30's through the 60's were designed to address the age-old complaints that there was too much memorization and not enough understanding. William A. Brownell and others developed arithmetic textbooks which provided good contextual explanations for the concepts, as well as providing the drills for practice. The sequencing was logical, and was cumulative; i.e., what was learned in previous lessons was integrated in later lessons. Brownell has been recognized by NCTM as well as math reformers as a pioneer in education reform. The 60's new math led us away from these texts to a too-formal approach for grade schoolers. The high school math books that came out of the 60's new math are a mixed bag, but a bag worth looking at. Some of the texts that emerged as a result of that era were and are excellent: Dolciani's algebra textbooks, Moise and Downs' geometry text, Drayton and Wooten's algebra, and others. There is the charge that these texts (the high school ones) were designed for an "elite class" who were destined for college. That is a matter of opinion. They were designed for anyone who wanted to learn math, and they enabled one to go to college.
The algebra book I used in high school was developed in the 50's, and though perhaps not a direct product of new math, was certainly influenced by Max Beberman, who helped revolutionize how algebra textbooks were written. I went to a high school that was 50% African American. I recall an African American in my Algebra 2 class who, though he had an attitude and talked back to the teacher at times, was able to do any problem at the board he was asked to do. Whether he chose to go on to a productive path was not a function of a textbook being designed for an elite class. He was capable of mastering the material, and from what I could see, had done so. While there were improvements that could have been made in the texts that I had, the math program served me well, and I ended up majoring in math. The texts were designed with high expectations in mind, to allude to Steve H's post. Compared to the low expectations inherent in many of today's math curricula and texts, I would venture to say that even a book written in 1870 would do better than what we have today. At least students would know the procedures and do computations that for many of today's students are difficult to impossible (including taking 10% of a number).
Dave Marain wrote: "We need to keep the lines of communication flowing freely in both directions to move forward."
It doesn't work this way. Educators do what they want and then try to "educate" parents. Prof. Steen made this quite clear. Parents have zero power or influence. There is no middle ground when it comes to assumptions, and it's clear that many educators have mixed-ability groupings at the top of their assumption list. Second is lower expectations. There are fundamental differences of opinion going on here that have nothing to do with research. The only option is choice, not middle ground.
About calculators, I've just written my own rant about them. About "enrichment" for gifted kids, Steve is right that it's usually a joke; however, good enrichment may be better in some cases than additional acceleration for some gifted students, some of the time. (How's that for wishy-washy.) My youngest is substantially accelerated right now; but by middle school level, I think there is a place for branching out into enrichment as well, and that is our plan for my middle son this year. I'm hoping to blog about that later this week.
Steve, if you want choice, work to make it happen. In the meantime, work to make the schools your kids attend closer to what you would have chosen if you'd had the choice, by volunteering to help them out. I suspect that if you volunteered to take rotating groups of kids out of math class to work with them on mastery of basic skills, you'd be unlikely to be turned down. (Some schools wouldn't let you do it for your own kid's classroom -- in that case find a like-minded parent from a different class, and team up with them, each working in the other's kid's class.) If you can show the school how your approach will improve their results, they will be a whole lot more likely to change their minds than if you just rant about their method sucking. If you want to tell me that people shouldn't have to have/find time to volunteer in their kids' school for them to get a decent education, I will tell you that, ideally, you're right. But in the real world, you do what you have to do.
"...however, good enrichment may be better in some cases than additional acceleration for some gifted students, some of the time."
What happens is that K-12 educators hijack words (like spiral, understanding, and enrichment) and turn them into their own. You can't talk about them in a general sense. This is a favorite ploy by educators. Talk generalities to make parents go away and then decide on all of the details. A classic technique is to talk about balance. I was at a meeting of teachers and parents (who were complaining about Everyday Math), when the teachers started talking about the need for balance. Who could be against balance? Teachers were going to provide more practice sheets. The discussion changed, parents were placated, but no details were discussed. The teachers continued to do what they wanted.
"Steve, if you want choice, work to make it happen."
I'm sorry, but you don't know what I have or have not been doing. You make it seem that the onus is on me or that I haven't been doing enough.
"If you can show the school how your approach will improve their results, they will be a whole lot more likely to change their minds than if you just rant about their method sucking."
Excuse me, but you're being extremely presumptuous here. If you want, I could go on and on about all of the committees I've been on, all of the teacher-parent meetings I've been to, and all of the after-school programs I've worked on. Perhaps I can tell you about all of the conversations I've had with our school's head of curriculum. Many of the issues are based on philosophy and assumptions and educators will do what they want. Ranting is a pejorative term used to make people go away. Prof. Steen made some comments and I provided rebutting arguments. The tone might be harsh, but the arguments are specific and you need to address them directly if you have any comments.
"But in the real world, you do what you have to do."
Why do you presume that this is not what I've been doing ... because you don't like my comments? You don't like my comments. You don't (or can't) rebut them, so you call it ranting and assume that I am part of the problem and not part of the solution. This is a classic educator response.
Educators will do what they believe is best for their school, and the children in it, while at the same time not over-working the teachers. You can argue about assumptions and philosophy until you're all blue in the face, but if you can show them evidence that what you are saying will work for them, will give them what they want, they are much more likely to take it seriously. If you can make it easy for them, by offering to do some of the extra work for them, they are much more likely to try it. Maybe you have already done all this, and if so, that's great. But if not, consider it some constructive advice toward getting what you want/need for your children and the others in your community. Your statement that "educators will do what they want" belies, IMO, a loss of hope, or giving up. Good luck! "... but I could be wrong." You are. I already told you that. "'Aaaaaaarrrrrggggghhhhh!' is generally a decent indicator of a rant." No it isn't. Look at everything else I wrote, none of which you address. "Public schools do not all do the same things." Just because you don't see it doesn't mean that it isn't going on. Nobody is talking about 100%. "... but if you can show them evidence that what you are saying will work for them, will give them what they want, they are much more likely to take it seriously." "Evidence"??? No they won't. Assumptions are assumptions. I've seen this over and over. Can I prove it happens in all schools? Of course not. Is this a minor issue? Of course not. You advocate separating kids by ability and trying to meet their needs. I mentioned before that this is happening at my niece's public school in Michigan (notice that I'm not saying that this happens everywhere), but this is an impossibility anywhere in our region. It can't happen. Grouping by ability happens only when you get to 7th grade. Twenty-five percent of the kids in our town get sent to private schools. The private schools have their own issues, but at least the kids get higher expectations in the lower grades. Parents have tried and tried and tried to get change in the public schools. It hasn't happened. All they get is enrichment. "If you can make it easy for them, by offering to do some of the extra work for them, they are much more likely to try it." No they won't. From your comments in the other thread, it appeared to me that you don't know what's going on in many other places around the country. You don't know what parents have tried to do. What I am saying is nothing new. The so-called "math-wars" aren't a matter of fussy college professors or nostalgic parents. The solution is not a matter of us showing the schools (the onus is on us?) that there is a better way when proof is almost impossible. This is another classic argument. Schools get to pick a math curriculum based on whatever ideas pop into their heads, but others have to prove that they have something better. It won't happen. Even in large districts where resources are plentiful, schools will not provide an alternate math curriculum in spite of demand. Things are happening out in the world and you seem to have no clue. "But if not, consider it some constructive advice toward getting what you want/need for your children and the others in your community." "constructive advice"? This indicates that you really don't know what's going on around the country. 
After all of my comments in these threads (which you never address directly), you presume to give me "constructive advice", just like my son's first grade teacher (in her first grade teacher voice) telling us parents (sitting in little kids chairs) about the wonders of MathLand. "Your statement that "educators will do what they want" belies, IMO, a loss of hope, or giving up." That is why many (myself included) advocate choice. It's the only leverage many parents can hope for. That is the reason my niece was accelerated in certain subjects in 4th grade at her public school. Choice. In our region, however, this change is very unlikely. The public education system fights all charters (they got the state to impose a moratorium), there is zero money for gifted and talented education (I mean zero.), and they are very happy just meeting the low cutoff of trivial standardized tests. This attitude starts to change in 6th or 7th grades, but for many, the damage has been done. At that age it's easy for them to blame it on the child, the parents, or society. Steve, assumptions change when evidence is provided to contradict them. I have a pretty good idea what is going on in dozens of districts all across the country from various mailing lists I am on. Refusal to "ability group" at all does happen, but based on about 100 parents I correspond with on 3 separate mailing lists, parents who represent all different locations, it is the minority, not the majority, of public schools that work that way. Also based on one of those mailing lists, parents who go into school and volunteer to do for the kids whatever it is they wish the school would do for them but they are not, are rarely turned away. I know many parents who go into public schools and offer enrichment for gifted students. (I have done this myself in the past.) I know parents who help drill kids on their times tables. I know parents who go into schools and tutor struggling students. I know parents who help supervise the "rest of the class" while the teachers do enrichment activities with the gifted learners. Once these things are facilitated and happening in the school, it is easy for teachers and administrators to see their benefits. Once educators can see the benefits of something, their assumptions change. That is what I mean by evidence, and it has worked in many districts. Now, many people I know are "stuck" in Everyday Math districts, or other curricula that they don't like. In many cases they can't change the curriculum, which may be mandated from "on high", but they can facilitate differentiation within the curriculum (which Everyday Math is meant to support), and they can facilitate a supplementary review of the basics for those who need it, etc. Parents can help raise the standards and the performance of kids in their districts by giving of their time. It's true that I don't know a thing about your district, and maybe it is so rigid that they wouldn't let you do anything, or get near any classroom, or anything else. But you cannot tell me that this is what "public schools do" as if that were the rule and not the exception. The public education system fights all charters (they got the state to impose a moratorium), there is zero money for gifted and talented education (I mean zero.) Steve, there is no charter law at all in my state, and no funding for gifted and talented education either. Nor is there any mandate to provide it. 
However, this lack of funding is precisely why schools I've approached have been very grateful to have a volunteer come in and help them provide programming they do feel badly about not being able to provide themselves. Just because there is no funding for gifted education does not mean schools are obligated to turn down offers of free help.

Frustrated, concerned, passionate, educated, intelligent, and "I'm mad as hell and I'm not going to take it anymore", 'trying to make a difference' parents... Passionate, talented, dedicated, and caring educators who are trying to make a difference voluntarily... Without these dialogs, there will only be more of the same rhetoric I've been hearing for the past 20 years. Neither mathmom nor I nor hundreds of educators I've worked with or met convey a negative message of indifference to the needs of children. We are ALL frustrated by the current lack of direction in education. This is why I challenged the National Mathematics Panel. This is why I've challenged my state mathematics committee. This is why I started this blog.

No one is really going to change anyone's views in any significant way because of this or any other blog. Mathematics education is evolving, and when one is part of the process, it's hard to imagine what the outcome will be. Read the Achieve benchmarks! A significant increase in expectations for K-8, stressing skill mastery, conceptual understanding, and a higher level of problem solving.

Steve, get past your disgust over the history of reform and what you currently are enduring and focus on where we're heading. Choice is one way of thinking; however, I am not yet resigned to accepting that as the best solution. There are exciting changes taking place all over the country, changes that, I believe, are moving us in the right direction. Steve, stop dissecting what mathmom and I are saying. Tell me exactly the nature of the curriculum and pedagogy you want, and perhaps we can move forward. Right now this dialog is stuck in limbo, exactly where the Math Wars are for most people. But I am not stuck there. I see the examples enumerated in the K-8 benchmarks by Achieve and I see hope. I read mathmom's, denise's, jackie's and others' comments and I see more than hope. You can laugh at my optimism, but right now, I am in a position to talk to both parents and boards of education and tell them in what direction we should be moving. My opinions, of course, but some seem to respect those...

".. assumptions change when evidence is provided to contradict them."

Not in the case of acceleration of material or ability grouping.

"...it is the minority, not the majority, of public schools that work that way."

First you argue that it's not 100%. Now you argue that it's a minority. I think you have a different idea of ability grouping. I see lots of "ability grouping" in our differentiated instruction classrooms. But once again, details and definitions matter. Much of the ability grouping is on a horizontal or enrichment basis, and not a vertical, or acceleration, basis. Lack of acceleration and low expectations is a major issue. It is not a minor problem unless you think that typical state standardized test levels are fine. You argue against standardized tests, but you never explain why it's so difficult for schools to meet those trivial standards and still have lots of time left over.

"I know many parents who go into public schools and offer enrichment for gifted students."

The issue is NOT enrichment! It's acceleration. It's higher expectations of content and mastery of skills.
These are basic assumptions. They are not changed by providing after-school enrichment. How do you think a school would react to after-school sessions of the Singapore Math curriculum? You're basically telling them that the curriculum they use is wrong.

"I know parents who help drill kids on their times tables."

This is the teacher's job. I had to do it with my son to make sure that it got done, but what about all of those kids who don't have parents who will take up the slack? Teachers don't like drill (and kill), but it's OK if someone else does it?

"Once these things are facilitated and happening in the school, it is easy for teachers and administrators to see their benefits."

You're talking about being helpers for the teachers, not changing fundamental assumptions. You're talking about enrichment (or remedial mastery work), not grade-level expectations of content and mastery, let alone acceleration.

"In many cases they can't change the curriculum, which may be mandated from "on high", but they can facilitate differentiation within the curriculum (which Everyday Math is meant to support), and they can facilitate a supplementary review of the basics for those who need it, etc."

I've spent most of my posts talking about assumptions and structural flaws in K-8 education, not how to deal with day-to-day reality. My son is doing fine, thank you. I could laugh all the way to the SAT bank. I could explain to parents how to deal with the "system" (I've done that.), but you can't mix up the two discussions. They are separate.

"Parents can help raise the standards and the performance of kids in their districts by giving of their time."

I've given my time (including my share of after-school enrichment, like First Lego League robotics) and I have argued for fundamental changes like switching from Everyday Math to Singapore Math. These are two separate approaches. Providing enrichment does not change assumptions.

"But you cannot tell me that this is what "public schools do" as if that were the rule and not the exception."

That's because you've redefined the problem, just like K-12 educators have redefined math. If you don't like something, redefine it, and the problem goes away.

"However, this lack of funding is precisely why schools I've approached have been very grateful to have a volunteer come in and help them provide programming they do feel badly about not being able to provide themselves."

Enrichment, not acceleration. Helping them with their solution, not finding a better solution. The last thing schools want is parents on a curriculum selection committee.

"Without these dialogs, there will only be more of the same rhetoric I've been hearing for the past 20 years."

It takes more than dialog. It takes a process. There is none now. It's not a matter of just listening to what parents have to say. The process has to provide some sort of power or leverage for parents. If large school districts will not provide an alternate math curriculum in spite of demand, then talk isn't enough. Parents need choice of schools. The goal isn't necessarily one solution for all students.

"We are ALL frustrated by the current lack of direction in education."

But who gets to decide on this direction? Why are parents (the largest stakeholders) lowest on the list when it comes to input? Why is the goal of education to provide small statistical increases in low cut-off averages? Why is education about finding some middle (low) ground for everyone?
(except for the affluent)

"There are exciting changes taking place all over the country, changes that, I believe, are moving us in the right direction."

You don't think I keep up with these changes? I'm not excited. Will these changes get rid of curricula like Everyday Math? Will these changes require schools to ensure proper grade-level mastery? Will these changes allow poor and minority kids to excel even if they don't have parents who make up the difference? By excel, I don't mean on standardized tests. I mean algebra in 8th grade. I mean getting on the AP calculus track. If Prof. Steen sees only "math brains" and "life skills" paths, there are still really big problems.

"Tell me exactly the nature of the curriculum and pedagogy you want, and perhaps we can move forward."

I thought that was clear. Singapore Math with enforced grade-level expectations of mastery. This means no social promotion and no hoping that kids will somehow figure it out later.

There should be no math wars. There is a simple solution. Choice. This doesn't necessarily mean choice of schools. It could mean choice of curriculum. You can't make everyone happy, but two approaches would go a long way toward solving the problem. But this goes back to my core issues: acceleration and expectations of mastery. Schools have to drop their full-inclusion and enrichment-only philosophy. They have to drop their conceit that they are the ones who get to determine all of the assumptions. Like Prof. Steen, they can't unilaterally redefine math. I can accept that others have different assumptions from mine. But I'm not trying to shove my assumptions down their throats.

I think that teachers would love choice. I think that teachers would love higher expectations of grade-level mastery. Sixth grade teachers could focus on sixth grade material, not struggle to make sure that kids who don't know their times tables are ready for standardized tests. Schools should laugh at standardized tests. They should take up only a small fraction of their time. Perhaps another track could take a spiral approach to mastery that includes all of the social promotion students. It's impossible to come up with a middle ground when fundamental assumptions are so different. Choice is the only reasonable solution, and it has to be parental choice.

"I think you have a different idea of ability grouping. I see lots of "ability grouping" in our differentiated instruction classrooms. But once again, details and definitions matter. Much of the ability grouping is on a horizontal or enrichment basis, and not a vertical, or acceleration, basis. Lack of acceleration and low expectations is a major issue."

You and I have a substantial difference of opinion if you think acceleration is the only or even best answer for gifted learners. Take a look at my blog for more about my take on acceleration versus enrichment for gifted learners. Lack of acceleration is not the same as low expectations. Since you and I agree that the current standards represent "low expectations", particularly as compared to the abilities of gifted learners, I'm surprised that you wouldn't be happy to have gifted learners be taught more, be taught stuff outside of and in addition to that basic, minimal set of standards.

"They are not changed by providing after-school enrichment."

Well, for starters, I am talking about in-school, rather than after-school, enrichment. I believe that all students have a right to appropriate math education in math class, not only after school.
And, if the enrichment is good, then they will be challenged by it.

"How do you think a school would react to after-school sessions of the Singapore Math curriculum? You're basically telling them that the curriculum they use is wrong."

I think you'd get few students to sign up for after-school Singapore Math, never mind what the school would think. But if, for example, you volunteered to take the top kids out of the class and do some extra problem solving with them while the teachers helped the struggling kids master the basic curriculum, most schools would take you up on that. If you took those "extra" problems out of a resource you "happened to" already have, like, say, Singapore Math, I doubt the school would complain. If you found that the students lacked the mastery of basic skills required to complete the problems, then you would naturally work on those skills with them as well. And mention it to the teacher when you handed the kids back. When I volunteered to not only take the kids but do all the work of preparing the "lessons", I found no difficulty getting takers. When I did this in the public school, I found teachers who were dying to give the top students more challenge, but frequently had trouble making the time to do it. They were thrilled for me to come in and help in that way.

If you really want to offer after-school Singapore Math, and you have the demand for it, use a room at the library, or in your house, or anywhere else. It will be harder to prove to the school that what you are doing matters to these kids, but you'll still be helping the kids, and at some point someone might indeed wonder why certain kids are suddenly doing better in math...

"You're talking about being helpers for the teachers, not changing fundamental assumptions."

I'm talking about taking one step at a time. First, by being a helper for the teacher. And slowly changing fundamental assumptions by the results of the help you provide. Because if what you're doing really makes a difference, it will show. And that kind of "evidence" can change minds. Also, as the old saying goes, "you attract more flies with honey than with vinegar." When you volunteer your time and effort to help teachers and schools, they get to know you better, get to trust you more, and may take your opinions, suggestions, requests and advice just that much more seriously.

I can tell you for certain that I have changed the mind of the principal and middle school math teacher of our current school about things like calculator use, and the value of contest math (for all students, not just the gifted ones). She could see the benefits of the things I was doing with the kids, and began to trust my opinion, and trust the evidence before her eyes.

"You and I have a substantial difference of opinion if you think acceleration is the only or even best answer for gifted learners."

I said neither, but many schools don't offer acceleration at all, by definition. It's not an option. Are you saying that no acceleration is fine? Enrichment can be good or it can be bad, but it can't be the only way to differentiate.

"Lack of acceleration is not the same as low expectations."

For most kids, it is. Enrichment cannot make up for a lack of acceleration.

"I'm surprised that you wouldn't be happy to have gifted learners be taught more, be taught stuff outside of and in addition to that basic, minimal set of standards."

Once again, I didn't say that. The "more" I expect is based on acceleration of material, NOT enrichment.
Enrichment is no substitute for acceleration, ESPECIALLY with minimal state standards. Our school is still trying to get kids to master their adds and subtracts to 20 at the beginning of third grade. What possible enrichment could you give the more able kids to make it OK not to move on to new material?

"When I did this in the public school, I found teachers who were dying to give the top students more challenge, but frequently had trouble making the time to do it."

You're still talking about something else. You're talking about dealing with a situation, rather than trying to fix the underlying problem. These are two separate issues.

"... and at some point someone might indeed wonder why certain kids are suddenly doing better in math..."

This is not a proper process for change. Besides, schools get plenty of kids who pass through Kumon and the schools don't know or care. Many parents help their kids at home and the school doesn't know or care. They're just happy that these kids make their scores look good. All of my posts have nothing to do with figuring out how to play the system. They are about fixing the underlying problems.

"Because if what you're doing really makes a difference, it will show. And that kind of "evidence" can change minds. Also, as the old saying goes, 'you attract more flies with honey than with vinegar.'"

So, I'm not allowed to challenge the system directly? Do you think that I haven't already done a whole lot of working within the system? This doesn't work for basic assumptions. Parents shouldn't have to play this game.

"When you volunteer your time and effort to help teachers and schools, they get to know you better, get to trust you more, and may take your opinions, suggestions, requests and advice just that much more seriously."

Do I have to spell it out for you? I've done these things. I've gotten along great with my son's teachers and principals. I've had long discussions with them, including those who were in charge of the math curriculum. We might as well be on different planets. I may have changed their thinking a little bit, but they still use Everyday Math. This process for change is not acceptable. The solution is not to change the subject and talk about how best to work within the system. I know all of that. When I talk about fundamental changes like curriculum choice, you talk about being an education helper.

"I think you'd get few students to sign up for after-school Singapore Math, never mind what the school would think."

And you base this on what?

Oh, excuse me, I entered a post a while ago about textbooks and parents' experience with their math programs. Thought I'd enter my two cents again, and really don't mind being ignored. Regarding after-school programs, the Powell Elementary School in Washington DC started an after-school Singapore Math program that was strictly voluntary. They got a pretty good crew of kids staying after, even up to the last week of school. The principal of the school talked to the new Chancellor of Education in DC (Michelle Rhee) about the success of the after-school Singapore Math program, and Rhee agreed to have it be the official program in K-3 at that school starting this year. (FYI, DC Public Schools adopted Everyday Math in 2005. For more about that, see: http://www.thirdeducationgroup.org/Review/Essays/v2n6.htm ) Besides the approach used in Singapore Math, and its effectiveness, the principal of Powell School also liked the simplicity of the language used in the books.
In a school in which 90% of the school population are English Language Learners, the book has been very accessible. This is not surprising, given that three languages are spoken in Singapore, but in its public schools all classes are conducted in English. Thus, the books are constructed with English Language Learners in mind.

Steve wrote: "Are you saying that no acceleration is fine? Enrichment can be good or it can be bad, but it can't be the only way to differentiate."

Yes, actually, I am saying that no acceleration is fine. If enrichment is good, I think it's not only an acceptable way to differentiate, but an excellent way. Acceleration of material does not offer the kids "more", it offers them "the same, but sooner". Enrichment actually offers them "more" -- material they wouldn't otherwise cover at all.

Perhaps you and I are not using the word "acceleration" in the same way? What I have seen labeled as "subject acceleration" is allowing students who have completed the curriculum for 3rd grade to move on and work on the curriculum for 4th grade. The way I have seen this implemented most is by letting a couple of younger kids who are ready for 4th grade material go to a 4th grade classroom during math time. This generally doesn't cost the school anything, so most places offer this as a cheap way to write off the needs of the gifted kids. This is better than nothing, but by no means a great solution for gifted kids. You are offering the kids some new content, but placing them in a class whose pace and teaching style are still aimed at average learners. It's still going to be too slow and too shallow for gifted learners. The most important feature of a good differentiation plan for gifted learners is grouping them together, so that they can be taught at a faster pace and they don't always have to sit around waiting for the average and below average kids to get it, whatever "it" is.

A good math enrichment program is possible at all levels, even with kids who have only learned addition and subtraction. (And even with kids who haven't learned even that!) There is no particular reason that gifted learners absolutely must be taught multiplication or fractions as soon as they master addition and subtraction in order to offer them an appropriate level of challenge.

Barry questioned on what I based my assumption that kids wouldn't want to sign up for after-school Singapore math programs. I based it on the experiences of many parents of gifted kids I know who try to "after-school" their kids (with Singapore Math or anything else). Most people run into problems at the point when the school starts giving more than a few minutes per day of homework. At that point, most of the kids rebel against the additional school work.

I'm impressed with the fact that you got so many kids in your community to take a voluntary after-school math class. Your story about how the success of those kids influenced a larger-scale change of curriculum for the district (?) is an example of exactly the kind of thing I was talking about -- demonstrate the value of what you are asking for and you are more likely to get it. Congrats to all involved in that effort!

"Yes, actually, I am saying that no acceleration is fine. If enrichment is good, I think it's not only an acceptable way to differentiate, but an excellent way."

For any low standardized grade-level expectations? What if the school is just teaching kids to tie their shoes in third grade? Silly?
OK, what about my example of still teaching kids their adds and subtracts to 20 in third grade? Perhaps you assume that any content and skills level a school chooses is fine, but the relationship between grade-level expectations and the value of enrichment is not arbitrary. Enrichment (assuming that it's good) will always help at any level, but it can't make up for low grade-level expectations. Acceleration can mean just allowing kids to progress to the level of Singapore Math.

Also, enrichment, by definition, implies extra, or add-on. It shouldn't be necessary if the curriculum is good to begin with. Enrichment isn't necessary for Singapore Math. It may be nice, but it isn't necessary. You can't divide math into boring drill and kill and enrichment. There is a whole lot in between the two.

"Acceleration of material does not offer the kids "more", it offers them "the same, but sooner". Enrichment actually offers them "more" -- material they wouldn't otherwise cover at all."

Acceleration is not more, but the same, sooner? If this material is so important, then it would be part of the curriculum, not enrichment. I guess you have a funny definition of "more". Enrichment is extra. Acceleration means allowing kids to progress at a faster pace through the material. A curriculum like Singapore Math is not a series of "the same, but sooner". Math is not built around enrichment. You can't redefine "extra" to mean "necessary". If you contort definitions, you can justify almost anything. If, however, a school offers (at least the choice of) a curriculum like Singapore Math, then one could argue against acceleration, but only from a practical or scheduling standpoint, not a pedagogical one.

"Perhaps you and I are not using the word "acceleration" in the same way? What I have seen labeled as "subject acceleration" is allowing students who have completed the curriculum for 3rd grade to move on and work on the curriculum for 4th grade."

And you call this "the same, but sooner", as if there is a whole lot more to math than what's in a math curriculum? Isn't a curriculum supposed to contain everything that's necessary? How can moving faster through a curriculum not be "more"? Are you just talking about "bad" curricula?

" ...so most places offer this as a cheap way to write off the needs of the gifted kids."

Strawman. I'm not talking about any such thing. Besides, many places use enrichment as a way to write off the faster-pace needs of talented kids. And many G/T programs are about appeasing one group while not fixing the underlying problems for the rest.

"The most important feature of a good differentiation plan for gifted learners is grouping them together, so that they can be taught at a faster pace and they don't always have to sit around waiting for the average and below average kids to get it, whatever 'it' is."

I'm all for homogeneous grouping, but as I've said over and over, this is anathema to most schools. It's an assumption. They will never do it, and no amount of after-school volunteering will change it. But I don't understand you. Homogeneous ability grouping and a faster pace is NOT acceleration? Faster pace of what? The curriculum? Enrichment? If you separate kids by ability and if you use a good curriculum, like Singapore Math, then what's the big deal about enrichment? Acceleration becomes less of an issue ONLY for schools that provide multiple curricula, because the acceleration is built in. Of course, acceleration is not going to help a curriculum like the old MathLand or TERC.
Faster-paced crap is still crap. But that's not what I'm talking about. Maybe you are. Even if you have a good curriculum, students can still benefit from acceleration or deceleration of the material, but this can only be reasonably done up to a certain point in one classroom. Enrichment is just extra, unless you're talking about a bad curriculum.

"There is no particular reason that gifted learners absolutely must be taught multiplication or fractions as soon as they master addition and subtraction in order to offer them an appropriate level of challenge."

This is another strawman, and a separate issue. I didn't say anything of the sort. Lots of methods can work, but it also depends on where you are going and what performance levels you expect, at least for some point in time. You can define your own expectations (perhaps with good reason), but there are certain key points where externally-defined tests need to be taken. The big ones are the SSAT, the SAT, and the AP Calculus tests. The other required expectation is a path to algebra in 8th grade that everyone can manage to follow, NOT just gifted students. The solution is not just separating students by ability. The solution is to provide good math curricula and high expectations for all kids.

"Most people run into problems at the point when the school starts giving more than a few minutes per day of homework. At that point, most of the kids rebel against the additional school work."

That's because of the stupidity of doing double math work; because of the stupidity of not allowing Singapore Math as a choice during the day, not just after school.

"Your story about how the success of those kids influenced a larger-scale change of curriculum for the district (?) is an example of exactly the kind of thing I was talking about -- demonstrate the value of what you are asking for and you are more likely to get it."

This process is OK? This validates your smug position? Do all this work and hope that you can change minds? This is our fundamental disagreement. This process should not be necessary. All there needs to be is enough parental demand and a school with reasonable resources to meet that demand. We're not talking about teaching Creationism. We're talking about math. We're talking about years of opinion-based math and your position that the onus is on the parents to prove that something else is better. On top of it all, this process may not work. In fact, my son's school likes Singapore Math (in a general sense), but they think it's too challenging for most kids, especially in a mixed-ability classroom. Case closed. Proof is not always enough. Their assumptions rule, and there is no choice.

Steve, indeed you and I are using "acceleration" to mean different things. The definition I am using (which corresponds to the one used in the literature about teaching gifted students) is allowing students who are ready for a higher grade level to move to it at a younger age. This can include things like "grade skipping", "compaction" (doing 2 grades' worth of material in one year), or "subject acceleration" (doing higher grade-level material in a given subject), which is what I'm talking about here. With subject acceleration, if you're in an Everyday Math school and you get subject acceleration in math, you would do 4th grade (or 5th grade or 6th grade) Everyday Math instead of 3rd grade Everyday Math. You seem to be using "acceleration" to mean "substituting a whole different curriculum".
I haven't seen it used that way in the literature, so perhaps that's why we're talking past one another. You seem to think that the sun rises and sets with Singapore Math, and that it's so "complete" that gifted kids in Singapore Math would never need (or benefit from?) further enrichment. I disagree. Singapore math seems to cover the basics in a rigorous way and offer a good deal of problem-solving practice. It's a great curriculum, but there are still tons of opportunities for enrichment. Meaningful enrichment that will allow a student to be a better mathematician and a better thinker down the road.

A curriculum, even a good one, prescribes what everyone must learn. Enrichment allows capable students to learn more. You asked why, if the topics are so "important", they are not included in the curriculum to start with. The answer is that the curriculum is developed for "everyone", not for gifted learners. Enrichment, effectively, adds additional topics to the "gifted curriculum". Many things can be beneficial for gifted learners to learn even if not everyone can or must learn them. Now, granted, if all a school taught was shoe-tying in math class, then it would be hard to offer mathematical enrichment to kids with no mathematical skills. But that's a strawman as well.

As to whether changing things in the way the DC group changed things is an OK process, sometimes when you can't walk into a building through the front door, you may find the back door more accessible. I've said earlier in this discussion that in an ideal world, parents shouldn't have to do things like this, but in the real world, you do what you have to do.

Mathmom, Steve -- I had been thinking of setting up an online interview between two articulate representatives of opposing forces in the Math Wars. Thank you for doing this for me. I believe you can respectfully agree to disagree. I would like to invite others to join in, but truthfully this debate is far too important and powerful to reside in a set of comments. I did have a set of 19 questions I would have liked to ask each of you, but, unfortunately, I seem to have misplaced them. I believe they are in the margins of one of my notebooks, along with my proof of Fermat's theorem... Seriously, I admire your tenacity and sense of purpose.

Steve, you may not believe that I too have been extraordinarily frustrated with educational bureaucracies. Like you and Mathmom, I don't give up. I see merit in both of your positions even though you say there is no middle ground. I have been committed to providing enrichment for all of my students for all my years in the classroom. The curriculum, the textbook, the ancillaries never ever went far enough to develop student understanding. If you've read the investigations I've developed for the past 9 months, I'd like you to tell me in what math program you can find them, Singapore included. There is so much depth to plumb here. BTW, these investigations require strong skills. I don't believe I could ever convince you of any of this...

Mathmom wrote: "Barry questioned on what I based my assumption that kids wouldn't want to sign up for after-school Singapore math programs. I based it on the experiences of many parents of gifted kids I know who try to "after-school" their kids (with Singapore Math or anything else). Most people run into problems at the point when the school starts giving more than a few minutes per day of homework. At that point, most of the kids rebel against the additional school work."
"I'm impressed with the fact that you got so many kids in your community to take a voluntary after-school math class. Your story about how the success of those kids influenced a larger-scale change of curriculum for the district (?) is an example of exactly the kind of thing I was talking about -- demonstrate the value of what you are asking for and you are more likely to get it. Congrats to all involved in that effort!"

These were not gifted kids, however. They signed up because the principal of the school introduced it as a program. That's key. Someone high up in the school is advocating Singapore Math. Steve H and I can advocate for Singapore Math 'til we're blue in the face, and the best we can do is a response of "It's nice but too hard for our kids." There are not many principals like the one at Powell School who are willing to take the chance she did. I had nothing to do with it. It was the principal who demonstrated the value of Singapore Math. So far, it is only being implemented in Grades K-3 at the Powell School, thanks to Chancellor Rhee. No other school is using it. I am hoping that the Chancellor will agree to have Singapore Math used at other schools in DC.

Demonstrating the value of something is no guarantee, however. Politics plays a larger role. I am hopeful that the DC school politics are favorable to Singapore Math. They were not favorable to it in Montgomery County, MD, where it was piloted in 4 schools from 2000 to 2003. Despite evidence of success, it was dropped. Everyday Math is now used in Montgomery County, MD. For more information on that, see:

Dave, let me assure you, I don't "represent" anything other than my own unique opinions. (And anyhow, I thought I was somewhere in the middle ground, not really firmly on either "side".) Steve and I seem, at this point, to be talking in circles. I don't think either of us will convince the other of much of anything new at this point. 'Tis probably indeed time to agree to disagree.

Let me know when you find that notebook. That Wiles guy's proof is a bit long, ya know? I knew you were more centrist, like me, but it's fair to say you and Steve are not exactly in the same place! I still think this ongoing dialog is publishable and perhaps should be required reading for all parents, teachers and board members everywhere! Another thought -- why not a face-to-face on YouTube like the Point-CounterPoint segment from 60 Minutes...

I came across this interesting and semi-relevant blog post today about effecting change in schools. This particular article is about advocating for gifted kids, and the author has the advantage of being on staff at the school in question, but it still seems that there might be nuggets folks could take from that article and apply elsewhere.

"The definition I am using (which corresponds to the one used in the literature about teaching gifted students) is allowing students who are ready for a higher grade level to move to it at a younger age."

We're not just talking about gifted students. This all started on a previous thread in response to Prof. Steen's answers to questions about math education in general. The Math Wars is all about big differences of opinion over grade-level content and required levels for mastery of skills. All of this is NOT about enrichment. This is about educators like Prof. Steen who seem to get a kick out of dissing parents and college professors and saying that they (K-12 educators) have the final authority over what math is or is not for K-12.
I have said before that there is a large academic turf battle in this war, and parents are the ones caught in the middle.

You seem to be hung up on enrichment. Who can be against enrichment? This is like the use of balance. Who can be against balance? Generally speaking, enrichment is good, but there has to be a trade-off, unless you demand that kids attend after-school enrichment. What are you giving up to get this enrichment? You seem to be saying that there are no trade-offs, that enrichment is always good. This is NOT the case. Then, you go further away from the original discussion by assuming that the curricula are generally good (or can't be changed), so adding enrichment is always good. That's not what this discussion is all about. It's about curricula, not what you can add on to them if nothing else can change. But then you go further and claim that enrichment is always better than acceleration. You can't say this.

"Singapore math seems to cover the basics in a rigorous way and offer a good deal of problem-solving practice. It's a great curriculum, but there are still tons of opportunities for enrichment. Meaningful enrichment that will allow a student to be a better mathematician and a better thinker down the road."

Yes, more is better than less, but once again, we're not talking about more, we're talking about substitution. Besides, if all schools used Singapore Math, there would be no Math Wars. But what happens when the curriculum is worse, much worse? You seem to think that enrichment is still the best solution. Perhaps ONLY if there is nothing that can be done about the curriculum. (Again, this is not what the Math Wars is all about.) But I would still disagree with you. Acceleration can be better than enrichment, even with good curricula.

"Now, granted, if all a school taught was shoe-tying in math class, then it would be hard to offer mathematical enrichment to kids with no mathematical skills. But that's a strawman as well."

You deliberately ignored what came after my comment about shoe-tying. That was the example about our school, which is still trying to teach adds and subtracts to 20 in third grade. This is based on their assumption of no separation of students by ability and full-inclusion. To get this to work requires lower expectations. Spiral curricula facilitate this assumption by giving a pedagogical basis for delayed mastery. But mastery doesn't happen, and enrichment provides no guarantee of a fix. It's also no excuse for lack of acceleration.

In fact, many educators trash the idea of skill mastery as being "drill-and-kill". They even go so far as to unlink mastery from understanding. They see mastery as only adding speed. You may use the example of 60 problems in 3 minutes, but the problem of mastery in schools is absolutely, positively nowhere near that level. We're talking about kids in 5th grade who have to think about the solution of 7+8. We have kids in 6th and 7th grades who still don't know their times tables. This problem is not just about basic arithmetic. It continues with fractions, decimals, and percents. This lack of mastery all adds up. Schools will only enforce mastery to the level of trivial standardized tests. This is not enough. Affluent parents get to send their kids to Kumon or to private schools where "serious" students can thrive with (or in spite of) almost any curriculum. Poor and minority students get low expectations at home and at school.
The fallacy is that reform math is somehow better; that it teaches more understanding; that it prepares students for the 21st century. It does no such things. Enrichment is no solution. They have to change their basic assumptions. The onus is not on others to prove that there is a better way. The onus is on the schools to explain why they can't offer a choice of curricula. National groups seem incapable of defining content and mastery expectations that lead to algebra in 8th grade, so the only option is choice. The goal is not about raising low cut-off points on standardized tests. It's about giving every individual equal access to curricula and expectations that match their abilities.

Steve, sorry, I still don't really understand what you're trying to advocate when you say "acceleration". You seem to mean "use a rigorous curriculum and require mastery". These are both things with which I heartily agree. They also have nothing to do with any educational definition of "acceleration" that I have ever seen.

I like Singapore math, and I have no problem with requiring mastery -- of course I agree that mastery should be required. This is not inherently incompatible with the use of a spiral curriculum. A good spiral curriculum implementation will have students re-visit topics until they master them. At the same time, the seeds of more complex topics are sown. A good spiral curriculum implementation would be able to support children still learning adds and subtracts to 20 and also those who have mastered that and are ready for harder addition and subtraction. I wouldn't call that "acceleration"; I would just call that proper implementation of a spiral curriculum, which includes different expectations for different children, based on their current degree of mastery of the skill being worked on.

If it's the case that Everyday Math never checks or requires mastery, then that is indeed a problem. It would be interesting to know, when folks observe EM to be lacking in that way, whether that is inherent to the design of EM, or whether it is being mis-implemented, possibly due to a lack of teacher training. Again, if it's the case that EM doesn't support differentiation of expectations based on individual students' levels of mastery, that is also a problem. Again, I don't know if that would be a fault of the program, or of those implementing it. And of course, even if EM does theoretically do these things "right", if it is so hard to implement correctly that it is being implemented incorrectly all across the country, then that too is a problem!

I also have no problem offering parents a choice between a Singapore-like curriculum and an EM-like curriculum. But if you're really concerned about students who you claim get "low expectations" at home as well as at school, I don't see how "choice" is the solution. (If, on the other hand, it is not the expectations themselves that correlate with affluence, but rather the ability to do anything about them, then choice could help.)

There seems to be a mis-perception here that I'm defending "reform math" or representing some "side" of an argument about particular reform curricula. I'm not here to participate in the "Math Wars" per se. I've made specific comments and arguments about specific statements and situations. I have never represented myself as a representative of any particular Math War position. It is you (and to some extent Dave) who seem to be interpreting my words more broadly than I wrote or intended them.
I'm sharing my own experiences and observations here. With a well-implemented spiral curriculum that's not related to any major "reform curriculum". With taking time to include non-routine problem solving in the curriculum for all students. Mastery is good, choice is good (within reason), differentiation is good, ability grouping is good. We don't disagree as much as you seem to think we do, Steve. I think our main differences are that:

1) I think that a spiral curriculum can be well implemented and work well for children with a wide variety of abilities and aptitudes.

2) I think that teaching problem solving, using challenging problems and investigations (even beyond what Singapore already includes), is an important and valuable part of math education for all students (even those still working toward mastery of the underlying skills), and that mastery need not be traded away to make the time to do it. (As a poor compromise, I'd recommend offering this as "enrichment" for the high achievers, at a minimum.)

I'm going to try very hard to shut up now. I've explained my positions, observations, arguments, etc. multiple times. If I have still failed to make myself clear, I doubt making additional attempts will make any difference. Anyone who was likely to be convinced by my arguments should already be convinced, and I doubt anything else I say will convince anyone else. Steve, thanks for the civil discussion. Dave and Prof. Steen, thanks for the opportunity and instigation.

"I still don't really understand what you're trying to advocate when you say "acceleration". You seem to mean "use a rigorous curriculum and require mastery". These are both things with which I heartily agree. They also have nothing to do with any educational definition of "acceleration" that I have ever seen."

Then you have a very narrow understanding of acceleration. I'll try to be as clear as possible.

1. The fundamental assumption of many K-6 schools is full-inclusion. Schools track by age and include kids who used to be separated and sent to other schools. This also means no separate gifted and talented programs or pull-outs. This is a noble idea, but it doesn't come without a price.

2. To get full-inclusion to work, they use team teaching, set lower expectations, and use a spiraling curriculum that is built around no set dates for mastery. This is the fundamental premise of Everyday Math. This is one of the biggest reasons for its popularity, not that it's such a good curriculum. (And it isn't. It is structurally flawed.)

3. This approach to spiraling allows schools to maintain full-inclusion and hide behind a veneer of "no drill and kill", "conceptual understanding", and "real-world" problem solving.

4. What this happy talk hides are low expectations and a slow pace. I told you twice about our school finally trying to finish up adds and subtracts to 20 in third grade. The reason for this is full-inclusion. They can't expect more from many kids.

5. To solve this problem, they push "differentiated instruction". This is supposed to allow full-inclusion classrooms to meet the needs of all students. It can't, because the expectations are too low.

6. The primary method of differentiation is enrichment. They can't allow "acceleration" of material as a way to differentiate (within a curriculum and within a classroom) because it makes mixed-ability, child-centered learning impossible, and THAT is the main purpose of full-inclusion.
Schools proceed to talk about the wonders of differentiation and enrichment, but what they are really saying is that they WILL NOT provide acceleration of content and skills because it can't work with full-inclusion - their fundamental assumption. That's why you see people like Prof. Steen saying that acceleration is not that important; that enrichment is all you need. When you come along and talk about enrichment as the only thing you need, you sound just like them. I don't think you are, but you don't seem to see what's going on here. This is NOT a small problem. You say that you like the idea of separating kids by ability, but for many schools, that cannot, will not, ever be a possibility, by definition, until 7th grade. Expectations are low, math curricula are bad, and they don't allow acceleration in their full-inclusion classrooms, only enrichment.

My son is in sixth grade and last week was in a group of three kids who were working on a social studies poster. (A collage in sixth grade!) The girl in his group was just cutting up tiny bits of paper all over their work and complaining to the teacher about the other two kids in her group. My son has no idea what is wrong with her and absolutely no preparation or training on how to deal with kids like this. This is what full-inclusion is like; a social experiment first, education second. As I said before, this is a noble idea, but there is a price. The price is low expectations, a slow pace, and a very fuzzy idea of what constitutes a proper K-6 education.

Acceleration is a term that can be used to mean much more than separating kids by grade or classroom for a particular curriculum. Acceleration versus enrichment is a common focal point in many discussions of full-inclusion and differentiated instruction. Parents want acceleration. The schools give them enrichment.

"If it's the case that Everyday Math never checks or requires mastery, then that is indeed a problem. It would be interesting to know, when folks observe EM to be lacking in that way, whether that is inherent to the design of EM,..."

Never, or lacking? EM states clearly that there is no expectation of mastery at any particular time. This doesn't mean never, and many schools are smart enough to impose some level, but it's not required. If you've never seen it, you should. The Math Boxes are the worst. I could teach EM well, but that's not the point. Other (non-reform math) curricula are better.

"But if you're really concerned about students who you claim get "low expectations" at home as well as at school, I don't see how "choice" is the solution."

Schools can't do anything about parental help at home, so they had better do something about expectations at school. This doesn't mean raising low cut-off standards a little higher for all. It means choice. Parents may not be able to help with math homework at home, but they (and the school) can push kids into better curricula. Individual educational opportunity is not improved by raising low cut-off standards.

"(If, on the other hand, it is not the expectations themselves that correlate with affluence, but rather the ability to do anything about them, then choice could help.)"

It's both: expectations and the ability to do something about them. Many poor families have expectations too. They might not be as well-defined, but they have no way to do anything about them. If urban kids were given a free ride and the choice to go to a fancy private school, not many parents would say no, and that's the only expectation they need.
"It is you (and to some extent Dave) who seem to be interpreting my words more broadly than I wrote or intended them. " It's because you don't understand how many schools pit acceleration and enrichment against each other. Many schools lower expecations and hide behind enrichment. "1) I think that a spiral curriculum can be well implemented and work well for children with a wide variety of abilities and aptitudes." As I said before, all curricula do spiraling at some level. There is nothing wrong with spiraling in general. The problem is that spiraling is used by reform math to allow full-inclusion and delayed mastery. Everyday Math is a classic example. It's spiral consists of repeated partial learning. Math Boxes are used in the desperate attempt to somehow get kids to eventually figure it out themselves someday. This is really nothing about using previously mastered material in more complicated situations. "2) I think that teaching problem solving, using challenging problem and investigations (even beyond what Singapore already includes) is an important and valuable part of math education for all students (even those still working toward mastery of the underlying skills), and that mastery need not be traded away to make the time to do it. (As a poor compromise, I'd recommend offering this as "enrichment" for the high achievers, at a minimum.)" Boy, I would too, but reform math (with full-inclusion) is so far away from this point that just getting the option or choice of Singapore Math (without enrichment) seems like a dream. Do you really understand how many educators despise Singapore Math? And it has nothing to do with any so-called lack of enrichment. You seem to be focused on some sort of perfection while the rest of the world is struggling with meeting trivial math standards. Have you looked at the NAEP tests and results. We're not talking Singapore Math-type enrichment here. "(even those still working toward mastery of the underlying skills)" You have to understand that this is just the sort of argument used to justify delayed (or never) mastery of skills. You have to be very careful how you define "mastery" and "delayed". Apparently, I am not actually capable of sitting on my hands. :-/ But I'll keep it brief for a change. 1) In a class of 3rd graders where some students still haven't mastered addition and subtraction up to 20, what does EM say the other students who have mastered it are supposed to be doing? 2) What would you, Steve, like the kids who have mastered adds and subtracts up to 20 to be doing while the struggling kids work on that? In the heterogeneous groupings I've experienced (and I'm talking about 3 grade levels worth of kids in one groups, so there's tons of spread in abilities), the kids who have mastered adds and subtracts to 20 would be working on things like 3-digit adds and subtracts with no re-grouping, and also learning carrying and borrowing. I now think that this is what you're calling "acceleration". I would call this "differentiation". Once kids have mastered carrying and borrowing and other similar-level skills, they would move to the next group, and spiral on topics related to multiplication, division, fractions, etc... If they do this at a younger age than usual, I'd call this "acceleration". When you say EM advocates "differentiation" but that it "can't work" because expectations are too low, that doesn't make sense to me. "Differentiated instruction" should by necessity include "differentiated expectations". 
Sounds like maybe your school just doesn't get how to do differentiation? Or again, maybe we're using the same word to mean different things. OK, that was only brief for large values of brief. :-}

"...what does EM say the other students who have mastered it are supposed to be doing?"

In EM, everyone moves along at the same pace. Everyone is on the same page of the school and home workbooks. The kids who don't understand the material have to move on even though they haven't mastered the material (or even half-understand it). EM thinks this is OK because they will see the material again. Spiraling in EM is not about using previously-mastered material in a more complex fashion. It's about seeing that same material over and over, whether you've mastered it or not. For those who have mastered the material, they consider it a review. For those who haven't yet mastered the material, they get to work on it some more. You might call this differentiation over time, or of mastery, rather than differentiation of material.

One of the biggest complaints about EM is that it introduces new topics without giving enough time to master previous material. There is little careful development of the material. This gets worse in the later years of EM. I might have mentioned before that I spent this last summer going over sixth grade EM (the new edition) with my son so that he could start taking 7th grade pre-algebra in sixth grade. By sixth grade, EM is desperately trying to make sure that everyone is up to speed on mastery. Math Boxes are the main tools for doing this and they dominate the lessons. They still introduce new material, but some of it is more appropriate for 7th grade pre-algebra. Everybody has to do these problems, even if they are struggling with the review Math Boxes. It makes EM seem advanced, but it's not a careful introduction and development of each topic. They just throw new material at the students. It doesn't matter to EM because the students will see it again. EM has no mechanism for ensuring mastery except repeated exposure. That's the point. I call this repeated partial learning.

Inside of each EM lesson are two or three pages of Math Boxes. Each of these pages is broken into a number of rectangular boxes, each with a few review problems to do. These problems don't have anything to do with the current lesson, and each box has nothing to do with any other box. So, right in the middle of a lesson of new material to learn, students have to do these Math Boxes. There are so many Math Boxes, and so much jumping around of the material in them, that it's impossible for a teacher to spend class time to review the skills needed for any of the review problems if a student didn't master it the first time. The kids are on their own. In all of my son's previous EM classes, these boxes were self-corrected in class and not turned in. They just moved right along.

Speaking of which, if you ever look at all of the books and workbooks that come with EM, add up the number of pages and divide by 180 (the number of school days in a year). There is way too much "stuff". It can't be done. My son's fifth grade teacher didn't have time to cover the last 30 percent of the course. She ran out of time. She either had to fly through the course or skip part and take some time to help struggling students. The advanced students twiddled their thumbs and never got the material that they were ready for. It doesn't matter because EM throws it at them like splatter in the middle of page-after-page of Math Boxes.
EM is not set up to allow kids to skip Math Boxes and move ahead to new material. Everyone is on the same page. Differentiation in EM means different levels of mastery, not different material.

"In the heterogeneous groupings I've experienced (and I'm talking about 3 grade levels' worth of kids in one group, so there's tons of spread in abilities), the kids who have mastered adds and subtracts to 20 would be working on things like 3-digit adds and subtracts with no re-grouping, and also learning carrying and borrowing. I now think that this is what you're calling "acceleration". I would call this 'differentiation'."

Three grade levels of kids in one group is not common, but this is not real acceleration. If you don't allow kids to move on to material in the next level, then it's really just compacting. Some schools like to fool parents and call this acceleration. Since most schools don't allow acceleration past the material defined for that grade, the only thing they can offer is enrichment. If the grade-level expectations are low (and they are), enrichment can't solve the problem. True acceleration requires separation by ability. Even if you put three grade levels together, something has to give at the top end of the group.

Differentiation, by definition, is used to group kids with widely different abilities. At least some of the time, these kids work together in mixed-ability groups. Prof. Steen argues that this is best for all kids. It is not. When ability differences get past a small range, separation and acceleration are necessary. Since many schools can't seem to provide math curricula that set high expectations of mastery and coverage of material, enrichment can't solve the problem, but that's what they claim.

"When you say EM advocates "differentiation" but that it "can't work" because expectations are too low, that doesn't make sense to me. "Differentiated instruction" should by necessity include "differentiated expectations". Sounds like maybe your school just doesn't get how to do differentiation? Or again, maybe we're using the same word to mean different things."

EM says that it's OK for each student to absorb whatever he/she can. It doesn't differentiate material. Expectations of mastery are low because that's the fundamental premise of EM. You might call this self-differentiation. There is no mechanism for teachers to decide whether students need extra time or a kick in the rear. For other subjects, the school will differentiate material or expectations explicitly, but it's easier to get away with this in non-math subjects. I don't agree with this, but it's less damaging than in math.

In EM, kids who need a slower, more in-depth pace just get pushed along and told that they will see the material again. The problem is that later on, there is little or no time for explanations. There is so much "stuff" in EM that you can't slow down unless you skip material. EM says that mastery will come (automatically?), but it doesn't. The best schools edit lots of junk out of EM, they slow down, and they don't allow any delay in mastery. They should just get a new curriculum. All of this still doesn't deal with the underlying issue of separating kids by ability or level. Schools can't increase student ability ranges with full-inclusion classrooms and then say that differentiation will solve the problem.
This EM website says that each grade level comes with a Differentiation Handbook: a grade-specific handbook that helps teachers plan strategically in order to meet the needs of diverse learners. Has anyone seen it, and does anyone know what it really contains?

I wasn't trying to say that putting 3 grades worth of kids in one math group was "acceleration", but it certainly requires a great deal of differentiation! There are kids in the same group learning to count, add with manipulatives, understand place value, carry and borrow, all in the same group. They are not all doing the same work at the same time, of course. All I am trying to say here is that "differentiation" can be used to accommodate the needs of very diverse learners. What acceleration is, is when a 6yo demonstrates mastery of the material covered in that group and moves into the group that's working on multiplication, division, fractions, decimals, etc. And this does happen in our system, albeit rarely. Enrichment would be another reasonable (IMO) alternative for a 6yo who "finished" the K-2 curriculum, but it is a lot more work than letting them accelerate at that level. We do work enrichment in throughout as well. Nothing really gives at the top of the group, because expectations can be set on an individual basis, and eventually the student will move to a higher group. Our highest group covers the skills through pre-algebra. Kids who master those skills work on Algebra, but generally individually, with a tutor to touch base with them once a week. They could go beyond that as well, though we generally (in consultation with parents) prefer to intersperse more really good enrichment at this point and not accelerate them further than the standard honors stream, which expects freshmen to be ready for honors geometry. By the way, I'm not trying to say that all schools should do math the way my kids' school does it. I'm only making the point that a skilled teacher can effectively differentiate over a wide range of levels. It's hard, but in a situation where ability grouping is not done, it's a necessary part of the teacher's job. Ability grouping has many benefits, but if you can't have it, all is not necessarily lost.

1) All of my posts had nothing to do with making the best of an existing situation. 2) All of my posts had nothing to do with your school. But, since you brought it up:

"Our highest group covers the skills through pre-algebra. Kids who master those skills work on Algebra, but generally individually, with a tutor to touch base with them once a week. They could go beyond that as well, though we generally (in consultation with parents) prefer to intersperse more really good enrichment at this point and not accelerate them further than the standard honors stream, which expects freshmen to be ready for honors geometry."

Normally, kids should get pre-algebra in 7th grade and algebra in 8th grade. Standard honors tracks in high schools usually require at least a 'B' average in a rigorous algebra course in 8th grade. It sounds like you think that all they need to do is to "touch base with them once a week" in algebra. Are you talking about 8th grade? Most schools offer two or three levels of math in 8th grade, including honors algebra. Your school may be able to pull it off. I can't comment specifically, but this is a real problem in many other schools. Math curricula and decisions made by schools in 4th or 5th grade set kids onto non-honors ("life skills") tracks, and parents don't figure it out until it's too late.
In some cases, it's worse than that. Our school used to use CMP (stopped last year, finally!), which did not meet the requirements for entering honors geometry (or even algebra) in 9th grade. Very surprised ('A') students and their parents had to scramble to get ready for high school. Now (like many other schools), they provide a class using the same algebra text that the high school uses. The only issue left is to prepare more kids for that track. This will not be done using a curriculum like Everyday Math, which leaves expectations (mastery) up to the kids or the state. Algebra in 8th grade should be the norm, not the exception. Whether your school can get something else to work or not doesn't matter much. Some schools and parents like a more complete un-schooling approach. That's OK, but just don't force it on my child. In fact, I don't want to force my ideas or opinions (like a normal path to algebra in 8th grade) on anyone else. The only option is choice. Although most schools provide a choice of a rigorous algebra course in 8th grade, they don't provide a path (choice) to get there. I didn't get any help from my parents to get to a course in algebra in 8th grade, but that's unlikely to happen nowadays unless you're a math brain or get outside help.

It sounds like your district has problems with its math program. It's just not clear to me that those problems are caused by EM, as opposed to the refusal to ability group students, lack of in-class differentiation, etc.

"It's just not clear to me that those problems are caused by EM, as opposed to the refusal to ability group students, lack of in-class differentiation, etc."

Then you need to do your own research. Obviously, my detailed explanations didn't raise any doubts. This is common. Many teachers can't believe that EM is structurally flawed. They claim it's just the implementation. But this is always the case. Good teachers can teach math with almost any lousy curriculum. Imagine what could happen with a good curriculum. 1) EM doesn't allow ability grouping. Everyone is on the same page. 2) Their idea of differentiation relates to expectations and the different ways people learn, not differentiation of material. It's very difficult to turn EM into something it isn't. Ultimately, it's not my job to convince you or any school to change their opinion, and that's what it is. Opinion. There is a huge difference of opinion about what constitutes a proper math education. Many parents want something different, and many others would want it too when they see it in action. The onus is on the schools (not parents) to show why they cannot provide choice. Schools and teachers cannot force their own opinions of education and expectations on everyone and then offer no choice. People like Prof. Steen can't assume that parents are stupid and incapable of understanding the issues. Schools and teachers don't want to admit that a large portion of what they do is based on assumptions and opinions. They get to pick a curriculum based on whatever they want, but then require "proof" from others who want a change, or even a choice. The Math Wars are all about academic turf. Parents, professors, mathematicians, engineers, and scientists have been arguing for (at least) choice in K-8 mathematics for years. Schools and teachers don't want to lose the right to force their opinions on others. They don't even want to allow choice.

I understand why you and I and other well-informed parents want choice for our kids.
I also understand why schools are reluctant to offer it without "proof" that what we are proposing to choose for our kids is at least as good as what the school thinks (its opinion*, yes) is best. It's quite simple. In the final analysis, it's the school that is "on the line" to make sure their kids meet NCLB requirements, and whose reputation is on the line if their precious test scores drop. So, that's why their opinion trumps everyone else's -- they're the ones who have been made responsible for students' progress. [* For EM, that opinion is backed up by years of research. I believe, as I know you do, that that research may be severely flawed, but it is there.] If some parents came along and made the school offer a choice of some inadequate program that they thought would be easier for their kids, and those kids did not meet state standards, it is the school that would get dinged for that. So, IMO, the school must demand "proof" that what anyone else is proposing or demanding be as good as what they want to do. I know perfectly well that that's not what you are asking for, that you're asking to be able to choose a particular curriculum that I agree would be better for most kids than EM, but once you start offering "choice", how do you draw the line as to what parents may choose, if not by requiring proof that what parents propose is adequate? If parents' opinions are to trump everything, it must be the case that parents are held responsible if kids don't progress, and that just isn't the way the system is currently set up (unless you choose to homeschool, of course). There needs to be some way for parents to at least share the responsibility if an "alternative" program that they chose for their kids fails to produce the required results, and I'm not really sure how you'd implement that.

"I also understand why schools are reluctant to offer it without 'proof' that what we are proposing to choose for our kids is at least as good as what the school thinks (its opinion*, yes) is best."

Most schools allow choice in math in grades 7-12. Besides, K-6 educators don't want proof. They just don't want choice, by definition.

"It's quite simple. In the final analysis, it's the school that is 'on the line' to make sure their kids meet NCLB requirements, and whose reputation is on the line if their precious test scores drop."

It's quite simple. K-6 schools don't want choice, by definition. High schools allow choice. Most schools provide choice in 7th and 8th grades. The choice many are asking for in K-6 is for more rigorous curricula, not less. Besides, their reputation in math education (as reflected by standardized test scores) is not very good to begin with.

"So, that's why their opinion trumps everyone else's -- they're the ones who have been made responsible for students' progress."

Baloney! Their opinion is for low expectations. The standards are low.

"[* For EM, that opinion is backed up by years of research. I believe, as I know you do, that that research may be severely flawed, but it is there.]"

Flawed, but you'll use it anyways? Everyone else has to provide "real" proof? Check out What Works Clearinghouse. The very, very small percentage of positive results for EM are only for small relative changes. In fact, there is little good educational research on anything. Even WWC is grasping at straws to make it seem like their existence is of any value. In fact, many schools are using the insufficient data provided by WWC as justification for EM.
In spite of all of the extra emphasis on data collection, statistics, and real-world problems in EM, it hasn't made schools smarter in analyzing data. I had to explain to my son (his textbook didn't do it) that how one interprets a graph depends a lot on how you display the data. If you compress or eliminate the lower end of the vertical axis, you can make the data trend look flat or very steep. Many are scaling the EM data to make the benefit look good on an absolute scale. Sorry. They flunk even reform math.

"If some parents came along and made the school offer a choice of some inadequate program that they thought would be easier for their kids, and those kids did not meet state standards, it is the school that would get dinged for that. So, IMO, the school must demand 'proof' that what anyone else is proposing or demanding be as good as what they want to do."

"Inadequate"? Singapore Math is "inadequate"? This is not about "some parents". This is about decades of complaints by professors, mathematicians, engineers, and scientists. This is not about lowering standards. How can schools demand proof when they can't provide it themselves? This isn't about proof. It's about control. Schools don't require proof when they select curricula like EM.

"...but once you start offering 'choice', how do you draw the line as to what parents may choose, if not by requiring proof that what parents propose is adequate?"

See above. If decades of complaints by lots of professionals don't make any difference, then the issue isn't about a line. Besides, grades 7-12 provide choice.

"If parents' opinions are to trump everything, ... it must be the case that parents are held responsible if kids don't progress, and that just isn't the way the system is currently set up (unless you choose to homeschool, of course)."

Schools blame parents, kids, and society all of the time (not without some justification). But when 50% of fourth graders can't say how many fourths are in a whole (NAEP test), then how much worse can it get? This is about K-6 teaching philosophy and control, not proof. Schools can't have it both ways. On one hand, they use "responsibility" to prevent others from making changes or demanding choice. Then, on the other hand, they blame external causes for bad results. I'm more than happy to relieve them of their responsibility. I'll bet they wouldn't like the trade-off. Besides, before NCLB they weren't offering choice anyways.

"There needs to be some way for parents to at least share the responsibility if an 'alternative' program that they chose for their kids fails to produce the required results, and I'm not really sure how you'd implement that."

So K-6 schools really would like to provide choice, but they don't know how? I don't think so. No school or teacher in their right mind would say that Singapore Math (as a choice!) would be worse than EM or TERC. Still no choice. Schools don't want choice, by definition. They want full inclusion. Absolutely no tracking is allowed. They see choice as a form of tracking, and they would be right. Singapore Math is so much stronger than what they are currently using. Lack of choice is not based on proof or responsibility. It's based on opinion and control.

"Most schools allow choice in math in grades 7-12."

Choice? Most schools offer some kind of ability grouping and/or acceleration in grades 7-12, but kids are placed according to placement tests and/or teacher recommendations, not parental or student "choice".

"Besides, K-6 educators don't want proof.
They just don't want choice, by definition."

It's clear that that's your opinion.

"Singapore Math is 'inadequate'?"

I think you know perfectly well that that was not the point I was making.

"Lack of choice is not based on proof or responsibility. It's based on opinion and control."

Again, in your opinion.

Ah, there will be no resolution here, folks... Actually, I'm going to be doing some consulting with K-12 math teachers in a school district in my area, and I intend to begin our work together by having them read through Prof. Steen's interview and every one of the comments! I think it's a real 'page-scroller' that will set the tone for our work together.

Steve, despite your insistence that everything can be reduced to choice, higher expectations and mastery, I just don't accept that there are simple answers to these issues. Public schools are part of a structure that needs change, but this change will probably occur like most other changes in education -- very slowly. I do believe that we may not have Sputnik to spur radical change, but we do have international comparisons of students and the realities of where our technological expertise will be coming from in the next few decades.

Steve, I do have one question for you, which is rhetorical. How many classified children of your own do you have? Children with a range of learning disabilities but who are capable of being mainstreamed in some classes? Inclusion is not a 'choice' in public education, Steve -- it's the law. We may not all agree that Federal mandates like these are in the best interests of the regular ed population, but there are other points of view out there, different from yours, on this score. I personally have extensive experience in this area. It has helped to reshape my thinking about how incredibly difficult it is to educate 'all' of the children, but that's what schools are required to do. It does take extraordinarily talented and committed professionals to find ways to challenge all the children in her/his classes, but every day millions of wonderful teachers are trying their best to do just that. Whether you choose to respond to these comments or not, well, that too is a matter of choice...
{"url":"http://mathnotations.blogspot.com/2007/09/interview-with-prof-lynn-arthur-steen_14.html","timestamp":"2014-04-19T06:52:46Z","content_type":null,"content_length":"369180","record_id":"<urn:uuid:568e5048-7133-4950-b2d5-f2ee3977df68>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Lorton, VA Algebra 1 Tutor

I have a masters in economics and a strong math background. I have previously taught economics at the undergraduate level and can help you with microeconomics, macroeconomics, econometrics and algebra problems. I enjoy teaching and working through problems with students since that is the best way ...
14 Subjects: including algebra 1, calculus, statistics, geometry

...I love volleyball and play every summer with friends. I'm a personal trainer and I do incorporate some yoga exercises and techniques as it helps with muscle endurance, balance, and breathing. Three months of P90X Yoga is what got me started.
56 Subjects: including algebra 1, reading, English, biology

...You will learn Algebra, love it and get the best grades. I have been studying and working with geometry for more than 30 years, and Geometry was my favorite subject. Also, I have taught geometry to many students of different ages.
5 Subjects: including algebra 1, geometry, algebra 2, SAT math

I have more than 10 years' experience teaching math in the private, public, and charter school sectors. More than 80% of my students pass the state's standardized test each year. I have excellent communication skills which help me to relate mathematical content to my students, and make the concepts seem easy and more doable.
5 Subjects: including algebra 1, elementary math, linear algebra, prealgebra

...During my time, I have taught grades 3-5 and taught advanced academics. I know the curriculum and have experience with preparing students for tests, including the SOLs. Learning can be interesting, especially history; I love to bring history alive for my students.
21 Subjects: including algebra 1, reading, English, ACT Reading
{"url":"http://www.purplemath.com/lorton_va_algebra_1_tutors.php","timestamp":"2014-04-19T19:59:33Z","content_type":null,"content_length":"23981","record_id":"<urn:uuid:5492a97d-c5e9-4570-9f49-1e37921cdd18>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
Point of Inflection

So, get this: I'm writing my own grading software. This is largely because I really like computer programming and I want to try out some new technologies, but it's also because I have found existing grading software severely lacking (especially now that I use this skill-centered grading that is not focused so much on deadlines). If you have grading software that you love, what is it? And what about your grading software is most important to you? For me:

1. Automatic report generation
2. Internet access for students

Thanks for any comments - I'm really pretty serious about making the ultimate grading solution here, and I'd love to incorporate your ideas if they meld well with mine. I should note that my school does NOT have an EdLine-like service for all classes, and our teachers all use whatever grading system they can figure out. If you don't have this sort of burden / flexibility, this question is probably different for you!

Try bringing the problem to physical reality

It takes a lot of class time to have students build models, but here's what I'm almost ready to call a pro tip: people love building models. I wanted to bring Algebra kids from an understanding of the typical calculus box (cut 4 squares out of the corners of a rectangular piece of...) to a function relating height and volume. I spent 100 minutes of class doing this - a huge amount of time. But you should have seen how proud kids were when they got their equation to hit all the points that represented the various boxes they made (for a w-by-l sheet with corner squares of side h, that equation works out to V(h) = h(w - 2h)(l - 2h)). I literally gave them 6 or 7 sheets of paper of the same size, had them cut corners out of them, and record, plot, and draw diagrams of their results. To them, they got to spend 100 minutes on a single, rewarding task. To them I'm going slow. To me, they're spending 100 minutes practicing and learning the benefits of switching between multiple representations of a problem, and finally coming up with the ultimate representation, a function, about which we are just starting to learn <shocked gasp!/>.

Kate Nowak is writing about a trig problem. She just wants to review, so she doesn't have 100 minutes, but literally making a pyramid is such a good idea, you guys. Kids can start from the design below, and in a series of careful questions, you eventually ask them at what angle they should cut the triangles (what angle should that little arc be?) to get a height of 10 cm. To me this is the ultimate "be less helpful," right? You have, verbally, given them literally zero information about the problem except the question. They have to go get a ruler from the ruler drawer (that is always accessible) after realizing they need a ruler. They have to come up with a way to check if the pyramid is actually 10 cm high (kind of hard!). And, to boot, they have an intrinsically rewarding experience at the end: a sweet little pyramid. They should probably color a diagram RIGHT ON IT for awesome mathiness that they can be proud of, and that you can glue to the door and have a sweet spikey door of trigonometry. Make sure to give different groups different goal sizes so that as a class you can confirm any patterns kids are seeing. And so that your spikey door is cooler.

You can't sit next to me!

A colleague of mine, D, has a theory that makes a lot of sense: when a teacher helps a student work on something all the way to its completion, the student associates the final success with the presence of the teacher.
I used to help my students all the way through a math problem and then be surprised when they couldn't seem to do the same work when I wasn't there. D recommended that I help a student only to the brink of success, by making sure that the student has all of the necessary tools for the situation, and then getting out of there before the actual achievement takes place. In part, to make sure I'm not just giving away the answer, but also so that the student can experience success when he's alone! I offer "office hours" out of my classroom after school. It's free-form, and students can come in and ask for any kind of help, test out of a skill, or trade math jokes. I usually get a lot of work done while 3 or 4 kids work on their homework. This year I started a new rule: no students can sit near me. They thought I was joking at first; some students would even sit down right next to me as I was telling them they couldn't, and I'd have to make them get up and move. It's hard to keep enforcing the rule, but it makes it much easier to get away from the students before they make their breakthroughs.

Flash Presentations

Flash as in really fast! I used a period today to assign a group project. I gave the kids (in teams of 3 or 4) about 40 minutes to:

1. Pick a function from a list (e.g. u(x)=2*sqrt(10-x^2)+5)
2. Figure out how to graph it in geogebra (the "sqrt" command was not obvious to them)
3. Figure out how to get a table of values out of geogebra
4. Figure out interesting things about it (my prompt suggested asking questions like "Are any points especially interesting or important?" and "What does the function do or look like?", but they were encouraged to ask their own questions too)
5. Graph the function "very well"
6. Make a poster describing the interesting parts they learned
7. Give a 3-minute presentation to the class

The functions from which they could choose all had interesting features, like discontinuities, endpoints, etc. I was astonished at how focused the teams stayed throughout the period; I think the low amount of time helped a lot. Since the mathematical content was pretty simple (it would be, what, maybe 5 or 6 questions on a worksheet?) they could focus on finding interesting features, and I could encourage them during their work to find ways to convince the other students that these were interesting features. The small time limit made it so that they HAD to find a quick way to get the graph from geogebra onto graph paper, which meant that they HAD to identify the important characteristics and sketch on top of that guideline. Never before has it occurred to me to slam through presentations like this, but honestly the quality of the presentations did not suffer too much (as compared with a week-long assignment) and the interest in the content was much higher. The kids had to remind themselves of deadlines every ten minutes or so (I put one student in each team in charge of the time) and I think they got a lot of practice prioritizing and working efficiently. The biggest benefit, of course, is that our treatment of domain and range in the upcoming week is practically covered already, and the students already have a refreshed understanding of numbers that can't be fed into or gotten out of functions. And they got practice zooming around in geogebra, making tables (one group figured out spreadsheet view!!), etc.

The best thing I could do with my summer

During the school year I am a math teacher because I am passionate about youth, education, and, well, math.
During the summer I am the director of Shiloh Quaker Camp, where I, along with my staff of twenty-five, create a community of loving, supportive, laughing kids. I do this because I am passionate about teaching kids to respect themselves, each other, and the connections that form between people who live together. We create the community at camp with intention and great care, and in many ways it's a lot like teaching math. (I'm beginning to suspect that any kind of teaching bears great resemblance to any other (who'd've thunk it?).)

I went to a camp like Shiloh when I was a kid, and I've worked at Shiloh since I was a teenager in high school myself. It's been a spiritual and social foundation for the entire framework I base my life upon, and I think what I learned at camp informs every interaction I have with my students in school. Math awes me because it is a fundamental structure of the universe; math is an undeniable order that we presume stretches to the edge of the cosmos (and beyond?). Camp awes me because of the growth I see in kids; independence and interdependence flourish together in kids as young as eight years when they're faced with the right kinds of challenges by staff with the right training and a supportive focus.

I miss out on a lot of professional development opportunities in the summer, and I miss out on a lot of travel opportunities. My summer break ends two days before it starts, and I'm planning classes desperately in the three days a week when the kids are off climbing or canoeing or hiking. If you're not a camp person, you might not understand the magnitude of the experience I'm describing, but as a camp person I can tell you that it's the most rewarding experience I can imagine! If you're an amazing teacher (or at least working on becoming one) and want a taste of this camp experience, send me an email. If you're looking for a way to spend your summer that will recharge your batteries and give you new perspectives on your classes and your role as a teacher, send me an email. Even if it's not at Shiloh (which is hard to get a job at, if I may brag briefly), I can direct you towards some programs that need good people, are a blast, and will hone your teaching skills like no summer institute can!

Are you a camp person? Leave a note with your camp experiences!

How to create a skills list

My last post focused on three major mistakes I made in my first semester of skill-focused, mastery-based assessment: separating skills into chunks that were impractically small, choosing some skills that were too simple (almost trivial), and neglecting to plan for the end of the semester. This post will focus on my process creating the skills list for semester two (for Algebra 2). I'd love to hear your opinions or your own process - leave a comment or a link below!

The place to start is your curriculum map, whether that's a list of topics, a set of state standards, a final exam, some chapters of a textbook, or whatever. Find or create the document that describes what you hope to teach this semester. The list I used last semester I got directly out of my textbook. I knew what chapters I was going to teach, and I just ripped concepts out of the table of contents. This process got me a list that I was moderately happy with; click here to download it. Your list will almost certainly need to be different, since I was planning for 36 class periods.

For semester two, I set out in much the same way.
I went through the chapters in my textbook I was planning on studying, and every time something popped up that seemed like it would be a good candidate for a skill, I wrote it down. This gave me the following list of 36 skills:

Evaluating functions
Analyzing the domain and range of functions
Modeling relationships with functions
Modeling arithmetic sequences with functions
Modeling geometric sequences with functions
Distinguishing between arithmetic and geometric sequences
Recognize exponential growth from situations, tables, graphs, or equations
Understand multiple representations of exponential functions
Represent exponential functions algebraically
Using basic laws of exponents to simplify expressions
Use exponential functions to solve problems involving growth or decay
Find equations of exponential functions through two given points
Identify graphs of quadratic, cubic, square root, absolute value, etc. functions
Transform a graph by stretching, shifting, or flipping it
Write a general equation for a family of functions
Use the "completing the square" technique
Model physical situations with quadratic functions
Write equations in graphing form
Invert functions analytically and graphically
Form compositions of functions
Express the relationship between a function and its inverse
Understand logarithms and transform their graphs
Use properties of logarithms
Use logarithms to solve exponential equations
Count possibilities in situations that require a particular order
Count possibilities in situations in which order does not matter
Draw a tree diagram to represent and calculate probabilities
Draw an area diagram to represent and calculate probabilities
Calculate expected value
Using the fundamental principle of counting
Calculate conditional probabilities
Find the value of arithmetic series of arbitrary length
Find the value of geometric series of arbitrary length
Find the value of geometric series with infinite length
Writing a series with summation notation
Using mathematical induction

The next step in the process is to look at each skill from the brainstorm and ask:

1. How will I test this skill?
2. Is this skill big enough to be its own skill?
3. Is this skill small enough to be a single skill?
4. Does this skill have multiple levels, so that intro-level tests will be significantly different from master-level tests?
5. If a student does not understand this skill at all, am I willing to flunk him? (My grades are set up so that each student must get a minimum of 3/5 in every skill to pass the class; there's a small code sketch of this rule at the end of the post. If you're using a simple average, you can ignore this question.)

Consider the first skill, "Evaluating functions."

1. How will I test this skill? What leaps to mind is showing a kid f(x)=3x+6, and asking for f(2). Maybe f(f(2)) - or is that composition? They also need to be able to evaluate functions from graphs and tables. Maybe the question should be a three-parter?
2. Is this skill big enough to be its own skill? Hmm... it's pretty small, isn't it?
3. Is this skill small enough to be a single skill? Yes, I am confident that it is.
4. Does this skill have different levels? So, I could ask them to evaluate f(x)=3x+6, or f(x)=3x-2/x+(x+5)^-3, but those aren't different levels of evaluating functions, those are different levels of order of operations or something. I could ask them to find g(f(2)), but maybe that's composition.
5. Is this skill a requirement of passing the course? Absolutely.
I am not letting anyone who can't evaluate a function out of Algebra 2. So this first skill has some complications. I really like testing f(g(2)) because it requires students to understand the input/output aspect of functions where I feel like a simple g(2) might let them slip by without it. Since this skill is so essential, I'm leaving it in, though it might be a little bit small. It may end up conflicting with the composition skill, but there might be flexibility in that skill. "Evaluating functions" seems like a solid requirement.

Let's take another skill, "Modeling arithmetic sequences with functions."

1. How will I test this skill? I'm looking for students to be able to come up with functions that describe arithmetic relationships, like "write a function that outputs the number of gloves that x people will need," or something. I could show a table of inputs 1, 2, 3, 4, and outputs 8, 11, 14, 17, and have them write this function.
2. Is this skill big enough to be on its own? You know, I think it's possible it could be combined with the skill before it, "Modeling relationships with functions," and the one after it, "Modeling geometric sequences with functions." These three skills are so closely related, with the only difference being the arithmetic skills required. I'm not trying to test those arithmetic skills - I hope the kids already have them - so I'm going to combine these three skills into just "Modeling relationships with functions." I'll answer the rest of these questions for the new skill.
3. Is this skill small enough to be on its own? Clearly the title can involve arbitrarily complex functions and relationships, but I think the kinds of simple relationships we'll study in class can all be combined under one roof. This skill may be a little bit too big. If I was the organized man I wish I were, I'd note somewhere that I should revisit this skill after we touch on it to see how I felt.
4. Are there different levels for this skill? For intro-level questions I can ask the students to model the relationship between Celsius and Fahrenheit, or some other linear relationship. For master-level questions I could ask a geometric volume question which includes an extra level of abstraction.
5. Is this skill absolutely required for every student that passes the class? Yes, I think so. Who wants to teach precalc to students who can't create their own functions?

Now, I don't have time to write an essay about each of these questions for each of these skills, and neither do you. In this post and in my brain I'm deciding to move quickly here. I only have so many hours to get this done, and it's not going to be perfect. I hope that you can make sacrifices like this - it's taken me a long time to accept the impossibility of perfection in a finite time frame. I spent about 30 minutes considering this list, eventually deciding to cut 7 skills and reword several. Click here for my final checklist.

I hope this post has shown you how easy it can be to come up with a list of representative skills to assess. It's an unglamorous process, and the hardest part is coming up with the rough list, but once you do that you can have an effective list in less than an hour. I don't recommend using my skills lists wholesale. I am in the process of trying out several different textbooks and my order is wonky. My school is exempt from most standardized tests, and if you have specific objectives you need to hit you'll need to take them into consideration, obviously.
That said, here are my skills lists for Algebra II and Calculus this year:

Also, Dan Meyer has posted his suggestions for Algebra 1, Geometry, and Precalculus (imagine my chagrin when I saw he covered exactly everything but what I needed!). So, if you're considering getting started with this system, you at least have a launching point for your class. If you have comments about my lists or process, I'd love to hear about them! I'm especially interested in what you think of the questions I use on each skill. What else needs to be asked?
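(A quick aside, since I referenced it above: here is a minimal sketch of the "minimum of 3/5 in every skill" pass rule from my grading scheme. The function name and the simple-average step are just illustrative choices, not the actual software.)

# Illustrative sketch: every skill needs >= 3 out of 5 to pass,
# and the overall grade here is a simple average (one possible choice).
def course_result(skill_scores, minimum=3, top=5):
    passed = all(score >= minimum for score in skill_scores.values())
    average = sum(skill_scores.values()) / (top * len(skill_scores))
    return passed, average

scores = {"evaluating functions": 5, "properties of logarithms": 2}
print(course_result(scores))   # (False, 0.7) -- one skill below 3 blocks a pass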
{"url":"http://larkolicio.us/blog/?m=201001","timestamp":"2014-04-21T07:05:52Z","content_type":null,"content_length":"44374","record_id":"<urn:uuid:55148b37-534b-49a1-b3ed-4974cf1371d7>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
In Search of Logic

Last time, I mentioned a question: Taleb and others have mentioned that the bell curve (or Gaussian) does not deal with outliers well; it gives them a very small probability, and the parameter estimates end up being highly dependent on them. Yet, one of the justifications of the Gaussian is that it's the max-entropy curve for a given mean and standard deviation. Entropy is supposed to be a measure of the uncertainty associated with a distribution; so, shouldn't we expect that the max-entropy distribution would give as high a probability to outliers as possible?

There are several answers. First: a basic problem is the phrase "a given mean and standard deviation". In particular, to choose a standard deviation is to choose an acceptable range for outliers (in some sense). If we have uncertainty about the standard deviation, it turns out the resulting curve has a polynomial decay rather than an exponential one! (This means distant outliers are far more probable.) Essentially, estimating deviations from data (using maximum likelihood) makes us extremely overconfident that the data will fall in the range we've experienced before. A little Bayesian uncertainty (which still estimates from data, but admits a range of possibilities) turns out to be much less problematic in this respect.

This is definitely helpful, but doesn't solve the riddle: it still feels strange that the max-entropy distribution would have such a sharp (super-exponential!) decay rate. Why is that? My derivation will be somewhat heuristic, but I felt that I understood "why" much better by working it out this way than by following other proofs I found (which tend to start with the normal and show that no other distribution has greater entropy, rather than starting with desired features and deriving the normal). [Also, sorry for the poor math notation...]

First, let's pretend we have a discrete distribution over n points, x[1], ..., x[n]. The result will apply no matter how many points we have, which means it applies in the limit of a continuous distribution. Continuous entropy is not the limit of discrete entropy, so I won't actually be maximising discrete entropy here; I'll maximise the discrete version of the continuous entropy formula: choose the values f(x[i]) maximising

-sum_i f(x[i]) * log(f(x[i]))

Next, we constrain the distribution to sum to a constant, have a constant mean, and have constant variance (which also makes the standard deviation constant):

sum_i f(x[i]) = C[1]
sum_i x[i] * f(x[i]) = C[2]
sum_i x[i]^2 * f(x[i]) = C[3]

To solve the constrained optimisation problem, we add Lagrange multipliers for the constraints:

-sum_i f(x[i]) * log(f(x[i])) - lambda[1] * (sum_i f(x[i]) - C[1]) - lambda[2] * (sum_i x[i] * f(x[i]) - C[2]) - lambda[3] * (sum_i x[i]^2 * f(x[i]) - C[3])

Taking the partial derivative in each f(x[i]) gives

-1 - log(f(x[i])) - lambda[1] - lambda[2] * x[i] - lambda[3] * x[i]^2

Setting this equal to zero and solving for f(x[i]):

f(x[i]) = e^(-1 - lambda[1] - lambda[2] * x[i] - lambda[3] * x[i]^2)

That's exactly the form of the Gaussian: a constant to the power of a 2nd-degree polynomial! So we can see where everything comes from: the exponential comes from our definition of entropy, and the function within the exponent comes from the Lagrange multipliers. The Gaussian is quadratic precisely because we chose a quadratic loss function! We can get basically any form we want by choosing a different loss function. If we use the kurtosis rather than the variance, we will get a fourth-degree polynomial rather than a second-degree one. If we choose an exponential function, we can get a doubly exponential probability distribution. And so on.
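To double-check that stationarity step, here is a quick symbolic verification (my own sketch, not part of the original derivation; sympy and the variable names are my choices):

import sympy as sp

f, x = sp.symbols('f x', positive=True)
l1, l2, l3 = sp.symbols('l1 l2 l3')

# The only part of the Lagrangian that depends on a single value f = f(x[i]):
term = -f * sp.log(f) - l1 * f - l2 * x * f - l3 * x**2 * f

# Solve d(term)/df = 0 for f; the result is e to a quadratic in x:
print(sp.solve(sp.Eq(sp.diff(term, f), 0), f))
# expected: [exp(-l1 - l2*x - l3*x**2 - 1)]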
There should be some limitations, but more or less, we can get any probability distribution we want, and claim that it is justified as the maximum-entropy distribution (fixing some measure of spread). We can even get rid of the exponential by putting a logarithm around our loss function.

Last time, I mentioned robust statistics, which attempts to make statistical techniques less sensitive to outliers. Rather than using the mean, robust statistics recommends using the median: whereas a sufficiently large outlier can shift the mean by an arbitrary amount, a single outlier has the same limited effect on the median no matter how extreme its value. I also mentioned that it seems more intuitive to use the absolute deviation, rather than the squared deviation. If we fix the absolute deviation and ask for the maximum-entropy function, we get something like e^(-|x - m|) as our distribution. This is an ugly little function, but the maximum-likelihood estimate of the center of the distribution is precisely the median! e^(-|x - m|) justifies the strategy of robust statistics, reducing sensitivity to outliers by making extreme outliers more probable. (The reason is: the max-likelihood estimate will be the point which minimizes the sum of the loss functions centred at each data point. The derivative at x is equal to the number of data points below x minus the number above x. Therefore the derivative is only zero when these two are equal. This is the minimum loss.)

What matters, of course, is not what nice theoretical properties a distribution may have, but how well it matches the true situation. Still, I find it very interesting that we can construct a distribution which justifies taking the median rather than the mean... and I think it's important to show how arbitrary the status of the bell curve as the maximum-entropy distribution is.

Just to be clear: the Bayesian solution is not usually to think much about which distributions might have the best properties. This is important, but when in doubt, we can simply take a mixture distribution over as large a class of hypotheses as we practically can. Bayesian updates give us nice convergence to the best option, while also avoiding overconfidence (for example, the asymptotic probability of outliers will be on the order of the most outlier-favouring distribution present). Still, a complex machine learning algorithm may need a simple one as a sub-algorithm to perform simple tasks; a genetic programming approximation of Bayes may need simpler statistical tools to make an estimation-of-distribution algorithm work. More generally, when humans build models, they tend to compose known distributions such as Gaussians to make them work. In such cases, it's interesting to ask whether classical or robust statistics is more appropriate.
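As a concrete footnote to the median-versus-mean point above (my own sketch, not from the original post; the data and grid are made up): minimizing squared loss recovers the mean, which one extreme outlier drags far away, while minimizing absolute loss recovers the median, which barely moves.

# Compare the minimizers of squared vs. absolute loss on data with an outlier.
data = [1.0, 2.0, 3.0, 4.0, 1000.0]

def argmin_loss(loss):
    # brute-force search over a grid of candidate centers from 0 to 1000
    grid = [i * 0.5 for i in range(2001)]
    return min(grid, key=lambda c: sum(loss(c - d) for d in data))

print(argmin_loss(lambda r: r * r))   # 202.0: the mean, dragged by the outlier
print(argmin_loss(lambda r: abs(r)))  # 3.0: the median, barely affected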
{"url":"http://lo-tho.blogspot.com/2013/02/weird-curves.html","timestamp":"2014-04-20T08:14:38Z","content_type":null,"content_length":"72237","record_id":"<urn:uuid:fb905dd0-9be4-4689-bbf3-ca444c6b1aa5>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00299-ip-10-147-4-33.ec2.internal.warc.gz"}
Set - Distributive Law

June 16th 2009, 04:15 AM

I've come across an example... By using A-B = A n B', prove that (A u B)-(A n B) = [A-(A n B)] u [B-(A n B)]. The solution they gave is...

(A u B)-(A n B) = (A u B) n (A n B)'
= [A n (A n B)'] u [B n (A n B)'] <--- Distributive Law
= [A-(A n B)] u [B-(A n B)]

What I wanna know is: what happens at the 2nd step? How do they do that using the distributive law?

June 16th 2009, 04:54 AM

Hello, cloud5! They "multiplied from the right".

$\underbrace{(A \cup B)}_{(a+b)} \underbrace{\cap}_{\times} \underbrace{(A \cap B)'}_{c} \;=\; \underbrace{\bigg[A \cap (A \cap B)'\bigg]}_{a\times c} \underbrace{\cup}_{+} \underbrace{\bigg[B \cap (A \cap B)'\bigg]}_{b\times c} \quad\leftarrow\text{ Distributive Law}$

See it?

June 16th 2009, 05:09 AM

Using $A-B=A\cap B'$, we want $(A\cup B) - (A\cap B) = \left[A-(A\cap B)\right] \cup \left[B-(A\cap B)\right]$:

$(A\cup B) - (A\cap B) = ({\color{red}A}{\color{green}\cup}{\color{blue}B}) \cap (A\cap B)'$
$= \left[{\color{red}A}\cap (A\cap B)'\right] {\color{green}\cup} \left[{\color{blue}B}\cap (A\cap B)'\right]$

June 16th 2009, 05:16 AM

"Hello, cloud5! They 'multiplied from the right'... See it?"

Oh... Now I get it... Thanks for the help~
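For anyone who wants a quick sanity check of the identity itself, here is a small Python sketch (mine, not from the thread) that tests it on random subsets of a small universe:

import random

# Check (A u B) - (A n B) == [A - (A n B)] u [B - (A n B)] on random sets.
universe = range(20)
for trial in range(1000):
    A = {x for x in universe if random.random() < 0.5}
    B = {x for x in universe if random.random() < 0.5}
    lhs = (A | B) - (A & B)
    rhs = (A - (A & B)) | (B - (A & B))
    assert lhs == rhs
print("identity held in all 1000 random trials")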
{"url":"http://mathhelpforum.com/discrete-math/93001-set-distributive-law-print.html","timestamp":"2014-04-18T14:37:36Z","content_type":null,"content_length":"8814","record_id":"<urn:uuid:6ab1cac5-1b74-4dc7-aefd-58f4ce63defb>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00391-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Cartoons

Trying to find a great math cartoon? You can stop looking. Andertoons.com has tons of funny math comics & cartoons. And browsing and buying is easy too. Sign up for a cartoon subscription and download as many math cartoons as you'd like. Not seeing the exact math cartoon you want? Build your own! Request a custom cartoon.
{"url":"http://www.andertoons.com/search-cartoons/math","timestamp":"2014-04-18T08:03:58Z","content_type":null,"content_length":"17948","record_id":"<urn:uuid:e31bd716-cac7-43e9-916e-a99071742579>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00404-ip-10-147-4-33.ec2.internal.warc.gz"}
Random Variables - Continuous

A Random Variable is a set of possible values from a random experiment.

Example: Tossing a coin: we could get Heads or Tails. Let's give them the values Heads=0 and Tails=1 and we have a Random Variable "X". In short: X = {0, 1}

Note: We could have chosen Heads=100 and Tails=150 if we wanted! It is our choice.

Random Variables can be either Discrete or Continuous:

• Discrete Data can only take certain values (such as 1, 2, 3, 4, 5)
• Continuous Data can take any value within a range (such as a person's height)

In our Introduction to Random Variables (please read that first!) we look at many examples of Discrete Random Variables. But here we look at the more advanced topic of Continuous Random Variables.

The Uniform Distribution

(Also called the Rectangular Distribution.) The Uniform Distribution has equal probability for all values of the Random Variable between a and b. The probability of any value between a and b is p. We also know that p = 1/(b−a), because the total of all probabilities must be 1, so the area of the rectangle = 1:

p × (b−a) = 1
p = 1/(b−a)

We can write the probability density:

f(x) = 1/(b−a) for a ≤ x ≤ b
f(x) = 0 otherwise

(For a continuous variable, any single exact value has probability 0, so we describe probabilities using the density and take areas under it.)

Example: Old Faithful erupts every 91 minutes. You arrive there at random and wait for 20 minutes... what is the probability you will see it erupt?

This is actually easy to calculate: 20 minutes out of 91 minutes is p = 20/91 = 0.22 (to 2 decimals). But let's use the Uniform Distribution for practice. To find the probability between a and a+20, find the blue area:

Area = (1/91) × (a+20 − a) = (1/91) × 20 = 20/91 = 0.22 (to 2 decimals)

So there is a 0.22 probability you will see Old Faithful erupt. If you waited the full 91 minutes you would be sure (p=1) to have seen it erupt. But remember this is a random thing! It might erupt the moment you arrive, or any time in the 91 minutes.

Cumulative Uniform Distribution

We can have the Uniform Distribution as a cumulative (adding up as it goes along) distribution: the probability starts at 0 and builds up to 1. This type of thing is called a "Cumulative Distribution Function", often shortened to "CDF".

Example (continued): Let's use the "CDF" of the Uniform Distribution to work out the probability: at a+20 the probability has accumulated to about 0.22.

Other Distributions

Knowing how to use the Uniform Distribution helps when dealing with more complicated distributions. The general name for any of these is probability density function, or "pdf".

The Normal Distribution

The most important continuous distribution is the Standard Normal Distribution. It is so important the Random Variable has its own special letter Z. The graph for Z is a symmetrical bell-shaped curve. Usually we want to find the probability of Z being between certain values.

Example: P(0 < Z < 0.45) (What is the probability that Z is between 0 and 0.45?)

This is found by using the Standard Normal Distribution Table. Start at the row for 0.4, and read along until 0.45: there is the value 0.1736. So P(0 < Z < 0.45) = 0.1736.

A Random Variable is a variable whose possible values are numerical outcomes of a random experiment. Random Variables can be discrete or continuous. An important example of a continuous Random Variable is the Standard Normal variable, Z.
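Both examples above can be checked with a few lines of Python (a sketch of mine, not part of the page); the standard normal CDF can be written with math.erf:

import math

# Uniform: probability of seeing Old Faithful erupt during a 20-minute wait
print(20 / 91)                       # about 0.22

# Standard normal CDF: Phi(z) = (1 + erf(z / sqrt(2))) / 2
def phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(phi(0.45) - phi(0.0))          # about 0.1736, matching the table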
{"url":"http://www.mathsisfun.com/data/random-variables-continuous.html","timestamp":"2014-04-18T15:39:50Z","content_type":null,"content_length":"10824","record_id":"<urn:uuid:1ba502a4-4c51-40ca-9409-cc6f1a72598b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
A bag contains five blue marbles and three red marbles. One marble is drawn at random, What is the probability that the first marble was blue Number of results: 16,836 a bag contains 33 green marbles and 25 blue marbles. you select a marble at random from the bag. find the theoretical probability of selecting a blue marble. thanks Pr(blue)= number blue/total Friday, April 13, 2007 at 5:52pm by dillon A bag contains 10 yellow, 15 blue and 5 red Marbles. Mark chooses a candy and doesn't replace it in the bag. Find P (neither blue) Thursday, February 14, 2013 at 8:37pm by Anon a bag of marbles contains 5 red,7 green ,and some blue.if 1/3 of the marbles in the bag are blue,how many blue marbles are in the bag? Sunday, May 1, 2011 at 7:53pm by shank A bag contains 5 red, 4 white and 6 blue marbles. Another bag with 9 blue marbles has the same ratio of blue to white marbles as the first bag. How many white marbles are there in the second bag? Thursday, October 17, 2013 at 2:22pm by Anonymous There are seven red, five blue, and eight white chips in a bag. What is the probability of pull out a blue or white and then quickly putting it back in the bag and pulling out another blue or white, making a combination of one blue and one white. Monday, January 26, 2009 at 1:07pm by John A drawer contains six bags numbered 1-6, respectively. Bag i contains i blue balls and 2 green balls. You roll a fair die and then pick a ball out of the bag with the number shown on the die. What is the probability that the ball is blue? Thank you! Friday, January 27, 2012 at 6:54am by Erica One bag contains 5 red marbles, 4 blue marbles, and 3 yellow marbles, and a second bag contains 4 red marbles, 6 blue marbles, and 5 yellow marbles. If Lydia randomly draws one marble from each bag, what is the probability that they are both not yellow? Wednesday, May 9, 2012 at 10:41pm by Hannah A bag contains a total of 24 white, red, and blue marbles. A student randomly selects a marble from the bag, records the color, and then returns the marble to the bag. The student repeats the process 24 times. The bar graph shows the results of the student's experiment and ... Monday, November 12, 2012 at 3:43pm by kate Could someone please let me know if I did this problem right? A bag contains seven blue marbles and five red marbles. One marble is drawn at random without replacement, and then a second marble is drawn. What is the probability that the second marble is blue if the first ... Friday, August 7, 2009 at 3:41pm by B.B. selection of colored balls from two bags. Assume that each bag contains 4 balls. Bag a contains 2 red and 2 white, while bag b contains 2 red, 1 white, and 1 blue. You randomly select one ball from bag a, note the color, and place the ball in bag b. You then select a ball from... Wednesday, February 16, 2011 at 12:12am by Hil-hellpp! a bag contains 4 red chips, 2 blue chips and 5 green chips. A chip is randomly removed from the bag, and then without replacing the first chip, a second chip is removed. What is the probability that both of the chips selected from the bag were blue? Tuesday, May 3, 2011 at 3:00pm by veronica Assume that each bag contains 6 balls. Bag a contains 3 red and 3 white, while bag b contains 2 red, 2 white, and 2 blue. You randomly select one ball from bag a, note the color, and place the ball in bag b. You then select a ball from bag b at random and make note of its ... Monday, October 4, 2010 at 9:38pm by Tiff Algebra 2 A bag contains five red marbles and three blue marbles. 
In how many different ways can two red marbles be drawn if the first marble is not returned to the bag before the second marble is drawn? Tuesday, February 28, 2012 at 12:28pm by Megan A set of five marbles is selected (without replacement) from a bag that contains 4 blue marbles, 3 green marbles, 6 yellow marbles, and 7 white marbles. How many sets of five marbles contain no more than two yellow ones? Sunday, September 11, 2011 at 6:14pm by jay A bag contains 4 orange marbles, 11 blue marbles and 1 yellow marble. What is the probability of drawing a blue followed by another blue without replacement? Not sure what to do? Sunday, December 11, 2011 at 8:23pm by Dylan Find the sample space. a Bag contains 4 marbles: one each of red, blue, green and violet. Two marbles are drawn from the bag. assume that the first marble is not put in the bag before drawing the second marblle Thursday, March 21, 2013 at 3:51pm by Nangoyi algebra 2 The bag contains three red marbles, two blue marbles, and seven yellow marbles. Two marbles are randomly drawn from the bag. What is the probability of drawing a blue marble, replacing it, and then drawing a blue marble? Friday, May 21, 2010 at 4:37pm by Thomas one bag contains 1 red ball,1 yellow ball, 1green ball and 1 blue ball the second bag contains 1 pink ball and 1 grey ball what is the sample space out come Thursday, January 19, 2012 at 12:31pm by Sue A bag contains 6 blue balls, 4 green and 2 red. If you put your hand in and picked a ball at random what are the chances it is blue? Wednesday, February 1, 2012 at 10:01am by Anonymous algebra 2 A bag contains 7 blue marbles, 5 red marbles, and 9 green marbles. A marble is taken at random from the bag. What is the probability that the marble is blue or red? Please check my answer 12/21 Wednesday, May 26, 2010 at 5:15pm by John A bag contains 7 blue marbles, 5 red marbles, and 9 green marbles. A marble is taken at random from the bag. What is the probability that the marble is blue or red? Please type in your answer as a Wednesday, May 26, 2010 at 4:13pm by John A bag contains 2 red, 2 blue, and 2 green marbles, Sue takes one marble at a time from this bag without looking. What is the least number of marbles Sue must take from this bag to be sure that she has taken 2 marbles of the same color? A) 2 B)3 C)4 D)6 Sunday, October 26, 2008 at 6:11pm by Pooja If there are only red, green and blue marbles in the bag, then given that 1/3 are blue, then red and green occupy 1-1/3=2/3 of the bag. Use direct proportion: 2/3 bag : 5+7=12 1/3 bag : X Cross multiply: x=(1/3)*12÷(2/3) =12/2 = 6 (blue marbles) Sunday, May 1, 2011 at 7:53pm by MathMate A bag contains four balls. Three are red, and the fourth is either red or blue, selected with equal probability. A ball is drawn from the ball and it is red. What is the probability that one of the balls still in the bag is blue? Sunday, November 7, 2010 at 3:32pm by kyle If a bag contains 50 marbles with 28 red ones and 22 blue ones. A marble is picked at random from the bag.what is the probability of picking a red marble after a blue marble was picked first? Thursday, March 3, 2011 at 12:14am by Kay Math ms sue please please please help!!!!!! 1)a bag contains five red marbles and seven blue marbles. you pull one marble out of the bag and replace it before putting out a second marble. what kind of event is this? independent, dependent, or both? 2) A bag contains 5 marbles. Without looking, you pick a marble with ... 
Sunday, November 10, 2013 at 6:59pm by Math help a bag contains 5 blue marbles 4 red marbles and 1 green marble. what is the probability of drawing a blue marble,the another blue marble without replacement Tuesday, December 18, 2012 at 10:01pm by bryce A bag contains nine red marbles, four green marbles, three blue marbles, and two yellow marbles. If Lisa draws a random marble from the bag, what is the probability that it will be a red, green, or blue marble? Friday, April 18, 2014 at 2:11pm by Lyla A bag contains five purple ("P") blocks and one yellow ("Y") blocks. Without looking into the bag you reach in, take out a block, return the block to the bag, and then repeat the same process a number of times. what is the sequences of selected blocks Wednesday, August 1, 2012 at 7:47pm by Anonymous the bag contains 20 red marbles and30 white marbles and 40 blue ones what is the ratioof red to blue marbles what is the ratio of white to red if one marble is drawn from the bag what is the probability that the marble will not be white??? need ansewr fastt plzz Sunday, April 8, 2012 at 10:10pm by kristen . A box contains five blue, eight green, and three yellow marbles. If a marble is selected at random, what is the probability that it is not blue? Thursday, June 9, 2011 at 11:15pm by CC A box contains five blue, eight green, and three yellow marbles. If a marble is selected at random, what is the probability that it is not blue? Sunday, May 13, 2012 at 6:41pm by free A BAG CONTAINS 3 RED CHIPS, 2 BLUE CHIPS AND 1 WHITE CHIP. IF 2 CHIPS ARE CHOSEN FROM THE BAG WITHOUT REPLACEMENT DETERMINE THE PROBABILITY THAT THEY ARE OF DIFFERENT COLORS Monday, March 15, 2010 at 7:37pm by RAMATU A bag contains 4 purple, 3 blue and 2 red cubes. Select 3 cubes and stack on table next to the bag. What is probability of the cubes being red, purple then blue Sunday, February 28, 2010 at 1:55pm by dave A bag contains five purple ("P") blocks and one yellow ("Y") blocks. Without looking into the bag you reach in, take out a block, return the block to the bag, and then repeat the same process a number of times. Which of the following sequences of selected blocks is more likely? Thursday, August 2, 2012 at 2:04pm by Anonymous MATH Prob. if a bag contains 8 red marbles, 6 blue marbles and 9 green marbles, what is the probability of choosing a blue marble?? Saturday, August 8, 2009 at 1:42pm by Twg A bag contains 2 red marbles, 4 blue marbles, and 8 green marbles. What is the probability of choosing a blue marble? Tuesday, November 22, 2011 at 11:46am by Anonymous a bag contains 7 red marbles, 8 blue marbles and 4 green marbles. what is the probablity of choosing a blue marble? Friday, June 15, 2012 at 1:10am by Anonymous algebra 1 The bag contains three red marbles, two blue marbles, and seven yellow marbles. Two marbles are randomly drawn from the bag. What is the probability of drawing a blue marble, replacing it, and then drawing a yellow marble Saturday, July 21, 2012 at 4:07pm by gab a bag contains 20 marbles containing 8 green, 4 red, 2 blue, 6 yellow. If a person picks out 1 single marble from bag without looking. what is the probability that it will be a red marble? Tuesday, September 25, 2012 at 11:45am by Nanette Math, please check work A bag contains 7 red marbles, 2 blue marbles, and 1 green marble. If a marble is selected at random, what is the probability of choosing a marble that is not blue? 7+2+1=10 There are two blue marbles So is the probability 1/5? 
Sunday, February 21, 2010 at 11:59am by Lyndse A bag contains five purple ("P") blocks and one yellow ("Y") blocks. Without looking into the bag you reach in, take out a block, return the block to the bag, and then repeat the same process a number of times. Which of the following sequences of selected blocks is more likely... Thursday, August 2, 2012 at 3:38pm by Anonymous A bag contains 5 red, 3 green, 12 white and 7 blue marbles. One is drawn at random. With is the probability it is green if it is not red or blue? Tuesday, April 21, 2009 at 2:13pm by Andrey L A bag contains 5 red, 5 blue, 5 green, and 5 white marbles. Two marbles are drawn, but the first marble is not replaced. Find P(red, then blue) Tuesday, July 22, 2008 at 5:22pm by Nathaniel Math 1 9th grade A bag contains 3 green marbles and 6 blue marbles. What is the probability of drawing a blue marble, then another blue marble, then a green marble, replacing the marble after each draw? Thursday, January 20, 2011 at 7:35pm by Sandra A bag contains 3 red chips, 2 blue chips, and 1 white chip. If 2 chips are chosen from the bag (without replaceent), determine the probability that they are of different colors. Sunday, February 28, 2010 at 1:55pm by Ryan! pre calculus A bag contains 4 blue marbles, 2 black marbles, and 3 red marbles. If a marble is randomly drawn from the bag, what is probability that it is not black? 1/2 2/9 5/9 7/9 14/19 Tuesday, November 27, 2012 at 10:09am by king chris A bag contains 4 blue, 4 red, and 4 green marbles. Four marbles are drawn at random from the bag. How many different samples are possible which include exactly two red marbles? Tuesday, April 9, 2013 at 1:13am by Kierra A bag contains 4 blue, 4 red, and 4 green marbles. Four marbles are drawn at random from the bag. How many different samples are possible which include exactly two red marbles? Tuesday, April 9, 2013 at 1:13am by Kierra 7. A bag contains 2 red cubes, 3 blue cubes, and 5 green cubes. If a cube is removed and replaced in the bag and another is drawn, what is the probability that both are green? 1/4 3/8 2/5 1/2 Thursday, July 22, 2010 at 12:44pm by Ross Algebra 2 A bag contains 5 red, 9 white, and 10 blue marbles. Suppose you choose a marble from the bag and then choose another marble without replacing the first one. Find the probability of picking the same color both times. Monday, January 30, 2012 at 11:56am by Katy There are 2 opaque bags, each containing red and yellow blocks. Bag 1 contains 3 red blocks and 5 yellow. Bag 2 contains 5 red and 15 yellow. To play the game, you pick a bag and then you pick a block out of the bag without looking. Would a person be more likely to pick a red ... Thursday, December 2, 2010 at 7:48pm by Michelle a bag contains 9 red marbles and 4 blue marbles. how many clear marbles should be added to the bag so the probibility of drawing a red marble is 3/5????? Monday, April 30, 2012 at 5:25pm by Meg a bag contains red and blue marbles. with 10% more blue marbles than red. a)what is the probability of selecting 5 blue marbels ? b) Of selecting 2 reds and 3 blues ? Friday, June 10, 2011 at 1:50pm by pedro1 A box contains blue and white ribbons. If five ribbons are choosen at random, how many ways can at least 1 blue ribbon be selected? Friday, May 3, 2013 at 12:53pm by marlene experimental probability you spin a spinner 50 times. it lands on red 8 times, yellow 12 times, green 20 times, and blue 10 times. based on the results, what is the experimental probility of its landing on green or yellow? 
thanks I will be happy to critique your work on this. i have a different ... Friday, April 13, 2007 at 2:19pm by dillon Elementary math College A bag contains 7 red chips and 9 blue chips. Two chips are selected randomly without replacement from the bag. What is the probability that the two chips are the same color? Tuesday, March 30, 2010 at 1:33pm by Anna MATH Prob. a bag contains 7 red chips and 10 blue chips. Two chips are selected randomly without replacement from the bag. what is the probability that both chips are red?? Saturday, August 8, 2009 at 1:55pm by Twg 20. A bag contains 7 red chips and 10 blue chips. Two chips are selected randomly without replacement from the bag. What is the probability that both chips are red Friday, January 15, 2010 at 1:05pm by for Ms. Sue There are 25 red, blue, yellow and green marbles in a bag. Four of the marbles are blue and the probability of selecting a blue or green marble at random is 40%. Write and solve an equation to determine the number of green marbles in the bag Monday, March 28, 2011 at 5:44pm by maymay a bag contains 6 red marbles, 5 yellow marbles, 7 blue marbles, and 4 white marbles. if one marble is drawn at random from the bag, what is the probability that it will be white? the answer i got is 2 over 11 . is this correct? Tuesday, July 14, 2009 at 3:56pm by sweetangle Algebra.. PLEASE help me!! Robin tosses a fair coin and then draws a ball from a bag that contains one red, one blue, and one green ball. PART A: What are the possible outcomes for the experiment? Explain. PART B: Three balls-one red, one blue, and one green-are added to the bag. Will the total number ... Thursday, April 11, 2013 at 5:32pm by Syram A bag contains 2 white marbles, 4 blue marbles, and 5 red marbles. Three marbles are drawn from the bag. What is the probability that not all of them are red? Thursday, October 18, 2007 at 10:04pm by Anonymous A bag of M&Ms contains 12 red, 11 yellow, 5 green, 6 orange, 5 blue, and 16 brown candies. What is the probability that if you choose 2 M&Ms from the bag (one after the other) without looking, you will choose 2 yellow ones? Wednesday, June 3, 2009 at 9:38am by Todd A bag contains 6 red marbles nad 4 blue marbles if Dana draws one marble from the bag and then draws another without replacing the first, what is the probability that both marbles will be red? Thursday, May 5, 2011 at 10:12pm by heather suppose a bag contains three orange marbles and two blue marbles. you are to choose a marble, return it to the bag, and then choose again and suppose you do this 50 times what is chance of getting 2 marbles of the same color. Tuesday, October 18, 2011 at 9:59pm by kiki A bag contains x marbles. Half of the marbles are blue. Four less than the number of blue marbles are red. The remaining marbles are purple. What is the probability of choosing a purple marble at random? a. 2 b. 4 c. 1/2-(x/2-4)/x d. x/2-4 e. x/2 please answer and explain Friday, October 25, 2013 at 12:18pm by Tomas A bag contain 9 red, 8 yellow, 5 pink and 6 blue marbles. The experiment is to randomly select on marble out of the bag. What is the probability that chosen marble is a)blue? b)yellow? c) red or pink d)green or blue? need to know what all are Friday, October 19, 2012 at 12:18am by #@@# college math a bag contains 7 red chips and 9 blue chips. Two chips are selected randomly from the bag without replacement. What is the probability that the two chips are the same color? 
If someone could just point me in the right direction for the many equations I could use I would be ... Monday, November 23, 2009 at 12:45am by Lauren a bag contains 6 red marbles and 2 blue marbles. what is the probability of chosing a red marble and a blue marble at the same time? Thursday, May 5, 2011 at 11:09pm by Cassidy A bag contains red, blue, and orange marbles. If the probability of randomly selecting a red marble is 0.4 and the probability of selecting a red or blue marble is 0.9 what is the probability of selecting a red or orange marble? In my opinion, the easiest way to solve these ... Thursday, January 25, 2007 at 6:46pm by Preston A bag contains 7 red chips and 10 blue chips. Two chips are selected randomly without replacement from the bag. What is the probability that both chips are red? Friday, January 15, 2010 at 1:11pm by KiKi 7. A bag contains 5 red marbles, 6 green marbles, and 9 blue marbles. Suppose 3 marbles are chosen without replacement. What is the probability of choosing a red, a blue, and a red in that order?: A. 9/320 B. 5/152 C. 1/1280 D. 1/38 Wednesday, September 26, 2007 at 12:59am by Anonymous A bag contains 11 red balls, 9 blue balls and x yellow balls. The probabilty of choosing a yellow ball at random is 1/6. What is the probability of not getting a blue ball. Tuesday, November 19, 2013 at 8:52am by danial MATH 12 HELP! One bag contains 4 white balls and 6 black balls. Another bag contains 8 white balls and 2 black balls. A coin tossed to select a bad, then a ball is randomly selected from that bag. Suppose white ball was drawn. What is the probability that it came from the first bag? Sunday, May 20, 2012 at 2:15am by the man 7. A bag contains 5 red marbles, 6 green marbles, and 9 blue marbles. Suppose 3 marbles are chosen without replacement. What is the probability of choosing a red, a blue, and a red in that order?: A. 9/320 B. 5/152 C. 1/1280 D. 1/38 Please explain answer. Thanks Wednesday, September 26, 2007 at 12:59am by Anonymous Y can be anything, depending what was originally in the bag. If there were originally 1 blue and 10 green marbles in the bag, y=1. If there were originally 101 blue and 10 green marbles in the bag, y =101. If there were originally 20 blue and 15 green marbles in the bag, y=5. Sunday, May 27, 2012 at 12:10pm by MathMate Out of 40 gum balls in Jamie's bag, 10 will be red and 30 will be blue. Out of Bryce's bag, 8 will be red and 32 will be blue. You calculate. Monday, January 28, 2013 at 5:40pm by PsyDAG algebra 2 Rami draws a chip from the bag without looking. He keeps the chip and then draws antoher one from the bag. What is the probability that both chips are blue? 3 yellow, 2 brown, 5 green, 4 blue, an 2 Friday, May 21, 2010 at 4:37pm by Logan A bag contains 3 red, 5 black and 5 blue marbles. Four marbles are selected at random without replacement: - the probability that all 4 are black? -the probability that exactly two are blue and none are red? Tuesday, January 25, 2011 at 2:23pm by Sandy Sharon is dividing her polished rock collection into bags. Each bag contains the same number of each color of rocks. She is placing the green and blue rocks into the same bags. How many rocks will be in each bag? Tuesday, October 30, 2012 at 10:08pm by Anonymous 1.A box contains four red marbles, seven white marbles, and five blue marbles. 
If one marble is drawn at random, find the probability for each of the following: a.A blue marble is drawn: ______ b.A red or blue marble is drawn: _____ c.Neither a blue or red marble is drawn: Friday, October 9, 2009 at 5:09pm by cindy Say that a bag contains 100 marbles: 30 red, 30 blue, 30 green, plus a mix of 10 yellow and orange marbles. To be certain that you have 10 marbles of the same color, what is the minimum number you would need to remove (without looking) from the bag? Monday, March 12, 2012 at 11:08pm by Cyndie A bag contains 10 red marbles, 10 green marbles, 10 yellow marbles and 10 blue marbles. You reach into the bag and grab a marble, then reach into the bag and grab a second marble. The probability that the second marble is the same color as the first marble is ab, where a and b... Monday, April 15, 2013 at 3:33am by Ian Could you Please help me punctuate this paragraph. on his way home from school tom found a bag on the ground is this bag yours he asked tara no its not my bag I left mine at school I think it might be emma's bag because hers is blue and green tom laughed and said you may be ... Sunday, September 23, 2007 at 5:38am by Robert Math help! A bag contains 9 blue marbles and 1 green marble. What is the probability of drawing a blue marble followed by a green marble, without replacing the first marble before drawing the second marble? Is the answer 1/9? Thursday, May 30, 2013 at 3:32pm by Jacob We have two bags of marbles. The first bag has a ratio of 2 blue marbles to 7 red ones. The second bag has the same ratio. If the 2nd bag has 37 red marbles, how many blue ones does it have? Wednesday, August 24, 2011 at 10:25am by Anonymous a bag of 100 marbles contains 0 blue, 25 green, 25 mixed,and 20 clear. what is the probality of selecting a blue? ablue and green? a marble that is not clear? ablue on the first draw and a clearon the second draw? a Monday, April 25, 2011 at 12:45pm by ryee research methods a bag of 100 marbles contains 0 blue, 25 green, 25 mixed,and 20 clear. what is the probality of selecting a blue? ablue and green? a marble that is not clear? ablue on the first draw and a clearon the second draw? Sunday, April 24, 2011 at 6:16pm by maria a bag of 100 marbles contains 30 blue, 25 green, 25 mixed,and 20 clear. what is the probality of selecting a blue? ablue and green? a marble that is not clear? ablue on the first draw and a clearon the second draw? a Tuesday, April 26, 2011 at 12:09am by ryee 14) There are 6 marbles in a bag, 3 blue, 2 red, 1 green. Aubrey reaches into the bag and pulls out a marble, does not replace it, then chooses another. What is the probability that Aubrey chides a blue and then a green marble? Thursday, June 7, 2012 at 12:05am by Sammyjoo sharon is diving her green and blue rock collection into bags. Each bag will contain the same number of each color of rock. How many rocks of each color will be in each bag? there are 16 green marbles and 24 blue Monday, October 22, 2012 at 8:57pm by Anonymous a bag full of gum drops. 2/3 of which are blue. the red drops are 3/8 of the number of blue. There are 3 green drops. How many drops in the bag? Friday, March 25, 2011 at 7:36am by Rachel A bag contains 7 red marbles, 2 blue marbles, and 1 green marble. If a marble is selected at random, what is the probability of choosing a marble that is not blue? I have seen so many different answers on this site to this question and am not sure. I thought it would be 7 red... 
Saturday, November 27, 2010 at 1:42pm by m A bag contains five pieces of paper with the measures of various angles. The angle measures are 23, 35, 23, 41, 113. Suppose you pick 3 pieces of paper from the bag at random. What is the probability you can form an isosceles triangle with the three angles? Sunday, July 24, 2011 at 1:17pm by Lisa Math 1 9th grade Well, adding all the marbles together, 3 green marbles and 6 blue marbles, you would get 9 marbles in the bag altogether. If you place the marble back into the bag after drawing it, the probability of getting a green marble would be 3/9, or 1/3 of the bag. If you wanted to get... Thursday, January 20, 2011 at 7:35pm by Danielle A box contains four red marbles, seven white marbles and five blue marbles. If one marble is drawn at random, find the probability for each of the following: (a) A blue marble is drawn. _____________ (b) A red or a blue marble is drawn. _____________ (c) Neither a red nor a blue ... Monday, March 8, 2010 at 9:22pm by alicia Tim put red, white and blue marbles in a bag. One-third are red, one-fourth are white and ten are blue. How many of each are in the bag and what color are they? Wednesday, April 14, 2010 at 9:52pm by Sally Bag 1 contains 2 white and 3 red balls and bag 2 contains 4 white and 5 red balls. 1 ball is drawn at random from one of the bags and is found to be red. Find the probability that it was drawn from bag 2. Wednesday, March 20, 2013 at 1:07pm by anoynomous a bag contains 3 blue marbles, 9 green marbles, and 11 yellow marbles. Twice you draw a marble and replace it. Find P(blue then green) a) 27/529 b) 27/23 c) 15/529 d) 12/23 Thursday, May 9, 2013 at 11:46am by Diana
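The query at the top of this listing is itself a one-step calculation; as a worked check (using only the numbers stated in that query, five blue and three red marbles):

\[P(\text{first marble is blue}) = \frac{\text{number of blue marbles}}{\text{total marbles}} = \frac{5}{5+3} = \frac{5}{8} = 0.625\]

This matches the Pr(blue) = number blue/total formula quoted in the first answer above.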
{"url":"http://www.jiskha.com/search/index.cgi?query=A+bag+contains+five+blue+marbles+and+three+red+marbles.+One+marble+is+drawn+at+random,+What+is+the+probability+that+the+first+marble+was+blue","timestamp":"2014-04-21T08:28:51Z","content_type":null,"content_length":"41791","record_id":"<urn:uuid:d4b39ad7-731f-43d4-867f-f366423d6cd1>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
Using Microsoft Excel to Explore Gravity Forces and Accelerations In this activity, students will learn about the force of gravity and the factors that affect it (mass of objects and distance between them). They will learn the universal gravitation equation and analyze it qualitatively. They will then learn how to use Microsoft Excel and use the program to create a "gravity calculator" which will automatically calculate force of gravity and acceleration of two different objects if the user inputs the masses of the objects and the distance between them. They will use this gravity calculator to analyze several actual and hypothetical gravitational interactions (on earth and in space) to get a feeling for gravity forces and accelerations. Learning Goals This activity will help students gain a qualitative and quantitative understanding of gravity forces and accelerations. This activity will help students learn how to use Microsoft Excel to perform complex calculations. Key Concepts: Gravity force is directly proportional to the masses of the objects involved and inversely proportional to the square of the distance between those objects. Force of gravity between two objects is equal and opposite for the two objects. Acceleration of the objects toward each other depends on the force and the masses of the objects. On earth, gravity forces are different for all objects, but acceleration values are always about 9.8 m/s/s. On earth, gravity forces do pull the earth toward objects, but the resulting acceleration of the earth is almost always so small as to be imperceptible. Gravity, Force, Acceleration, Gravitational Constant Context for Use This is best used with high school physics classes of any size that have access to computers or a computer lab. The lesson begins in the classroom, moves to the computer lab, and can then be wrapped up in the classroom. It is a lecture/discussion/computer activity lesson. Before beginning, students should be familiar with Newton's laws of physics. The amount of time required will vary from 2-4 normal (50 minute) class periods, depending on the mathematical sophistication of the students and their familiarity with Microsoft Excel and computers in general. Subject: Physics:Classical Mechanics:Gravity Resource Type: Activities:Classroom Activity Grade Level: High School (9-12) Description and Teaching Materials This lesson comes after students have done a pre-test and a qualitative computer simulation activity involving gravity (solar system orbits). The lesson begins with some background information about universal gravitation and the equation used to quantify forces of gravity. Microsoft Excel is then introduced and examples of how I use it (tracking student participation grades, keeping track of softball stats, randomizing student groups, etc.) are shown on the projector. I then show students how to do some simple calculations and formatting using Excel before explaining the assignment, which is to construct a simple calculator, have it checked, and then construct a more complicated gravity calculator that will take the masses of two objects and the distance between them as inputs and calculate the force of gravity and the acceleration of each object toward the other. After explaining the activity, students move to the computer lab, where they follow written directions to build the calculators and have them checked off by the teacher. Ideally, each student should have their own computer, but this could be done in pairs if necessary.
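The activity describes the calculator's arithmetic only in words, so here is a minimal sketch of the computation the finished spreadsheet performs (a hypothetical Python translation of the described calculation; the function name and example values are mine, not part of the activity):

G = 6.674e-11  # universal gravitational constant, N*m^2/kg^2

def gravity_calculator(m1, m2, d):
    """Return (force, a1, a2) for masses m1, m2 in kg separated by d meters."""
    force = G * m1 * m2 / d ** 2  # Newton's law of universal gravitation
    a1 = force / m1               # acceleration of object 1 toward object 2 (F = m*a)
    a2 = force / m2               # acceleration of object 2 toward object 1
    return force, a1, a2

# Example analogous to the "student & earth" case in the worksheet:
# a 60 kg student at the earth's surface (mass ~5.97e24 kg, radius ~6.37e6 m).
f, a_student, a_earth = gravity_calculator(60, 5.97e24, 6.37e6)
print(f, a_student, a_earth)  # ~589 N; ~9.8 m/s^2; ~1e-22 m/s^2

This mirrors the key observation the activity aims for: the force on the student and on the earth is the same, but the earth's resulting acceleration is imperceptibly small.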
Once they have created functional calculators, students calculate forces and accelerations for various sets of objects, including ant & earth, elephant & earth, student & earth, student & Jupiter, student & moon, earth & moon, earth & sun, student & student, among others. They are asked to record their results and answer some analysis questions about their findings. Ideally, they will note that all objects accelerate at the same rate toward earth even though the forces of gravity between earth and different objects vary. They also note that forces and accelerations are minuscule for all but the most massive of objects, and that forces fall off rapidly as distances between objects increase. Upon completing the activity, we go over some of the important points in class. A group quiz or individual quiz on the basics of gravity can also be given at this time. Worksheet to handout / turn in / assess (Microsoft Word 207kB Oct13 08) Teaching Notes and Tips Student familiarity with computers and MS Excel in particular will vary wildly. Consider allowing advanced students to complete extra credit spreadsheets that will help them with something of interest (tracking grades or sports results or wages, etc.) You may consider letting your MS Excel experts help the beginners, but I find that they usually end up doing their work for them, so I try to avoid this. It is very difficult to write good instructions for MS Excel formatting and calculations. Be sure to let students know that whatever instructions you give will need to be read closely to be useful, and that the "help" feature on Excel will probably be more useful. Kick students out of the computer lab if they use email or youtube before completing the assignment. If you have advanced students, consider simply giving them the problem and withholding any instructions. If you plan to use Excel later in the course, this is a good introduction. If you have already used Excel, this is a good review/new application. Students are visually assessed based on their participation during computer lab time. Their worksheets (with results and analysis) are collected and graded. Students are quizzed (in groups or individually) on gravity concepts and Excel usage after this activity. 0.2.2.1.3 How push and pull forces make objects move. 4.1.3.2.1 Use data to construct reasonable explanations. 4.1.4.2.1 Recognize that parts of a system influence one another. 5.2.2.1.1 Demonstrate that the greater the force applied, the greater the change in motion. 8.2.2.2.3 Explain how gravity affects the motion of objects, including planets. 9.1.3.1.3 Use appropriate tools (including computers) and techniques (including mathematical formulas) in gathering, analyzing, and interpreting data. 9P.2.2.1.5 Use Newton's law of gravitation to explain the motion of astronomical bodies. References and Resources
{"url":"http://serc.carleton.edu/sp/mnstep/activities/27949.html","timestamp":"2014-04-18T16:10:30Z","content_type":null,"content_length":"25836","record_id":"<urn:uuid:1f8bb24c-bbfc-4a15-8e70-fe39f2b3fb8a>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
On September 12, 1966, a Gemini spacecraft piloted by astronauts Pete ... compute the average force exerted by the seatbelt and shoulder strap on the person. ... – PowerPoint PPT presentation, 33 slides
{"url":"http://www.powershow.com/view/28f33-MzJkN/Momentum_powerpoint_ppt_presentation","timestamp":"2014-04-21T09:41:36Z","content_type":null,"content_length":"109060","record_id":"<urn:uuid:0da33b9c-36c5-43f2-9207-8a132ae492b2>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
First International Conference on Quantum Error Correction Quantum error correction of decoherence and faulty control operations forms the backbone of all of quantum information processing. In spite of remarkable progress on this front ever since the discovery of quantum error correcting codes a decade ago, there remain important open problems in both theory and applications to real physical systems. In short, a theory of quantum error correction that is at the same time comprehensive and realistically applicable has not yet been discovered. Therefore the subject remains a very active area of research with a continuing stream of progress and breakthroughs. The First International Conference on Quantum Error Correction, hosted by the USC Center for Quantum Information Science & Technology (CQIST), will bring together a wide group of experts to discuss all aspects of decoherence control and fault tolerance. The subject is at this point in time of a mostly theoretical nature, but the conference will include talks surveying the latest experimental progress, and will seek to promote an interaction between theoreticians and experimentalists. Topics of interest include, in random order: fault tolerance and thresholds, pulse control methods (dynamical decoupling), hybrid methods, applications to cryptography, decoherence-free subspaces and noiseless subsystems, operator quantum error correction, advanced codes (convolutional codes, catalytic, entanglement assisted, ...), topological codes, fault tolerance in the cluster model, fault tolerance in linear optics QC, fault tolerance in condensed matter systems, unification of error correction paradigms, self-correcting systems, error correction/avoidance via energy gaps, error correction in adiabatic QC, composite pulses, continuous-time QEC, error correction for specific errors (e.g., spontaneous emission), etc. The conference will take place Dec. 17-21 at the University of Southern California in Los Angeles. Source: University of Southern California
{"url":"http://phys.org/news110454920.html","timestamp":"2014-04-17T01:09:00Z","content_type":null,"content_length":"65638","record_id":"<urn:uuid:e444a302-b967-4e88-b83c-3fe9ea9ca534>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
Win a family ticket (2 adults + 3 kids) for The Dungeons® worth up to £97.50!! **THIS COMPETITION IS NOW CLOSED – THE WINNER IS LYNN SAVAGE** To celebrate the UK's First 5D Laser Ride to be Unleashed at the London Dungeon on 28th May, Room for 5 have teamed up with The Dungeons to give one lucky person a family ticket for 2 adults and 3 children for any of the Dungeons listed below* A UK first, Vengeance is only the third 5D ride in the world; it promises a unique and totally immersive experience which puts riders at the heart of the action. Taking scare to a whole new level, this technologically groundbreaking 5D ride is a full scale assault on the senses. Guests will be transported to a séance set in Victorian times at 50 Berkeley Square, alleged to be the most haunted house in London at the time. Here riders will come face to face with one of the most famous and feared mediums of the age, Florence Cook, who will take them on the ultimate ghost hunt in a high octane battle of skill, speed and nerve. This is definitely not a ride for the faint hearted and wusses should take the 'escape door' option! Vengeance will join the London attraction's action-packed mix of fun scares and twisted experiences. These include two other thrilling rides, gruesomely dark interactive shows featuring live actors, special effects and tongue in cheek humour – all bringing to life history's most horrible bits. To be in with a chance of winning this fab prize all you have to do is: Click the like button on our facebook page and write on our FACEBOOK WALL telling us the answer to this simple question: How many 5D rides are there in the world? *The lucky winner, picked by random.org, will receive a family ticket (2 adults and 3 children) to any of the Dungeons: London, York, Edinburgh, Amsterdam or Hamburg. That's it! What have you got to lose? Closing date is Friday 13th May 2011 (Mwwaaahaaaa!!) Entrants must be 18 years or over. There is no cash alternative. Competition open to UK residents only. Please click here to see all attractions available at The Dungeons® Tags: family attractions amsterdam, family attractions edinburgh, family attractions hamburg, family attractions london, family attractions york, family day out amsterdam, family day out edinburgh, family day out hamburg, family day out london, family day out york, family london dungeon competition, family ticket amsterdam dungeon, family ticket edinburgh dungeon, family ticket hamburg dungeon, family ticket london dungeon, Family ticket York Dungeon, win family ticket MY Answer is 3 answer is 3 That's the right answer but don't forget to post it on our facebook wall!:) Answer 3 Answer: 3 That's the right answer but don't forget to post it on our facebook wall!:) You need to post it on our facebook wall!:) answer is 3 x the answer is three My answer is 3. There are 3 Yes, that's the right answer but don't forget to post on our facebook wall:) Yep, that's right but don't forget to post on our facebook wall:) Don't forget to post your answer on our facebook wall:) Answer: 3 Yep, don't forget to post your answer on our facebook wall:) Answer = 3 Answer – 3 ANSWER ~ 3 THANK YOU Have liked and poted 3 as answer on the wall That's great!:) answer – 3 Answer: 3 ANSWER 3 The answer is – 3 So glad i've found this site, will be very useful for us.
Thanks Michelle:) Please don’t forget to post the answer on our facebook wall:) That’s right Luisa but don’t forget to post it on our facebook wall:) My answer is 3 Answer – 3 answer – 3 Think it might be 3 also posted on your facebook wall
{"url":"http://www.roomfor5.co.uk/blog/?p=1532","timestamp":"2014-04-18T18:18:52Z","content_type":null,"content_length":"78337","record_id":"<urn:uuid:a95a2ab5-cfb2-434b-96e4-bcffaeb79da0>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - how do you factor this? Date: Oct 11, 2012 3:01 PM Author: justlooking for someone else Subject: how do you factor this? Hi, I am teaching my daughter how to factor equations and came across an expression in a worksheet that looks like below: abc + ab + ac + bc + a + b + c + 1. The answer given (no work shown) shows (a+1)(b+1)(c+1) would be the factors. Sure enough, when I multiply all those terms, eventually, I get the above result. How do I go about deciphering how to group and factor the original expression above? Kinda confused. I was able to group them and factor somewhat, but could not get it down to the answer that is listed. Please help.
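Since the poster's question is how to find the grouping, here is one worked route (my addition; it takes the expression to be the expansion of the stated answer):

\[abc + ab + ac + a + bc + b + c + 1 = a(bc + b + c + 1) + (bc + b + c + 1) = (a+1)(bc + b + c + 1)\]
\[bc + b + c + 1 = b(c+1) + (c+1) = (b+1)(c+1), \quad\text{so the product is } (a+1)(b+1)(c+1).\]

The trick is to split the eight terms into the four containing a and the four that don't; each half shares the common factor bc + b + c + 1.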
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7904405","timestamp":"2014-04-17T16:10:21Z","content_type":null,"content_length":"1537","record_id":"<urn:uuid:32de818d-6547-4e30-aaab-e8aefd491aed>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
Diffusion on 512×512 square. Implicit scheme and relaxation algorithm. Diffusion equation For the diffusion equation ∂_t u = Δu, u|_{t=0} = exp(−(r − r_0)²/a²), the finite-difference scheme on a square grid with space step h and time step Δt is u_{x,y}^{t+1} = u_{x,y}^{t} + {[u_{x+1,y} + u_{x−1,y} + u_{x,y+1} + u_{x,y−1}] − 4u_{x,y}} (Δt/h²). Note that the time superscript is omitted in braces. Explicit scheme If we use superscript t in braces we get the simple explicit scheme (the long term in the square brackets is abbreviated [u^t]) u_{x,y}^{t+1} = u_{x,y}^{t} + {[u^t] − 4u_{x,y}^{t}} (Δt/h²). It is stable only for Δt/h² < 1/4, so for small h we need a very small time step. Implicit scheme and relaxation algorithm The implicit scheme u_{x,y}^{t+1} = u_{x,y}^{t} + {[u^{t+1}] − 4u_{x,y}^{t+1}} (Δt/h²) is stable for all Δt/h². We rewrite it as u_{x,y}^{t+1}(1 + 4Δt/h²) = u_{x,y}^{t} + [u^{t+1}] (Δt/h²) and solve iteratively, starting from [u^t]. In fact the application above makes only two iterations, so it is not accurate for large time steps. Diffusion is not very impressive without convection...
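As a concrete illustration of the relaxation loop just described (iterating the rearranged implicit update, starting from u^t), here is a minimal NumPy sketch; this is my own translation of the page's formulas, not the page's actual WebGL/GPU code:

import numpy as np

def implicit_diffusion_step(u, dt_over_h2, iterations=2):
    """Advance u one time step with the implicit scheme, solved by relaxation.
    u is a 2D array of values u^t; dt_over_h2 is the ratio dt/h^2."""
    v = u.copy()                    # initial guess for u^{t+1} is u^t
    for _ in range(iterations):     # the page notes only two iterations are made
        # four-neighbour sum, i.e. the [u^{t+1}] term (np.roll wraps the
        # edges, giving periodic boundaries; a simplification on my part)
        nbrs = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                np.roll(v, 1, 1) + np.roll(v, -1, 1))
        # u^{t+1} (1 + 4 dt/h^2) = u^t + [u^{t+1}] (dt/h^2)
        v = (u + nbrs * dt_over_h2) / (1.0 + 4.0 * dt_over_h2)
    return v

With only two iterations the solve is approximate, which is exactly why the page warns the result is inaccurate for large time steps.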
{"url":"http://www.ibiblio.org/e-notes/webgl/gpu/diff_impl.htm","timestamp":"2014-04-21T07:30:49Z","content_type":null,"content_length":"10247","record_id":"<urn:uuid:7e6247ff-f08d-4e63-8105-1d9aefd940fc>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
Ecology and Society: Human Activity Differentially Redistributes Large Mammals in the Canadian Rockies National Parks Fig. 2. Graphs of the change in wolf relative probability of use as a function of increasing trail activity within six ‘distance-to-trail’ categories. The x-axis is hourly trail activity and y-axis is relative probability of use. A linear stretch was used to scale the predicted values between 0 and 1 following Johnson et al. (2004).
{"url":"http://www.ecologyandsociety.org/vol16/iss3/art16/figure2.html","timestamp":"2014-04-17T04:32:20Z","content_type":null,"content_length":"1611","record_id":"<urn:uuid:59e6f641-f9a7-473d-accd-887fd23cb3c1>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Tools Discussion: All Activities in Calculus on Computer Algebra System, Integration Discussion: All Activities in Calculus on Computer Algebra System Topic: Integration Subject: RE: Integration Author: Mathman Date: Apr 26 2006 On Apr 25 2006, natty wrote: > I would like to know which kind of technique I have to use in order to integrate the following expression: e^(-8*x) / ((x^3)*(4*x + 1)) I thought I'd see if anyone took a shot first. In any event, this sort of unresolvable problem is, or used to be, common in some internet newsgroups. It is neither a textbook problem nor of a physical nature, but is devised. Any number of such problems can be devised by mixing functions in various ways. There is no analytic solution in elementary terms. There may be a graphical or numerical approximation to the definite integral, but there is no straightforward workout. Consider simplifying this for a connection: the rational factor decomposes by partial fractions as 1/x^3 - 4/x^2 + 16/x - 64/(4x+1). Each term is associated with the exponential as the differential portion of the integrand. Each has a unique form of solution, even if associated with dx, but when associated with e^(-8x) it's simply a can of worms that even the symbolic algebra programs kick out. Perhaps Mathematica has a solution. I don't have access to that. Good luck.
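For completeness (my addition, not part of the thread): the terms of that partial-fraction split do have closed forms in terms of the exponential integral Ei, which is why elementary-function routines reject the problem. For example,

\[\int \frac{e^{-8x}}{x}\,dx = \operatorname{Ei}(-8x) + C,\]

and the 1/x^2 and 1/x^3 terms reduce to the same Ei form after integration by parts, so the full antiderivative is expressible with Ei but not with elementary functions alone.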
{"url":"http://mathforum.org/mathtools/discuss.html?context=cell&do=r&msg=24189","timestamp":"2014-04-19T15:25:25Z","content_type":null,"content_length":"16955","record_id":"<urn:uuid:5aff3cad-1b01-4a35-847f-5f6c53f66860>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
Using Laboratory-Based Surveillance Data for Prevention: An Algorithm for Detecting Volume 3, Number 3—September 1997 Using Laboratory-Based Surveillance Data for Prevention: An Algorithm for Detecting Salmonella Outbreaks Suggested citation for this article By applying cumulative sums (CUSUM), a quality control method commonly used in manufacturing, we constructed a process for detecting unusual clusters among reported laboratory isolates of disease-causing organisms. We developed a computer algorithm based on minimal adjustments to the CUSUM method, which cumulates sums of the differences between frequencies of isolates and their expected means; we used the algorithm to identify outbreaks of Salmonella Enteritidis isolates reported in 1993. By comparing these detected outbreaks with known reported outbreaks, we estimated the sensitivity, specificity, and false-positive rate of the method. Sensitivity by state in which the outbreak was reported was 0% (0/1) to 100%. Specificity was 64% to 100%, and the false-positive rate was 0 to 1. Effective surveillance systems provide baseline information on incidence trends and geographic distribution of known infectious agents. The ability to provide such information is a prerequisite to detecting new or reemerging threats (1). Laboratory-based surveillance can provide data on the location and frequency of isolation of specific pathogens, which can be used to rapidly detect unusual increases or clusters. These data can be transmitted electronically from multiple public health sites to a central location for analysis. Many acute outbreaks of infectious diseases are detected by astute clinical observers, local public health authorities, or the affected persons themselves. However, outbreaks dispersed over a broad geographic area, with relatively few cases in any one jurisdiction, are much more difficult to detect locally. Rapid analysis of data to detect unusual disease clusters is the first step in recognizing outbreaks. We developed an algorithm for the Public Health Laboratory Information System (PHLIS) (2) that detects unusual clusters by using a statistical quality control method called cumulative sums (CUSUM), a method commonly used in manufacturing. CUSUM has also been applied to medical audits of influenza surveillance in England and Wales (3,4). The Algorithm The statistical problem of detecting unusual disease clusters in public health surveillance is similar to that of detecting clusters of defective items in manufacturing. In both cases, the aim is to detect an unusual number of occurrences. Manufacturing operations use several existing quality control methods, e.g., Shewhart Charts, moving average control, and CUSUM, to indicate abnormalities in data collected (5,6). Of these methods, CUSUM has two unique attributes that make it especially suitable for disease outbreak detection. CUSUM detects smaller shifts from the mean, and it detects similar shifts in the mean more quickly (6-8). The computational simplicity of this method also makes it especially well suited for use on personal computers. Other published methods (9-11) require more personal interactions, e.g., model building, and use more intense computations. Applying the Algorithm to Surveillance Data To evaluate how well the CUSUM algorithm detects unusual clusters of disease, we applied it to the Centers for Disease Control and Prevention (CDC) National Salmonella Surveillance System dataset. 
Since 1962, this surveillance system has collected reports of laboratory-confirmed Salmonella isolates from human sources from all U.S. state public health laboratories and the District of Columbia (12). The laboratories serotype clinical isolates of Salmonella by the Kauffmann-White methods, which subdivide this diverse bacterial genus into more than 2,000 named serotypes (13). Each week, laboratories report to CDC each Salmonella strain they have serotyped, along with the age, sex, county of residence of the person from whom it was isolated, and date of specimen collection. The algorithm uses date of specimen collection, which we consider the nearest reliable date to the date the infection began. A one-sided CUSUM was calculated for every reported Salmonella serotype and week by using several values for the expected mean. Different expected means were used in the algorithm to identify which value accurately represented the historical data. First we calculated the mean of 5 weeks and the median of 5 weeks for each Salmonella serotype for the same week over the previous 5 years. We then calculated the mean of 15 weeks, which is the mean over a 3-week interval over the past 5 years. For example, for surveillance of the sixth week of 1993, we would use weeks 5 through 7 for each year from 1988 through 1992 to calculate the mean over a 3-week interval. The results of each calculation were compared to identify which value for the expected mean provided the best sensitivity, specificity, and false-positive rate. To minimize the time needed to process the outbreak detection algorithm for each reported serotype for each reported week, the algorithm was processed only for those Salmonella serotypes having a potential outbreak, an expected mean greater than zero, and counts greater than the expected mean (Figure 1). Since the entire algorithm is processed when the count for a given serotype exceeds the expected mean, the probability structure of CUSUM is not affected. Testing the Algorithm The outbreak detection algorithm was tested retrospectively to determine how well it discovered known outbreaks. To identify outbreaks, 52 weekly counts were calculated by serotype for each of the reporting sites over 5 years. The algorithm compared x[t], the current weekly count of each Salmonella serotype reported to the National Salmonella Surveillance System, with summary information from the same week over the previous 5 years. The summary information includes N[t], the total number of each Salmonella serotype reported over the past 5 years for a given week, and the expected mean over the past 5 years for a serotype for a given week. Each week, except week 52, was defined to contain 7 days. The first week of each year included January 1 through January 7; the last week contained 9 days on a leap year and 8 days otherwise. The CUSUM, S[t], is S[t] = max(0, S[t-1] + z[t] - k), where S[0] = 0, k > 0, and z[t] is the current count standardized against the baseline, z[t] = (x[t] - mean[t]) / sd[t]. This simplifies to S[t] = max(0, S[t-1] + (x[t] - mean[t] - k sd[t]) / sd[t]). The standard deviation was used in our calculations instead of the standard error. S[t] cumulates both the positive deviations of counts greater than k standard deviations from the mean and zero for the negative deviation of counts (8,10,14). The central reference, k, determines how many standard deviations are added to the mean. Setting k=1 helped control the variability in counts due to reporting errors, seasonality, and outbreaks. To detect any count above delta standard deviations from the mean, a CUSUM decision value, h, was set to ensure an appropriate average run length (ARL).
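The weekly update just described is a short recurrence; the following is a minimal Python sketch of it (my own illustration of the method as stated in the text, with hypothetical variable names, not the PHLIS implementation):

def cusum_flags(counts, means, sds, k=1.0, h=0.5):
    """One-sided CUSUM over weekly counts for one serotype at one site.
    counts[t] is x[t]; means[t] and sds[t] are the 5-year baseline for that week.
    Returns the indices of weeks flagged, i.e., where S[t] exceeds h."""
    s, flags = 0.0, []
    for t, (x, mu, sd) in enumerate(zip(counts, means, sds)):
        if sd == 0.0:
            # serotype with no baseline (not reported in the past 5 years):
            # flagged immediately as a serotype of interest if it appears
            if x > 0:
                flags.append(t)
            continue
        z = (x - mu) / sd         # standardized weekly count z[t]
        s = max(0.0, s + z - k)   # cumulate only upward deviations beyond k SDs
        if s > h:
            flags.append(t)
            s = 0.0               # restart after a flag (my assumption; the
                                  # text does not say whether S[t] is reset)
    return flags

With the study's values k=1 and h=0.5, a single week about 1.5 standard deviations above its baseline mean is already enough to raise a flag.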
The values h=0.5, k=1, and delta=0.5 yielded an ARL=6 years. This ARL allowed consideration of 5 past years of counts and the count for the current year before the CUSUM signals become out of control (15,16). A rare or uncommon serotype, i.e., a serotype that had not been reported from a state during the past 5 years, was flagged immediately as a serotype of interest. We compared flags generated by the algorithm by state and week with occurrences of reported outbreaks. We considered the sensitivity, specificity, and false-positive rate for three outbreak sizes: 1) any isolates, 2) at least three isolates, and 3) at least five isolates. Data were limited to reports during 1993 and, because we had information about previously reported outbreaks involving this serotype, to CDC's Salmonella serotype Enteritidis (SE) Outbreak Surveillance System (17). Sensitivity was calculated as the number of outbreaks flagged by the algorithm that matched SE outbreaks reported to CDC by state and by week. Because an outbreak could have received several flags corresponding to different weeks, flags in consecutive weeks were counted as both being correct. Specificity was defined as the number of weeks without flags that corresponded to weeks without reported outbreaks. The false-positive rate was defined as the proportion of flags that did not correspond to outbreaks. Results of the Test The SE Outbreak Surveillance System had 63 outbreaks reported during 1993 from 20 states and one U.S. territory. Of these 63 outbreaks, 38 reports included date of collection. Two of the reported 38 SE outbreaks occurred in the same state in the same week, and multiple outbreaks occurred 1 week apart in the same state. Therefore, it is difficult to distinguish all 38 reported outbreaks as individual outbreaks. When we used the mean of 5 weeks as the expected mean in the algorithm, 35 states had 230 flags for clusters with ≥3 isolates (Table 1). For clusters of ≥5 isolates, 25 states had 121 flags. Sensitivity calculations on these flags were 0% (0/1) to 100%, specificity was 64% to 100%, and the overall false-positive rate was 77% (Table 2). When the median of 5 weeks was used for the expected mean in the algorithm, the algorithm flagged SE in 35 states with 380 unusual clusters with ≥3 isolates. Twenty-five states had 210 flags with ≥5 isolates (these states were the same ones that were flagged when the mean of 5 weeks and counts of ≥5 isolates were used). In each instance in which using the median of 5 weeks resulted in an unusual cluster being flagged that had not been flagged using the mean of 5 weeks, the median of 5 weeks was smaller than the mean of 5 weeks. Clusters flagged by using the median of 5 weeks but not flagged by using the mean of 5 weeks ranged from three to 37 isolates, with a mean of seven per cluster. Three of these clusters with five or more isolates were known outbreaks. Thus, using the median of 5 weeks would have detected three more outbreaks than using the mean of 5 weeks, but at the expense of lower specificity.
Each time a flag occurred using the mean of 15 weeks, while no corresponding flag occurred using the mean of 5 weeks, the mean of 15 weeks was smaller than the mean of 5 weeks. In this scenario, the sizes of the clusters were 3 to 8 isolates, with an average of 5 isolates per cluster. In comparison, the mean of 5 weeks was associated with a higher specificity than the mean of 15 weeks. Without a way to calculate an overall specificity for all serotypes, the decision about which value to use as the expected mean in the algorithm was based on the data gathered about SE. Using the median of 5 weeks produced the largest number of flags and the lowest specificity; a mean of 15 weeks generated the second highest number of flags and the second lowest specificity; and using the mean of 5 weeks produced the fewest flags and the highest specificity. Even though using both the median of 5 weeks or the mean of 15 weeks produced additional early flags, this negligible increase in sensitivity was associated with a decrease in specificity. Therefore, we elected to use the mean of 5 weeks for the expected mean in the algorithm, to obtain the highest specificity. An Assessment of the Algorithm The CUSUM algorithm provides a simple method to evaluate surveillance data as they are being gathered and provides sensitive and rapid identification of unusual clusters of disease. In this algorithm, a mean of 5 weeks was a better value for the expected mean than a median of 5 weeks or a mean of 15 weeks. Using a mean of 5 weeks, the algorithm failed to flag reported outbreaks only three times. In addition, a median of 5 weeks and a mean of 15 weeks were associated with lower specificity than the mean of 5 weeks. Therefore, to achieve the best specificity we used a mean of 5 The sensitivity, specificity, and false-positive rate results indicate that the algorithm works well. However, there are several potential limitations to calculating sensitivity, specificity, and the false-positive rate as we did. Some of these include outbreak size, lack of reporting of isolates, duplicate isolate reports, and under-reporting of outbreaks. Constraints on public health resources may limit investigation of small outbreaks of SE. Therefore, we did not include these in the calculation of sensitivity. Under-reporting of isolates could cause the algorithm to miss an outbreak, regardless of its size. Under-reporting of known SE outbreaks could also inflate our estimates of specificity. An outbreak detection algorithm must have high specificity (i.e., few false flags). The algorithm can be adjusted to achieve better specificity, which would benefit state health departments that may choose to investigate small clusters. Seasonal shifts in the incidence of Salmonella can interfere with the sensitivity of the outbreak detection algorithm. In our study, we examined only unusual clusters of Salmonella that were above the normal seasonal patterns. Thus, we may have missed smaller outbreaks that were obscured by seasonality. For example, we could have overlooked an outbreak of three cases if it occurred in a season with a high background number of reported cases. The ability of the algorithm to detect outbreaks rapidly is also affected by the speed with which serotyping is done and the results reported by state public health laboratories. 
In early spring 1995, we implemented the algorithm on a weekly basis, looking for unusual clusters at the state, regional, and national levels among Salmonella isolate data reported each week from state public health laboratories to CDC. An international outbreak of Salmonella serotype Stanley was flagged in May 1995 (Figure 2). S. Stanley is an unusual serotype in the United States, with only 219 cases reported in 1994. The ensuing epidemiologic investigation implicated alfalfa sprouts as the vehicle of infection (18). Rapid detection of this outbreak resulted in the identification of a new vehicle of salmonellosis and prompted development of prevention measures. 1. Centers for Disease Control and Prevention. Addressing emerging infectious disease threats, a prevention strategy for the United States. Atlanta: CDC; 1994. 2. Martin SM, Bean NH. Data management issues for emerging diseases and new tools for managing surveillance and laboratory data. Emerg Infect Dis. 1995;1:124–8. 3. Williams SM, Parry BR, Schlup MMT. Quality control: an application of the CUSUM. BMJ. 1992;304:1359–61. 4. Tillett HE, Spencer IL. Influenza surveillance in England and Wales using routine statistics. J Hyg (Lond). 1982;88:83–94. 5. Montgomery DC. Introduction to statistical quality control. New York: John Wiley and Sons; 1985. 6. Banks J. Principles of quality control. New York: John Wiley and Sons; 1989. 7. Lucas JM. The design and use of V-Mask control schemes. Journal of Quality Technology. 1976;8:1–12. 8. Lucas JM. Counted data CUSUM's. Technometrics. 1985;27:129–44. 9. Stroup DF, Williamson GD, Herndon JL. Detection of aberrations in the occurrence of notifiable diseases surveillance data. Stat Med. 1989;8:323–9. 10. Watier L, Richardson S. A time series construction of an alert threshold with application to S. bovismorbificans. Stat Med. 1991;10:1493–509. 11. Farrington CP, Andrews NJ, Beale AD, Catchpole MA. A statistical algorithm for the early detection of outbreaks of infectious disease. J R Stat Soc [Ser A]. 1996;159:547–63. 12. Centers for Disease Control and Prevention. Salmonella surveillance. 1993-1995 Annual Tabulation Summary. Atlanta: CDC; 1996. 13. McWhorter-Murlin AC, Hickman-Brenner FW. Identification and serotyping of Salmonella and an update of the Kauffmann-White Scheme. Atlanta: CDC; 1994. 14. SAS Institute Inc. SAS/QC software: reference, version 6. 1st ed. Cary (NC): SAS Institute Inc.; 1989. 15. Goel AL, Wu SM. Determination of A.R.L. and a contour nomogram for CUSUM charts to control normal mean. Technometrics. 1971;13:221–30. 16. Lucas JM, Crosier RB. Fast initial response for CUSUM quality-control schemes: give your CUSUM a head start. Technometrics. 1982;24:199–205. 17. Mishu B, Koehler J, Lee AL, Rodrigue D, Brenner FH, Blake P, et al. Outbreaks of Salmonella enteritidis infections in the United States, 1985-1991. J Infect Dis. 1994;169:547–52. 18. Mahon B, Ponka A, Hall W, Komatsu K, Dietrich S, Siitonen A, et al. An international outbreak of Salmonella infections caused by alfalfa sprouts grown from contaminated seed. J Infect Dis. 1997;175:876–82. Suggested citation: Hutwagner LC, Maloney EK, Bean NH, Slutsker L, and Martin SM. Using Laboratory-Based Surveillance Data for Prevention: An Algorithm for Detecting Salmonella Outbreaks. Emerg Infect Dis [serial on the Internet]. 1997, Sep [date cited]. Available from http://wwwnc.cdc.gov/eid/article/3/3/97-0322.htm
{"url":"http://wwwnc.cdc.gov/eid/article/3/3/97-0322_article.htm","timestamp":"2014-04-18T06:13:34Z","content_type":null,"content_length":"87150","record_id":"<urn:uuid:b25fbae9-441a-449a-9a0c-90c701761001>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00404-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: quadratic formula In: \[ax^2 + bx + c = 0\] \[x_{1,2} = (-b \pm \sqrt{b^2-4ac}) / 2a\] plug it in (here a = 1, b = -2, c = 4): \[x_{1,2} = (-(-2) \pm \sqrt{(-2)^2-4(1)(4)}) / 2(1)\] which equals (2 plus or minus sqrt(4-16))/2 = (2 plus or minus 2(i)sqrt(3))/2, i.e. 1+(i)sqrt(3) or 1-(i)sqrt(3), where \[i = \sqrt{-1}\]
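As a quick verification of the corrected roots (my addition, not part of the thread):

\[(1 + i\sqrt{3})^2 - 2(1 + i\sqrt{3}) + 4 = (1 + 2i\sqrt{3} - 3) - 2 - 2i\sqrt{3} + 4 = 0,\]

and the conjugate root checks the same way.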
{"url":"http://openstudy.com/updates/4dbf80f4b0ab8b0bd606878b","timestamp":"2014-04-18T23:26:16Z","content_type":null,"content_length":"31007","record_id":"<urn:uuid:97ef6ce4-7921-41e7-bec5-b6624aff9394>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
Size, Error, and Confidence in The Statistics Sampler

In "The Statistics Sampler" (Bibliography No. 5), I developed a sequence of activities and lessons to introduce some basic ideas of sampling. The major mathematical underpinning for the lessons was the Central Limit Theorem. Except for counting, tallying, ratio, percent, graphing, and mean, however, NO formal mathematical skills or concepts were required of students. Intuitive ideas of random sampling, of the effects of changing sample size, and of confidence in predictions were the main outcomes expected of students.

In this Unit, I am extending the topic of sampling through activities which make more specific the concepts of, and relationships among, confidence limits, error tolerance, and sample size. Two related formulas are given so that the student, using a calculator, can apply the ideas, developed informally through the activities, to similar situations. It is assumed that the student has the prerequisite arithmetic skills and the base experience in sampling such as was developed in "The Statistics Sampler." Also required is the ability to substitute values into a formula and to evaluate the result using a calculator. As in "The Statistics Sampler," I would expect the Unit to lie within the grasp and capability of most regular education students from Grade 7 through Grade 12.

Here is the kind of problem I want to solve:

1. Before I buy an ad for my new shoelaces on MTV, I want to know the percent of high school students who watch MTV more than one hour per week. Obviously I can't poll every high school student. In order, then, to predict the percent on the basis of a sample, I first need to know: (a) How many high school students do I need in my sample? (b) If I want my answer accurate to plus or minus 10%, how will that affect the sample size? How will that affect the confidence level? (c) I want to be very confident about my answer. I can't afford the expense of being 100% confident, though. Maybe 95% "sure" is good enough. How will that affect the sample size? How will that affect the error tolerance?

Here are two more similar problems:

2. Sandra "Dunk-em" Smith is trying to decide whether to run for Student Government president in her high school. It would help her decide if she knew approximately what percent of the over 2000 students in her school know who she is. Three of her friends are willing to take a poll of a random sample of students. To be 95% sure that they have predicted an answer within plus or minus 5%, how many students should they poll?

3. Willie B. Ready works at Awesome Auto Parts. He has noticed that some of the air filters from a particular supplier are ripped. He thinks there may be as many as 2 defective filters out of every 10. He wants to be quite sure about that, however, before he goes to the trouble of changing suppliers. He decides to take a random sample of the air filters received next month to get an approximation of how many are defective. He wants to be 95% confident that the percent of defective filters he predicts is accurate to within plus or minus 3%. How many does he need to sample?

The objective of this Unit is for students to be able to answer questions like the three just posed. The formal mathematics is not really terribly difficult. We're trying to predict P, the percent or proportion of a large population which has a certain characteristic. In the three problems above, the characteristics are (1) watches MTV more than one hour per week, (2) has heard of "Dunk-em" Smith, and (3) is a defective air filter.
We will make the prediction of P based on the percent, p, of "successes" (a "success" is a member of the population having the characteristic) in the random sample. In other words, we'll get P ∈ {p ± E}; that is, we will predict P as being within the range of p plus or minus an error tolerance, E. The prediction is one in which we will have some specific degree of confidence. The degree of confidence will depend on E, how wide an error tolerance we will accept, and the size of our sample. The larger the sample, the higher our degree of confidence in the prediction. The wider the error tolerance, the higher our degree of confidence. E, then, is related both to the degree of confidence and to the sample size. It is also related to the percent or proportion of successes in our sample.

E = z √(pq/n)

z = standard normal coefficient at the stated confidence level
p = proportion of successes in the sample of size n
q = 1 - p (the proportion of failures in the same sample)
n = number of members in the sample

The more general statistical formula is
(figure available in print form)
where
µ = the population mean
X̄ = the sample mean
z_α = the standard normal coefficient corresponding to the confidence level α
σ (or s) = the standard deviation of the population, if known (or the standard deviation of the sample)
n = the size of the sample

Several references in the Bibliography, especially Devore and Peck (2), Edwards (3), and Mason (7), provide the details connecting the general case to the population proportions of binomial distributions.

Teachers should be aware of several issues I am ignoring here. The issues seem to me to cloud the fundamental concepts we want the students to grasp. Those fundamental concepts are:
(1) We can predict the population parameters from sample values.
(2) The size of the sample, the error tolerances, and the degree of confidence are interrelated.
(3) Smaller error tolerances lower the degree of confidence OR increase the sample size. Higher degrees of confidence require larger error tolerances OR larger samples.
(4) We can, for a given predicting need, specify two of the values we have been discussing and calculate the third value. There are simple formulas we can use.

"...issues I am ignoring..." Yes. (1) Does the form of the population distribution matter? Maybe. The Central Limit Theorem, however, guarantees us that regardless of the distribution of the original population, the distribution of sample means (which is what we're basing prediction on) tends to the normal distribution as the sample size increases. Therefore the binomially distributed cases we are concerned with in this Unit can be treated as normal distributions. (2) Suppose the population is relatively small? Or the sample size is small? Or the ratio of sample size to population is relatively large? Or we do or do not know the population variance? Then we would need to worry about correction factors, t-tests and other hypothesis-testing tools, and whether or not the standard deviation is defined in terms of n or n-1. The application problems posed and attacked in this Unit do not get into such detailed complications. If your class, or a student, wants to tackle such situations after the points in this Unit are understood, then by all means explore the potential problems and solutions. All of the references in the Bibliography agree that if both np ≥ 5 and nq ≥ 5 then the simple procedures presented here are appropriate.
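[For classes with access to a computer, the first formula is easy to program. A minimal C sketch, with z = 1.96 for 95% confidence and sample values chosen only for illustration:

#include <stdio.h>
#include <math.h>

/* E = z * sqrt(p*q/n): the error tolerance for a sample proportion p
   in a sample of size n (appropriate when np >= 5 and nq >= 5). */
double error_tolerance(double z, double p, int n)
{
    double q = 1.0 - p;
    return z * sqrt(p * q / n);
}

int main(void)
{
    printf("p = 0.62, n = 26:  E = %.3f\n", error_tolerance(1.96, 0.62, 26));
    printf("p = 0.62, n = 110: E = %.3f\n", error_tolerance(1.96, 0.62, 110));
    return 0;
}

Notice how E shrinks as n grows, which is the point of the Lessons that follow.]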
Each of the following Lessons should take from two to five class periods, depending on task efficiency in sampling, sophistication of discussion, the skill levels for charting, graphing, finding percent, rounding off, combining smaller samples into larger ones, etc. The Lesson outlines deal with content, not management or individual differences or testing. All of the Lessons are described in a format similar to that of "The Statistics Sampler":
A. Objective
B. The experimental question — a question which involves statistical sampling
C. Issues and some possible resolutions — a mini-lecture, a series of questions which should arise, a conversation between teacher and class. There is occasionally a direct comment to the teacher in brackets — or a lesson continued for illustration with my data. This section really defines the activity.
D. Observations and discussion to Objective — more questions or mini-lecture or summary or dialogue relating specifically to the stated Objective or bridging to the next activity.
The lesson numbering continues from "The Statistics Sampler."

Let's work with the language a bit. We said, "For sample size 50, 69% of our samples have 'percent red' within E = 4 of the population value." Said another way, if we took just one sample of 50, there would be about a 69% chance that it would have a "percent red" value within E = 4 of the actual population value. Or another way, if we took just one sample of 50, we would be 69% confident that it would have a "percent red" value within E = 4 of the actual population value.

I can hear you now! "Hold it! Hold it!" you say. "If we take just one sample, we won't know what the population value is. So what good does it do us to be 69% 'confident' our value is within E = 4 of it?" Well, look at it from the other side. If my value is within 4 of some other value, then isn't the other value within 4 of mine? If the other value is 13 and my value is 16, we are within 4 of each other. If my value is 9, we are still within 4 of each other. If I get 16, I'll simply say that the other value, the value I want to predict, is between 16 - 4 = 12 and 16 + 4 = 20. 13 qualifies, doesn't it?! If I get 9, I'll predict 5 to 13. 13 still qualifies! And if I am 69% confident my value is within 4 of the true one, then I am 69% confident the true value is within 4 of mine.

When we started this experiment with the little cubes, we pretended the cubes were residents of the planet Colsquar, and red residents (cubes) liked the red records we wanted to sell. We wanted to predict the percent of the population which was red. Now pretend something different. Pretend that the colored cubes are air filters. Red cubes are defective. Willie B. Ready of Awesome Auto Parts wants to predict what percent are defective with 95% confidence and error tolerance E = 3. How many filters does he need in his sample? He can get to E = 3 for a sample of 10, but only at the 44% confidence level. For a sample of 50, he can get to E = 3 with 69% confidence. Obviously, he'll have to sample some number more than 50. But we don't know, yet, a simple way to find that number. Before we describe a simple way, however, let's go through one more model experiment to be sure we have a good idea of this whole sampling process.

Lesson 10.
A. Objective — same as Lesson 9.
B. The experimental questions — What percent of the population is beans? How close can I get? How confident am I?
C. Issues, and some possible resolutions — [Materials and procedure.
A box of several thousand objects — two different kinds or colors of objects. I used dried beans and peas, less than two small packages in all, which just filled a one-quart container and, I estimated, approached 4000 objects. To mix the beans and peas thoroughly, I dumped them into a large container which I shook vigorously! Sampling was a bit less efficient than for the colored cubic centimeters; since beans and peas are different in size and shape, I couldn't count out the same number of objects for each sample without compromising randomness. I scooped out a level teaspoonful for each sample, getting generally 25 to 30 beans and peas. It became important, then, to have a calculator to find the "percent" of beans in each sample (and later, to find the totals of combined samples). Theoretically, one should take successive samples with replacement to meet a condition of randomness. For such a large population, it shouldn't make a noticeable difference to sample without replacement, however. For the class that raises the issue, and if you have time, it might be worthwhile to try both methods to compare results. The greatly reduced copies of Worksheets included here illustrate my results. Use full-size Worksheets with the class! The particular combinations used to generate larger samples are, of course, not important.]

Here are several thousand beans and peas. We could use them as models of air filters, with beans (or peas) the defective ones. Or as high school students, with beans (or peas) students who know "Dunk-em" Smith. Or who watch MTV more than one hour per week. In any case, we want to take samples so we can predict the percent of the entire population which is beans. On your Worksheet, write the Research Question, "What percent is beans?" And in parentheses, record your estimate (guess) right now based on looking at the top layer (these are well-mixed). We'll need to refer to your estimates later. With the teaspoon, each of you take one sample. Count the total, and the number of beans, and record on your Worksheet. Then we'll list all of your samples on one set of Worksheets. [Here are my Worksheets for 26 samples.]
(figure available in print form)

From these 26 samples, where N averages about 27, what would you be willing to predict about the percent of the total which is beans? Less than 50%? More than 90%? Probably in the 60's or high 50's, or low 70's? How confident are you? What error tolerance will you accept? Let's graph the data; perhaps it will be easier to see what's happening...
(figure available in print form)

After our experience with the cubes, we would expect that combining the small samples into larger ones would give us a clearer picture and a narrower range. Let's do that here, combining four small samples into new samples averaging about 110 in size. [Here are my results.]
(figure available in print form)

And let's graph these on the same scale we had before.
(figure available in print form)

What are you willing to predict now? Let's combine again — combining groups of five of the second set of samples into a new set of 26 samples averaging about 550 in size. [Here are my results.]
(figure available in print form)
(figure available in print form)

And graphing as before, but with one modification since the percents cluster so tightly... Let's keep the same scale, but break each interval in half so we see each percent value.
(figure available in print form)

Now what are you willing to predict?
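[A class with computers can also mimic the scooping with a short simulation. The sketch below is illustrative only: the bean fraction of 0.68 and the three sample sizes are taken from my Worksheet results, and rand() stands in for the teaspoon.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double true_p = 0.68;           /* population percent of beans (from my data) */
    int sizes[] = { 27, 110, 550 }; /* the three sample sizes in this Lesson */
    int s, i;

    srand(12345);                   /* fixed seed so a class can compare runs */
    for (s = 0; s < 3; s++)
    {
        int n = sizes[s], beans = 0;
        for (i = 0; i < n; i++)
            if ((double)rand() / RAND_MAX < true_p)
                beans++;
        printf("n = %3d: %5.1f%% beans\n", n, 100.0 * beans / (double)n);
    }
    return 0;
}

Repeated runs with different seeds show the same tightening of the spread that the combined Worksheet samples show.]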
As we did in Lesson 9, we'll make a Table summarizing these results in terms of N, the percent E, and the "confidence level." [Here are my results.]
(figure available in print form)

D. Observations and discussion to Objective — The Table on the previous page makes it clear, for the samples I took, that with a sample size of 27, 85% of the samples lie within E = 14 of the population (sample mean) percent of 68. Or, to change the point of view again, 85% of the time the population percent will be within E = 14 of whatever my sample percent is. When the sample size is increased to 110, 85% of the time the population percent will be within E = 4 of whatever my sample percent is! And when the sample size gets up to 550, 88% of the time I can predict within E = 2 of the population percent.

[You may want to view the "confidence level" in a more technically correct way from the error side. In the last case, for example, one would say that only 12% (100 - 88) of the time will I have a sample more than E = 2 off by chance. Or... the population percent is different from my sample value, say 67%, by more than ±2 only 12% of the time; therefore I have no reason to reject the hypothesis P = 67 ± 2 at the 12% level. But I think such a degree of technicality requires a far more sophisticated and formal background in probability, the normal curve, and hypothesis testing than is appropriate at the level of these lessons and than is necessary to establish the basic concepts as we have been doing. For a similar, intuitive, non-technical approach using box and whisker plots for 90% confidence instead of the histogram and Table techniques here, refer to Information from Samples by Landwehr et al. (Bibliography reference 6).]

Lesson 11.
A. Objective — students will be able to apply 2 given formulas to solve problems such as 1, 2, and 3 posed near the beginning of this Unit.
B. The experimental question — see Problem 1: What percent of the high school population watches MTV more than one hour per week? How large a sample do I need to answer the question within a given error tolerance with 95% confidence?
C. Issues, and some resolutions — With the bean/pea population as a model and with some sampling, we essentially worked toward an answer to the experimental question by trial and error. We discovered that all our combinations of samples reaching about 550 in size would give us a predicted percent within E = 3. So we would be willing to claim 95% confidence! Or, for N approximately 110, our sample gave us a value within E = 6, 96% of the time.

I have chosen to concentrate on the 95% confidence level because it is a very common level used by experimenters and pollsters. Other levels sometimes used are 90%, 99%, and 99.9%. [These correspond, of course, to α = 0.05, 0.10, 0.01, and 0.001 in formal statistics.]

Here is a formula we can use to answer our MTV question: We will predict that P, the percent of the population we want to know, is p, the percent of the population in our sample, plus or minus E, the error tolerance. In symbols, P is p ± E. Or P is in the interval from p - E to p + E. And we will make this prediction with 95% confidence. But how do we know what E is? Or the sample size, N?

E = 1.96 times the square root of p times q divided by N:
E = 1.96 √(pq/N)

E is the error tolerance. 1.96 is a factor mathematicians calculate from the 95% confidence level we said we'd use.
[It is, of course, z_{α/2} for α = 0.05. If we wanted only 90% confidence, then the factor would be 1.65; if we wanted 99% confidence, the factor would be 2.58.]
p = the percent of what we want in our sample.
q = 1 - p, or the percent of everything else in our sample.
N = the size of our sample, the number of people or answers or objects in our sample.

Let's use this formula with our bean/pea population. For my particular sample B, we had N = 26, p = 62%. Then we would predict
(figure available in print form)
P lies within 0.62 ± 1.96 times 0.095 = 0.62 ± 0.19.
P lies between 0.62 - 0.19 and 0.62 + 0.19, OR between 0.43 and 0.81, with 95% confidence.
When we took lots of samples of size about 27, we found P was about 68%, or 0.68. Is that between 0.43 and 0.81? Of course it is!

Let's try this for F. p = 75% or 0.75. Then q = 0.25.
(figure available in print form)
P is within 0.75 ± 0.16.
P is between 0.59 and 0.91 with 95% confidence.

Let's try it for sample A. p = 88% or 0.88. Then q = 0.12.
(figure available in print form)
P is within 0.88 ± 0.11.
P is between 0.77 and 0.99 with 95% confidence.
Did you say, "No, P was 0.68. That is NOT between 0.77 and 0.99."? Well, we didn't claim 100% confidence, did we?! 95% "confident" means 5% of the time we're wrong! This was one of those cases where we were wrong!

Let's try two more. Use samples of about N = 110.
For my sample 1, p = 71% or 0.71. Then q = 0.29. N = 113.
(figure available in print form)
P is within 0.71 ± 0.08.
P is between 0.63 and 0.79 with 95% confidence.
Notice, since N is larger than before, how much smaller E is.
For my sample 8, p = 65% or 0.65. Then q = 0.35. N = 111.
(figure available in print form)
P is within 0.65 ± 0.08.
P is between 0.57 and 0.73 with 95% confidence.

Let's go back to the beginning. We wanted to predict what percent of high school students watch more than one hour of MTV a week. We pretended the beans were those students and the beans and peas together were all high school students. Actually, we would conduct a survey, trying to pick students at random, wouldn't we? But how many students should we pick in our sample? There is a way to use the formula we've just worked with to answer the question. We'll stick with the beans/peas model.

P, we said, was within p ± E, where
(figure available in print form)
E = 1.96 √(pq/N).
Suppose we decide our error tolerance in advance. Then we can solve for N as long as we have a guess about p. [If your class can do the solution, do it. Otherwise simply present the following.]
(figure available in print form)
In words, N equals 1.96 divided by E, then squared or multiplied by itself, times p times q.

Remember when we guessed a percent, p, for beans way back at the beginning of Lesson 10? We'll use that number for p now. And let's agree we want E = 0.06 at the 95% confidence level.
(figure available in print form)
N = 256. A sample of 256 should do it.
Suppose we had guessed p = 0.70.
(figure available in print form)
N = 224. Close, but a little less than the 256 we had before.
Suppose we set E = 0.10, and guessed p = 0.60. Do you expect N to be larger or smaller? Why? Let's calculate N.
(figure available in print form)
N = 92. Did you expect N to be smaller because we made E larger?
Let's make E = 0.04, and keep our guess at p = 0.60. What do you expect will happen to N now?
(figure available in print form)
N = 576. Did you expect N to be larger because we made E smaller?

What do we do if we have no idea at all about what p might be? The safest solution is to use p = 0.50. That will give the largest value of N for a given error tolerance.
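[The same arithmetic is easy to program as well. A minimal C sketch, rounding to the nearest whole sample as the worked figures above do (a cautious design would round up):

#include <stdio.h>
#include <math.h>

/* N = (z/E)^2 * p * q: the sample size needed for error tolerance E
   at the confidence level whose factor is z (1.96 for 95%). */
int sample_size(double z, double E, double p)
{
    double q = 1.0 - p;
    return (int)floor((z / E) * (z / E) * p * q + 0.5);
}

int main(void)
{
    printf("E = 0.06, p = 0.60: N = %d\n", sample_size(1.96, 0.06, 0.60)); /* 256 */
    printf("E = 0.10, p = 0.60: N = %d\n", sample_size(1.96, 0.10, 0.60)); /*  92 */
    printf("E = 0.04, p = 0.60: N = %d\n", sample_size(1.96, 0.04, 0.60)); /* 576 */
    printf("E = 0.10, p = 0.50: N = %d\n", sample_size(1.96, 0.10, 0.50)); /*  96 */
    return 0;
}

The last line shows the "safest" p = 0.50 choice.]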
D. Observations and discussion to Objectives — [Don't try the foregoing without calculators! And you may have to teach calculator use for the specific formulas, too!]

Here are two more problems. Let's try them to see how to summarize what we've learned.

Recall problem 2. Sandra "Dunk-em" Smith may run for Student Government president. First, however, she wants an estimate of what percent of the students in her school know who she is. She'd like to have 95% confidence in a prediction within E = 0.10. How many students should be polled?
(figure available in print form)
"Dunk-em" thinks 75% know who she is. Her campaign manager says to use 50% because it will give a "safer", larger number of students to sample. Try it both ways!
(figure available in print form)
N = 72. "Dunk-em" decides to play it safe. Her campaign workers poll a sample of 96 students. 62 of them know who "Dunk-em" is. What is the prediction for the percent of all students?
(figure available in print form)
(figure available in print form)
(figure available in print form)
P is within 0.65 ± 0.10.
P is between 0.55 and 0.75.
"Dunk-em" is now 95% confident that between 55% and 75% of the students at her school know who she is. Now she can decide whether to run for Student Government president. What would you decide?

[One approach to Willie B. Ready's air filter problem (Problem 3 at the beginning): For 95% confidence and E = 0.03 and a guess of p = 0.20, we get
(figure available in print form)
N = 683 air filters. Willie figures it would take two months' worth of air filters to get that many. So he changes his E to 0.10.
(figure available in print form)
N = 61 air filters. He goes with it. He gets 5 defective ones. So he calculates
(figure available in print form)
P is within 0.08 ± 0.07.
P is between 0.01 and 0.15.
Willie has estimated that between 1% and 15% of the air filters are defective. What would you do? Change suppliers? Warn the supplier that you will change if there is no improvement? Ignore it?]

Statistics help us predict. But the important decisions we base on the predictions cannot be made by the statistics. Human beings make those decisions!

The series of Lessons is concluded. Hopefully, students have met the objectives. The base of understanding in real problems, in concrete experience, should prepare the students both for a clearer understanding of general statistical data and for the further study of statistics.
(figure available in print form)

1. Anderson, David R., Dennis J. Sweeney, and Thomas A. Williams. Statistics: Concepts and Applications. St. Paul, MN: West Publishing Company, 1986. Chapter 10 bears most directly on this Unit. It will lead to other Chapters. Readable text for college or senior high school. Wide variety of applications. Superb problems.
2. Devore, Jay, and Roxy Peck. Statistics: The Exploration and Analysis of Data. St. Paul, MN: West Publishing Company, 1986. Chapters 7 and 8 apply to this Unit. Slightly more "mathematical" than Anderson (1). Aimed at college. Good problems.
3. Edwards, Allen L. Statistical Analysis. New York: Holt, Rinehart and Winston, Inc., 1969. Chapters 9 and 10 apply to this Unit. Readable and without heavy theory. Good examples.
4. Fehr, Howard F., Lucas N.H. Bunt, and George Grossman. An Introduction to Sets, Probability and Hypothesis Testing. Boston, MA: D.C. Heath and Company, 1964. Chapters 4 through 6 apply to this Unit. Good examples are cited.
The theory is aimed at Grade 12 or college students.
5. Howell, David B. "The Statistics Sampler." Unit 4 of The Measurement of Adolescents, Curriculum Units by Fellows of the Yale-New Haven Teachers Institute, 1985, Volume VIII. Prerequisite for both teacher and students to this Unit.
6. Landwehr, James M., Jim Swift, and Ann E. Watkins. Information from Samples. A booklet "prepared for the American Statistical Association—National Council of Teachers of Mathematics Joint Committee on the Curriculum in Statistics and Probability." Development copyright by the authors, 1984. An important, readable, doable unit on essentially the same content as this Unit, but from a graphical approach. Wonderful material.
7. Mason, Robert D. Statistical Techniques in Business and Economics. Homewood, IL: Richard D. Irwin, Inc., 1978. Good applications. Chapter II takes a relatively informal approach to sampling and confidence limits.
8. McGhee, John W. Introductory Statistics. St. Paul, MN: West Publishing Company, 1985. Chapter 8 applies to this Unit. Consistent with (1) and (2).
9. Runyon, Richard P., and Audrey Haber. Fundamentals of Behavioral Statistics. Reading, MA: Addison-Wesley Publishing Company, 1984. Another (usually) readable, non-theoretical college text.
{"url":"http://yale.edu/ynhti/curriculum/units/1986/5/86.05.06.x.html","timestamp":"2014-04-18T05:31:35Z","content_type":null,"content_length":"33900","record_id":"<urn:uuid:4432a373-4bdf-493d-b711-e85906848664>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
South Kingstown, RI Math Tutor

Find a South Kingstown, RI math tutor:

"...I have 29 years' experience in the chemical industry. This has included specialty electronic chemical R&D and, more recently, pharmaceutical R&D. I have worked in specialty chemicals manufacturing in management and been responsible for the work of a number of chemists." 10 Subjects: including calculus, geometry, algebra 1, algebra 2

"...In addition, I am a classically trained chef! I offer cooking lessons, catering, and custom meal prep. Looking forward to working with you. As a personal chef, I offer cooking lessons, catering, and custom meal prep through my own business." 32 Subjects: including algebra 1, C++, computer science, logic

"...I assign homework designed to help the student. If the student or parent is not satisfied, I will refund your money for the lesson. While I have a 24-hour cancellation policy, I offer makeup lessons." 24 Subjects: including algebra 1, biology, English, geometry

"...I have been helping students achieve their educational goals for many years as a classroom teacher and as a tutor. My love of children and learning has led me into teaching. Along with my certifications, I am trained as a Montessori teacher, and use this as my primary teaching method." 16 Subjects: including prealgebra, SAT math, reading, algebra 1

"Hi, my name is Curtis. I have been teaching and tutoring mathematics for over twenty years. Starting in high school in Connecticut and continuing through my college days at Cornell, I have enjoyed helping others to better understand mathematics in order to improve their test-taking experiences." 23 Subjects: including prealgebra, ACT Math, discrete math, logic
{"url":"http://www.purplemath.com/South_Kingstown_RI_Math_tutors.php","timestamp":"2014-04-16T19:21:55Z","content_type":null,"content_length":"24092","record_id":"<urn:uuid:3e2f9961-52fc-40c7-81fa-d226bea5e6df>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
Section 2.10 - Theoretical Methods

The previous sections included transport methods for which there is a scientific or engineering consensus that they are possible to build, even if not built yet. For completeness, this section includes theoretical transport methods for which a consensus does not exist, for which there is no known method of implementation, or which even contradict established physics. They are sorted alphabetically, since there is no reliable way to rank or organize these methods. Although there is not a firm technical basis for these methods, to be listed here they need at least some theoretical support in the form of published papers or other documentation. Ideas which only appear in fictional works or have no theoretical support can be found in Appendix 1: Fictional Methods.

77 Alternate Spacetime

Alternate Names: Subspace, Hyperspace, Alternate Dimensions
Type: Theoretical
Description: Spacetime in relativity theory is the 4-dimensional environment of three physical dimensions and one time dimension. Travel in ordinary spacetime is limited to the speed of light as far as we know. This method uses the idea that there is some other spacetime which can be reached from ours. If it has different properties from ours, it could allow transport or communication faster or more efficiently. If an alternate spacetime is in relative motion to ours, which is not constrained by the speed of light, then rapid travel could be possible by translating to it, and then translating back at the destination. There are theories that our Universe actually consists of more than 4 dimensions, such as string or M-theory, but the other dimensions are compacted to the quantum scale, or unreachable. As of 2012, there is no evidence for these theories, although a considerable number of scientific papers have been written on the subject, and some searches for observable effects are underway.
Status: Theoretical as of 2012
Dröscher and Häuser, Space Propulsion Device Based on Heim's Quantum Theory, 2004. AIAA Paper 2004-3700. Assumes an extension of General Relativity using quantized higher-dimensional space.

78 Antigravity

Alternate Names:
Type: Theoretical
Description: Antigravity is the reduction of, or opposition to, the normal force of gravity, which is attractive under most conditions. One method of producing it would be with a negative mass. If such existed, the formula for gravitational force would produce a repulsion rather than an attraction. No material with negative mass is known to exist. By Einstein's mass-energy relation (E = mc^2), negative mass would also represent negative energy. Other methods of producing repulsion have been proposed, but suffer a similar lack of observational support, except for one item - Dark Energy. The Universe as a whole appears to be expanding at an accelerating rate. The cause of this is hypothesized to be a cosmological constant, a pressure that exists throughout the Universe tending to expand spacetime. Since there is no known way to change the pressure caused by Dark Energy, which is distributed evenly in all directions, it is not useful as a transport method.
Status: Theoretical as of 2012

79 Modified Newtonian Dynamics (MOND)

Alternate Names:
Type: Theoretical
Description: This method assumes some violation of Newton's laws of motion is possible: either an action without an equal and opposite reaction, which produces a reactionless thruster, or higher-order terms in the motion equations that would allow an unbalanced force.
While such formulas are easy to write, they do not have support from actual observations. A resonant extraction of Casimir forces from the quantum background has been proposed as a way to produce thrust. While the Casimir force is well observed, using it in a way that generates reactionless thrust is not.
Status: Theoretical as of 2012

80 Quantum Black Hole Engine

Alternate Names:
Type: Theoretical
Description: In theory, all black holes emit particles as if they were black bodies of a certain temperature. This is known as Hawking Radiation, after physicist Stephen Hawking, who first described it. The temperature varies inversely with the size of the event horizon, so smaller black holes are hotter and emit higher-energy particles. The emission of a black body changes as the 4th power of temperature, while the area of an event horizon changes more slowly. Therefore small black holes emit more energy, and ones small enough to emit useful amounts of energy are themselves particle-sized, and thus called Quantum Black Holes. If new matter is added to the black hole at a rate sufficient to offset the emission losses, effectively 100% conversion of matter to energy can be achieved. The particles or gamma rays thus emitted are directed for thrust or used for power generation. Black holes, quantum or otherwise, are very massive, so their utility for propulsion is questionable for anything smaller than an asteroid-sized spaceship. Although there is a good amount of theory about quantum black holes, there is not a consensus that they actually exist beyond theory. The difficulty is in how to form sub-stellar-mass holes. Holes can be manipulated by adding a net charge and then using electrostatic or magnetic fields.
Status: Stellar and larger mass highly condensed objects have been observed, and are presumed to be black holes. Quantum-mass black holes have not been observed.
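The scaling argument above can be made concrete. The following sketch evaluates the standard Hawking temperature and power formulas for an uncharged, non-rotating black hole; the sample masses are purely illustrative.

#include <stdio.h>
#include <math.h>

static const double HBAR = 1.054571817e-34;   /* reduced Planck constant, J*s */
static const double C0   = 2.99792458e8;      /* speed of light, m/s */
static const double G0   = 6.67430e-11;       /* gravitational constant, m^3 kg^-1 s^-2 */
static const double KB   = 1.380649e-23;      /* Boltzmann constant, J/K */
static const double PI   = 3.141592653589793;

/* T = h-bar c^3 / (8 pi G M k_B): temperature falls as mass grows */
double hawking_T(double M) { return HBAR * pow(C0, 3) / (8.0 * PI * G0 * M * KB); }

/* P = h-bar c^6 / (15360 pi G^2 M^2): radiated power rises sharply as mass shrinks */
double hawking_P(double M) { return HBAR * pow(C0, 6) / (15360.0 * PI * G0 * G0 * M * M); }

int main(void)
{
    double masses[] = { 1.0e12, 1.0e9, 1.0e6 };   /* kg, illustrative only */
    for (int i = 0; i < 3; i++)
        printf("M = %.0e kg:  T = %.2e K,  P = %.2e W\n",
               masses[i], hawking_T(masses[i]), hawking_P(masses[i]));
    return 0;
}

A 10^9 kg hole, for instance, radiates on the order of 10^14 W, which illustrates why engine-scale emission only appears at very small masses while the hole itself remains enormously heavy for its size.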
81 Tachyons

Alternate Names:
Type: Theoretical
Description: Tachyons are hypothesized particles which travel faster than the speed of light. They would either allow a higher exhaust velocity for an engine, or, by some sort of conversion, possibly by quantum tunneling, convert an entire vehicle into tachyons so it would travel faster than light. Some searches for tachyons have been made, but they have not been observed in nature.
Status: Theoretical as of 2012.

82 Warped Space

Alternate Names:
Type: Theoretical
Description: Travel through spacetime is restricted by current theory to the speed of light. Spacetime itself is not limited in this way; in fact, current Inflationary Cosmology theory assumes a faster-than-light expansion in the early history of the Universe. This method assumes that spacetime itself is distorted locally around a vehicle in such a way that apparent travel to outside observers is faster than to internal passengers. An example of such is the Alcubierre drive, but current theory does not indicate how to actually generate such a space warp. Note that in General Relativity theory, gravity is caused by a warp of spacetime, and passengers appear to themselves not to accelerate, while outsiders see them in accelerated motion. The difficulty is that gravitational fields are not mobile, being attached to the large masses which cause them, so their utility for space transport is limited to methods like Gravity Assist, where you can make use of the difference in motion between two large objects.
Status: Theoretical as of 2012.

83 Wormholes

Alternate Names:
Type: Theoretical
Description: A wormhole is a hypothetical region of spacetime shaped to connect two distant points. If the connection is shorter than the non-wormhole path, traversing it would save time. Creation of wormholes in theoretical papers usually involves black holes or Exotic Matter, matter with unusual properties such as negative mass. While many such papers have been written about wormholes, it is not known whether the theory matches reality. Therefore we do not know whether wormholes are possible or what their properties might be.
Status: Theoretical as of 2012.
{"url":"http://en.m.wikibooks.org/wiki/Space_Transport_and_Engineering_Methods/Theoretical_Methods","timestamp":"2014-04-18T15:42:53Z","content_type":null,"content_length":"24163","record_id":"<urn:uuid:9a65420f-2f99-48ae-9680-4e9916a5d6c2>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear mappings and vector spaces proof

February 16th 2011, 11:33 AM
Linear mappings and vector spaces proof
Let $T:U\rightarrow{V}$ be a linear mapping from vector spaces U to V, and let X be a subspace of U. Show that $T(X)=\{v\in V \mid v=Tx\ \mbox{for some}\ x\in X\}$ is a subspace of V.
I can understand why this is, it seems pretty trivial, but I am not sure how you would go about proving it. Thanks for any help.

February 16th 2011, 11:38 AM
I presume you mean $T(X)= \{v \mid v= Tx \mbox{ for some } x\in X\}$. You don't "prove it" - that is the definition of T(X).

February 16th 2011, 11:40 AM
I think you meant $T(X)=\{v\in V \mid v=Tx\ \mbox{for some}\ x\in X\},$ right? (Note the capital X on the LHS.) I think I'd need a bit more background in order to understand your problem, because this is probably how I would define the set on the LHS. How does your book or professor define the LHS?

February 16th 2011, 11:57 AM
Oh sorry, I was meant to prove that $T(X)$ is a subspace of V. Sorry about that.

February 16th 2011, 12:00 PM
So, you could just show that it's closed under scalar multiplication and vector addition, and that the candidate subspace T(X) contains the zero vector. Then you're done, right? So how does this look for you?

February 16th 2011, 12:06 PM
Yeah, but as T is defined as a linear mapping then these are satisfied? And as X is a subspace it contains 0, which maps to 0, so I am done? Or am I missing something? Thanks for the help.

February 16th 2011, 12:16 PM
Well, I think you should write out the equations that show this. I agree that the linearity of T gets the job done, but that's precisely what you're asked to show. So, how would you write it out?

February 17th 2011, 05:08 AM
Suppose u and v are in T(X). That is, there exists x in X such that u = T(x), and there exists y in X such that v = T(y). Now u + v = T(x) + T(y) = T(x + y), and x + y is in X since X is a subspace.
Suppose u is in T(X) and a is any scalar. Then there exists x in X such that u = T(x). Now au = aT(x) = T(ax), and ax is in X.
Finally, 0 is in X (since X is a subspace), and T(0) = 0, so 0 is in T(X).
Do you see how those prove that u + v and au are in T(X)?
{"url":"http://mathhelpforum.com/advanced-algebra/171503-linear-mappings-vector-spaces-proof-print.html","timestamp":"2014-04-23T18:05:10Z","content_type":null,"content_length":"8220","record_id":"<urn:uuid:079bb8ad-3749-4b96-bb47-2972d0e1d06b>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
Play The Game

The 4D MBox game, just like the 4D game, is a very simple game of chance. The two games are in fact almost identical except for one thing: with the 4D MBox game, the sequence of the 4D Number is immaterial. Therefore, 1234 will match 4321, and 1080 will match 0018.

To play, follow the steps below:
1. Pick a 4-digit number, or 4D Number, from the 10,000 possible numbers of 0000 to 9999, e.g. 0138, 1012, 4318, 7766 or 9991.
2. Choose whether to play the Big Forecast or the Small Forecast or both.
3. Decide on the bet amount for each Forecast chosen. The minimum bet for each Forecast is RM1.

You win when one or more of the permutations of the 4D Number you picked match one or more of the winning numbers drawn. For example, if you picked 1234 and 4321 is drawn as one of the winning numbers, you win: the sequence of the 4D Number is immaterial, and 4321 is one of the 24 valid permutations of 1234. See the section How to Win with 4D MBox Game for more details.

On the right is an example of a 4D MBox ticket. The notation M6 beside the first 4D Number indicates that an MBox Bet of 6 permutations has been chosen. Similarly, the notation M24 beside the second 4D Number indicates that an MBox Bet of 24 permutations has been chosen.

Each draw, as shown in the actual draw results for 5 Apr 2011 below, has 23 winning 4D Numbers. Of these 23 winning numbers drawn,
• 6652 is the winning number for the 1st Prize
• 8070 is the winning number for the 2nd Prize
• 4509 is the winning number for the 3rd Prize
• The 10 numbers in red are the winning numbers for the 10 Special Prizes, and
• The 10 numbers in yellow are the winning numbers for the 10 Consolation Prizes

These 5 different categories of Prizes carry different cash prizes. See Prize Money of 4D MBox Game below for more details. Please note that in all draws, the same set of Draw Results is used by all 3 games, namely the 4D game, 4D MBox game and 4D Jackpot game.

As earlier explained, when playing the 4D MBox game, you can choose to play the Big Forecast or the Small Forecast.

Big Forecast - Easier to Win but Pays Lower Prizes
The Big Forecast gives you a greater chance to win prizes, as there are a total of 23 winning numbers to match with and 5 categories of prizes to win from. The table below explains how one can win each of the 5 categories of prizes available for the Big Forecast.

Prize Category / How To Win
1st Prize: The 4D Number picked must be one of the permutations of the winning number drawn for the 1st Prize
2nd Prize: The 4D Number picked must be one of the permutations of the winning number drawn for the 2nd Prize
3rd Prize: The 4D Number picked must be one of the permutations of the winning number drawn for the 3rd Prize
Special Prize: The 4D Number picked must be one of the permutations of any one of the ten winning numbers drawn for Special Prizes
Consolation Prize: The 4D Number picked must be one of the permutations of any one of the ten winning numbers drawn for Consolation Prizes

Small Forecast - More Difficult to Win but Pays Higher Prizes
The Small Forecast gives you a lesser chance to win prizes, as there are only 3 winning numbers to match with and 3 categories of prizes to win from. But, for the same bet amount, it pays higher prizes compared to the Big Forecast. The table below explains how one can win each of the 3 categories of prizes available for the Small Forecast.
Prize Category / How To Win
1st Prize: The 4D Number picked must be one of the permutations of the winning number drawn for the 1st Prize
2nd Prize: The 4D Number picked must be one of the permutations of the winning number drawn for the 2nd Prize
3rd Prize: The 4D Number picked must be one of the permutations of the winning number drawn for the 3rd Prize

As shown in the tables below, the prizes of the 4D MBox game depend not only on the Forecast played and the category won; they also depend on the winning 4D Numbers and how many permutations they each have.

Prize Money for Big Forecast
(columns are MBox 24, MBox 12, MBox 6 and MBox 4, i.e. 24, 12, 6 and 4 permutations)

Prize Category    | MBox 24 | MBox 12 | MBox 6 | MBox 4
1st Prize         | RM105   | RM209   | RM417  | RM625
2nd Prize         | RM42    | RM84    | RM167  | RM250
3rd Prize         | RM21    | RM42    | RM84   | RM125
Special Prize     | RM8     | RM15    | RM30   | RM45
Consolation Prize | RM3     | RM5     | RM10   | RM15

Prize Money for Small Forecast
(columns are MBox 24, MBox 12, MBox 6 and MBox 4, i.e. 24, 12, 6 and 4 permutations)

Prize Category | MBox 24 | MBox 12 | MBox 6 | MBox 4
1st Prize      | RM146   | RM292   | RM584  | RM875
2nd Prize      | RM84    | RM167   | RM334  | RM500
3rd Prize      | RM42    | RM84    | RM167  | RM250
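The four bet sizes follow from elementary counting: the number of distinct orderings of a 4-digit number is 4! divided by the factorial of each digit's repeat count, giving 24, 12, 6 or 4 for the M24, M12, M6 and M4 bets. A short illustrative sketch (a number with four identical digits has only one ordering; whether such numbers are accepted for MBox play is not stated here):

#include <stdio.h>

/* Distinct permutations of a 4-digit string: 24 divided by the
   product of factorials of each digit's repeat count. */
int mbox_permutations(const char *num)
{
    int count[10] = {0};
    int fact[5] = {1, 1, 2, 6, 24};
    int perms = 24;   /* 4! */
    int i, d;

    for (i = 0; i < 4; i++)
        count[num[i] - '0']++;
    for (d = 0; d < 10; d++)
        perms /= fact[count[d]];
    return perms;
}

int main(void)
{
    const char *examples[] = { "1234", "1233", "1221", "1222" };
    for (int i = 0; i < 4; i++)
        printf("%s -> M%d\n", examples[i], mbox_permutations(examples[i]));
    return 0;
}

This prints M24, M12, M6 and M4 for the four examples, matching the columns of the prize tables above.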
{"url":"http://www.magnumit.com/magnum4d/modules/our-games-4d-mbox.aspx","timestamp":"2014-04-19T14:31:44Z","content_type":null,"content_length":"45959","record_id":"<urn:uuid:2e3a2089-7a4f-4920-b7d0-dd1cc2a70bbf>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
Trigonometry help.

May 7th 2008, 07:35 AM
Trigonometry help.
These are not really H.W. questions, but I have an exam coming very soon and I have some problems:
i) Express as a single trigonometric function: $sin\,x\,cos\,x\,sec^2x$. (I keep on going in circles on this one.)
ii) Square B has a diagonal 2x. Square C has twice the area of square B. Find, in terms of x, the perimeter of square C.
Help would be much appreciated.

May 7th 2008, 08:01 AM
i) $sin\,x\, cos\,x\,sec^2x = \frac{sin\,x\,cos\,x}{cos^2\,x} = \left(\frac{sin\,x}{cos\,x}\right) \left( \frac{ {cos\,x}}{ cos\,x} \right)=tan \,x$
ii) A square has 4 equal sides. The diagonal cuts the square into two isosceles, right-angled triangles. Let the side of Square B = a. By Pythagoras's theorem,
$a^2 + a^2 = (2x)^2$
$2a^2 = 4x^2$
$a^2 = 2x^2$
$a = \sqrt{2}x$
Now Square C has twice the area of Square B.
Area of Square B = $a^2$
Area of Square C = $2a^2$
From the working above, we know $2a^2 = 4x^2$.
Therefore the area of Square C is (Side C)^2 = $4x^2$, where Side C is a side of Square C.
$Side \,C = \sqrt {4x^2} = 2x$
Perimeter = 4(Side C) = 4(2x) = 8x.
Note: Thank you masters for pointing out that I hadn't completed the question. It's very late in my time zone, and I misread the question.

May 7th 2008, 08:12 AM
If the area of square $C = 4x^2$, then each side = 2x. Therefore, the perimeter would be 4(2x) = 8x.

May 7th 2008, 09:18 AM
Wow, thanks a lot for the quick reply. I totally forgot that.
{"url":"http://mathhelpforum.com/trigonometry/37522-trigonometry-help-print.html","timestamp":"2014-04-18T07:00:57Z","content_type":null,"content_length":"7391","record_id":"<urn:uuid:a4944aba-ad52-4b77-892c-d4e7ad2c413e>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
newbie programmer - needs help bad.

10-17-2004 #1 Registered User, Join Date Oct 2004

Ok, I am an MIS major and am taking my first programming class. This is a "simple" program that we have been assigned, and I cannot afford to lose much more hair. Hopefully it is OK that I post all of this, but these are the instructions and the program that I have written so far. Please, any help is great!!

Write a program in C that will use a sentinel-controlled loop to allow the user to input as many single-digit integers (0, 1, 2, 3, 4, 5, 6, 7, 8, or 9) as wished. When the user enters -1 to end input, the program will print the smallest digit entered (other than -1), the largest digit entered, and the sum of the digits entered (other than -1). The screen should look something like the following:

Enter a one digit integer (0,1,2,3,...,8,9 or -1 to quit): 3
Enter a one digit integer (0,1,2,3,...,8,9 or -1 to quit): 6
Enter a one digit integer (0,1,2,3,...,8,9 or -1 to quit): 5
Enter a one digit integer (0,1,2,3,...,8,9 or -1 to quit): 7
Enter a one digit integer (0,1,2,3,...,8,9 or -1 to quit): -1
The smallest digit entered was : 3
The largest digit entered was : 7
The total of the digits entered was: 21

Enter a one digit integer (0,1,2,3,...,8,9 or -1 to quit): 6
Enter a one digit integer (0,1,2,3,...,8,9 or -1 to quit): 6
Enter a one digit integer (0,1,2,3,...,8,9 or -1 to quit): 6
Enter a one digit integer (0,1,2,3,...,8,9 or -1 to quit): 6
Enter a one digit integer (0,1,2,3,...,8,9 or -1 to quit): 6
Enter a one digit integer (0,1,2,3,...,8,9 or -1 to quit): -1
The smallest digit entered was : 6
The largest digit entered was : 6
The total of the digits entered was: 30

Note: If the user enters -1 as the first entry (i.e. no digits from 0 to 9 were entered), the screen should look something like:
Enter a one digit integer (0,1,2,3,...,8,9 or -1 to quit): -1
No digits were entered!

This is what I have and where I am stuck:

#include <stdio.h>

int main()
{
    int total;
    int smallest;
    int largest;
    int counter;
    int integer;

    total = 0;
    counter = 0;

    printf("Enter a one digit integer (0,1,2,3,...,8,9 or -1 to quit): ");
    scanf("%d", &integer);

    while ( integer != -1) {
        total = total + integer;
        counter = counter + 1;
        printf("Enter a one digit integer (0,1,2,3,...,8,9 or -1 to quit): ");
        scanf("%d", &integer );
    }

    largest = integer;
    smallest = integer;
    if( integer > largest)
        largest = integer;
    if(integer < smallest)
        smallest = integer;

    while ( counter = -1){
        printf("No digits were entered\nGoodbye!!\n");
    }

    printf( "The smallest digit entered was %d\n", smallest );
    printf( "The largest digit entered was %d\n", largest );
    printf( "The total of the digits entered was %d\n", total );
    printf( "Goodbye!!\n" );

    return 0;
}

Shouldn't you initialize your high and low values so that when you try to compare them to something it actually works? For example, your "high number" should first be initialized to a low number, so the first time you compare against it, the assignment puts the first value entered as the new high number. The opposite is true for the low number. You also need to completely rethink your loop. You need all of the checks in a single loop. Not what you have.
Hope is the first step on the road to disappointment.

Thank you for your response. I am working on it now.
highest = 0
smallest = 9
??
I don't mean to be stupid, I'm just lost.

That would work, according to your objective. But you should place the comparisons elsewhere. And you should then check if the input number is within your range.
scanf can read almost any integer. And take a good look at

while ( counter = -1){
    printf("No digits were entered\nGoodbye!!\n");
}

Try to understand what would happen here.

Your if () statements need to be inside your input loop. Otherwise all they'll ever compare is the final integer entered. Also, your while ( counter = -1) will never be executed. Well, something will be executed. Just not what you are expecting. You really need to rethink your logic. Try flowcharting or pseudocoding it and walk it step by step.
Last edited by Scribbler; 10-18-2004 at 12:20 AM.

#include <stdio.h>

int main()
{
    int total;
    int smallest;
    int largest;
    int counter;
    int integer;

    total = 0;
    counter = 0;
    largest = 0;
    smallest = 9;

    printf("Enter a one digit integer (0,1,2,3,...,8,9 or -1 to quit): ");
    scanf("%d", &integer);

    while( integer >=0)
    {
        total = total + integer;
        counter = counter + 1;

        if (integer >= largest){
            integer = largest;
            if (integer <= smallest){
                integer = smallest;
            }
        }

        printf("Enter a one digit integer (0,1,2,3,...,8,9 or -1 to quit): ");
        scanf("%d", &integer );
    }

    printf( "The smallest digit entered was %d\n", smallest );
    printf( "The largest digit entered was %d\n", largest );
    printf( "The total of the digits entered was %d\n", total );
    printf( "Goodbye!!\n" );
    printf("No digits were entered\n\n\nGoodbye!!\n");

    return 0;
}

is this getting better?
#include <stdio.h> int main() int total; int smallest; int largest; int counter; int integer; total = 0; counter = 0; largest = 0; smallest = 9; printf("Enter a one digit integer (0,1,2,3,.,8,9 or -1 to quit): "); scanf("%d", &integer); while( integer >=0) total = total + integer; counter = counter + 1; if (integer > largest) largest = integer; if (integer < smallest) smallest = integer; printf("Enter a one digit integer (0,1,2,3,.,8,9 or -1 to quit): "); scanf("%d", &integer ); printf( "The smallest digit entered was %d\n", smallest ); printf( "The largest digit entered was %d\n", largest ); printf( "The total of the digits entered was %d\n", total ); printf( "Goodbye!!\n" ); printf("No digits were entered\n\n\nGoodbye!!\n"); return 0; nevermind...I copied wrong one no, I didn't...I'm delerious....I didn't know if I'd have to have and integer1 and integer2... 10-17-2004 #2 10-17-2004 #3 Registered User Join Date Oct 2004 10-17-2004 #4 Registered User Join Date Oct 2004 10-18-2004 #5 10-18-2004 #6 10-18-2004 #7 Registered User Join Date Oct 2004 10-18-2004 #8 10-18-2004 #9 Registered User Join Date Oct 2004 10-18-2004 #10 10-18-2004 #11 Registered User Join Date Oct 2004 10-18-2004 #12 10-18-2004 #13 Registered User Join Date Oct 2004 10-18-2004 #14 Registered User Join Date Oct 2004 10-18-2004 #15 Registered User Join Date Oct 2004
{"url":"http://cboard.cprogramming.com/c-programming/57954-newbie-programmer-needs-help-bad.html","timestamp":"2014-04-16T22:05:16Z","content_type":null,"content_length":"105906","record_id":"<urn:uuid:bbf19038-2910-4a89-8537-353e27e8f7c7>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Brevet US5249257 - Fuzzy regression data processing device DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Fuzzy Identification System Referring to FIG. 1, a fuzzy identification system according to the present invention is shown and comprises an input/output data input portion 1, a LSM (least square method) regression portion 2 using the method of least squares, and a fuzzy regression portion 3. Input and output data are applied to, input/output data input portion 1, and are regressed in the LSM regression portion 2 by the method of least square. The input and output data are also regressed in a fuzzy regression portion 3 in consideration of the regression formulas used in the LSM regression portion 2 as being the center, and the coefficients of the regression formulas used in the LSM regression portion 2 being used as the fuzzy coefficients, with all the data included. In other words, according to the fuzzy identification system of the present invention, a regression portion 2 is provided for identifying a system by regressing an input/output data using a regression formula based on the method of least squares, and a fuzzy regression portion 3 is provided for rendering coefficients of the regression formula as fuzzy values in such a manner as to include all the input/output data around said regression formula. Further details of the fuzzy identification system of the present invention will be described below. It is assumed that the data applied to input/output data input portion are: (y.sub.i, x.sub.i1, . . . , x.sub.in), wherein i=1, 2, . . . , N and it is also assumed that: X.sub.i =(x.sub.i1, . . . , x.sub.in). In the LSM regression portion 2, the linear regression model is given by the equation below: Y=α.sub.0 +α.sub.1 x.sub.1 +. . .+α.sub.n x.sub.n which is determined by the following steps. (1) A deviations between the estimated value Y.sub.i =Σα.sub.j x.sub.ij of the input data X.sub.i and the output data y.sub.i are obtained. (2) The deviations are squared and summed to obtain a sum S as expressed by the following equation: S=Σ(y.sub.i -Y.sub.i).sup.2 (3) The coefficient α.sub.i is so selected as to minimize the sum S. The regression formula of the regression model is shown in FIG. 2 by a solid line. Next, the fuzzy regression portion 3 is described. In the fuzzy regression portion 3, the system model is given by the equation below: Y=A.sub.0 +A.sub.1 x.sub.1 +. . .+A.sub.n x.sub.n in which A.sub.i is the fuzzy coefficient, as shown in FIG. 3, and has a triangle formation centered at coefficient α.sub.i of the regression formula obtained in the LSM regression portion 2, with the width of the left portion being C.sub.Li and the width of the right portion being C.sub.Ri. The widths C.sub.Li and C.sub.Ri are so determined as to include the input data X.sub.i with the estimated fuzzy Y.sub.i being greater than the degree h, and also to minimize the sum of the widths of the estimated fuzzy Y.sub.i. In other words, widths C.sub.Li and C.sub.Ri are so determined as to minimize the following equations: J.sub.L =ΣC.sub.Li x.sub.i J.sub.R =ΣC.sub.Ri x.sub.i. As has been described above, according to the fuzzy identification system of the present invention, in order to fuzzy regress the system from the vague input and output data, the center value of the fuzzy coefficient is regressed by the method of least square, and the fuzzy regression is also effected so as to include all the data within the width. 
Fuzzy Data Processing Device

Referring to FIG. 4, a fuzzy data processing device according to the present invention is shown. In FIG. 4, reference numeral 4 denotes a data input portion, 5 and 6 each denote a fuzzy regression model memory, 7 denotes a fitting degree calculator, and 8 denotes a fitting degree maximum detector.

The fuzzy regression model memories 5 and 6 store different fuzzy regression models, such as shown in FIGS. 5a and 5b, in which FIG. 5a shows a relationship between input x and output y and FIG. 5b shows a relationship between input z and output y. Here, the fuzzy regression model can be either a regression model whose coefficients have symmetrical fuzzy values, as in the prior art, or a regression model as described above in connection with FIGS. 1, 2 and 3.

The input/output data obtained under two different conditions differ as shown in FIGS. 5a and 5b, and therefore their reliability usually differs. For example, the data obtained under the condition of FIG. 5a show a relatively small deviation, and therefore the reliability of the obtained data is relatively high. On the contrary, the data obtained under the condition of FIG. 5b show a relatively large deviation, and therefore the reliability of the obtained data is relatively low. In the fuzzy regression, the reliability of the data is reflected in the width of the fuzzy coefficients.

Next, the steps for obtaining an estimated value y* are explained. Based on the input data X_0 and Z_0 applied to the data input portion 4, fuzzy values Yx and Yz are obtained from the fuzzy regression model memories 5 and 6, respectively. An example of the fuzzy values Yx and Yz is shown in FIG. 6. In the fitting degree calculator 7, the select-minimum calculation

Yx Λ Yz

is carried out, in which Λ indicates taking the smaller of Yx and Yz. Thus, the fitting degree calculator 7 produces the grade of the overlapping portion between the two fuzzy values Yx and Yz, as indicated by the shading in FIG. 6. Then, in the fitting degree maximum detector 8, the estimated value y* is obtained as the point at which the degree of fitting of the fuzzy values Yx and Yz is maximal, i.e., the grade peak point of the overlapping portion:

y* = the value of y maximizing Yx(y) Λ Yz(y).

In this manner the final result y* is obtained.

Since the two fuzzy models obtained under two different conditions are combined into one fuzzy model with the width of each taken into account, the reliability of the combined model is expressed by the width of its fuzzy values. And since the final result is obtained from the fitting degrees of the two fuzzy models, the combined fuzzy model lies within whichever of the two models has the higher reliability. Thus, the combined fuzzy model has a high reliability.

As has been described above, according to the fuzzy data processing device of the present invention, two fuzzy regression models obtained under two different conditions are used to obtain one fuzzy regression model such that the maximum of the fitting degree of the regression outputs from both models is found. Thus, the precision of the estimation can be improved while the reliability of the two models is taken into account.
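The following is a hedged sketch of the calculator 7 and detector 8 stages; the triangular shapes, centers and widths are invented for illustration, and only the pointwise-minimum and peak-search logic follows the text.

```python
# Sketch: select-minimum of two triangular fuzzy outputs, then the peak.
import numpy as np

def tri(y, center, left, right):
    """Triangular membership with one-sided widths."""
    up = 1 - (center - y) / left      # rising edge
    down = 1 - (y - center) / right   # falling edge
    return np.clip(np.where(y <= center, up, down), 0.0, 1.0)

y = np.linspace(0.0, 10.0, 2001)
Yx = tri(y, 4.0, 2.0, 2.0)        # fuzzy output of model memory 5
Yz = tri(y, 5.0, 2.5, 2.5)        # fuzzy output of model memory 6
fit = np.minimum(Yx, Yz)          # the Λ (select-minimum) operation
y_star = y[np.argmax(fit)]        # grade peak of the overlapping portion
h = fit.max()
print(f"y* = {y_star:.3f}, peak fitting degree h = {h:.3f}")
```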
Abnormal Fuzzy Data Detecting Device

Referring to FIG. 7, an abnormal fuzzy data detecting device according to the present invention is shown. Compared with the fuzzy data processing device shown in FIG. 4, the abnormal fuzzy data detecting device of FIG. 7 has a fitting degree comparator 9 instead of the fitting degree maximum detector 8.

From the input data X_0 and Z_0, estimated values Yx and Yz are obtained from the fuzzy regression model memories 5 and 6, respectively. Then, in the fitting degree calculator 7, the select-minimum calculation is carried out so as to produce a signal representing the lower of the two grades Yx and Yz for every y. In the fitting degree comparator 9, the peak point of the shaded portion is compared with a predetermined level. When the peak point is less than the predetermined level, it is determined that one of the two input data X_0 and Z_0 is abnormal.

The detection of the abnormal data is further explained below. Referring to FIG. 8, it is assumed that when input data X_0 and Z_0 are applied under two different, but normal, conditions, fuzzy regression outputs Yx and Yz are obtained. Since the input data X_0 and Z_0 are normal, the maximum h of the degree of fitting of the fuzzy values Yx and Yz will be greater than a predetermined level L, such as 0.2. This is because the fuzzy regression outputs Yx and Yz are located so close to each other that a high percentage of their triangular areas overlap.

It is also assumed that when input data X_0 and Z_0' are applied under two different conditions, with the input data Z_0' obtained under an abnormal condition, fuzzy regression outputs Yx and Yz' are obtained. Since the input data Z_0' are abnormal, the maximum h' of the degree of fitting of the fuzzy values Yx and Yz' will be less than the predetermined level L, such as 0.2. This is because the fuzzy regression outputs Yx and Yz' are located so far apart that only a low percentage of their triangular areas overlap.

The invention of FIG. 7 is applicable when one result is estimated on the basis of two different inputs obtained under different conditions. When one of the two inputs is abnormal, the abnormality can be detected by checking the degree of fitting of the regression results based on the two different inputs.
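The comparator 9 then reduces to a threshold test on the peak fitting degree. A tiny sketch follows; the 0.2 level is the example value from the text, and everything else is invented for illustration.

```python
# Sketch of the comparator stage: flag the input pair as abnormal when
# the peak fitting degree h falls below a preset level L.
def is_abnormal(peak_fit: float, level: float = 0.2) -> bool:
    """True when the two fuzzy outputs overlap too little to be trusted."""
    return peak_fit < level

print(is_abnormal(0.55))   # normal pair, as in the first FIG. 8 case
print(is_abnormal(0.07))   # abnormal pair: one input is suspect
```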
Although the present invention has been fully described in connection with the preferred embodiments thereof with reference to the accompanying drawings, it is to be noted that various changes and modifications are apparent to those skilled in the art. Such changes and modifications are to be understood as included within the scope of the present invention as defined by the appended claims unless they depart therefrom.

These and other objects and features of the present invention will become clear from the following description taken in conjunction with the preferred embodiments thereof with reference to the accompanying drawings, throughout which like parts are designated by like reference numerals, and in which:

FIG. 1 is a circuit diagram of a fuzzy identification system according to the present invention;
FIG. 2 is a graph showing a result of the regression carried out by the circuit of FIG. 1;
FIG. 3 is a graph showing a profile of a fuzzy coefficient of the regression formula used in the circuit of FIG. 1;
FIG. 4 is a circuit diagram of a fuzzy data processing device according to the present invention;
FIGS. 5a and 5b are graphs each showing a relationship between input and output data of the models employed in the circuit of FIG. 4;
FIG. 6 is a graph showing the manner of obtaining an estimated value as carried out by the circuit of FIG. 4;
FIG. 7 is a circuit diagram of an abnormal fuzzy data detecting device according to the present invention; and
FIG. 8 is a graph showing the manner of detecting an abnormal condition.

1. Field of the Invention

The present invention relates to a fuzzy identification system for identifying the relationship between input and output data of a system from indefinite input and output data, and also relates to a fuzzy data processing device for processing indefinite and deviating data using an identified model.

2. Description of the Prior Art

Conventionally, the identification of a system from indefinite input/output data is realized by a fuzzy linear regression model, such as disclosed in the article "Linear Regression Model by Fuzzy Function" by H. Tanaka et al. in the Japanese magazine "NIPPON KEIEI KOUGAKUSHI", vol. 25, no. 6, pp. 162-174, 1982, or in the article "Linear Regression Analysis with Fuzzy Model" by H. Tanaka et al. in IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-12, no. 6, November/December 1982. According to these articles, the deviations between the observed values and the estimated values in a system are considered to be due not to measurement errors, but to the indefiniteness of the system structure. Therefore, the system structure is represented by fuzzy linear functions whose parameters are given by fuzzy sets.

For example, if the given data are expressed as:

(y_i, x_i1, ..., x_in), where i = 1, 2, ..., N,

provided that y_i represents a value of the ith output variable, the fuzzy linear regression model can be formulated by the following steps.

(1) The fuzzy linear model is defined by the following equation:

Y_i = A_0 + A_1 x_i1 + ... + A_n x_in

provided that each fuzzy coefficient A_i has a triangular profile which is symmetrical with respect to its center line and has a base of width C_i.

(2) The fuzzy coefficients A_i are so determined that the given data (y_i, x_i1, ..., x_in) are included within the estimated fuzzy value Y_i with a degree greater than h.

(3) The fuzzy coefficients A_i are so determined that the sum of the widths of the estimated fuzzy values Y_i is minimized.

As described above, according to the prior art, the fuzzy linear regression model is formulated such that the fuzzy linear function having a fitting degree greater than a certain level with the minimum deviation is selected.

Next, a prior art example for estimating output values with respect to inputs, using regression models under two different conditions, is explained. In this example, the regression models use multiple regression based on the method of least squares, with the following two regression formulas for the two different conditions, respectively:

Y = b_0 + b_1 x_1 + ... + b_n x_n    (1)
Y = c_0 + c_1 z_1 + ... + c_m z_m    (2)

When the inputs obtained under the two different conditions are (x_1^0, ..., x_n^0) and (z_1^0, ..., z_m^0), respectively, the estimated outputs Yx and Yz are obtained by substituting these inputs into formulas (1) and (2).
From these two estimated outputs Yx and Yz, the final result Y* is obtained by taking the average of the two:

Y* = (Yx + Yz)/2    (3)

As understood from the foregoing, according to the prior art, an average is taken to obtain one result from two values estimated under two different conditions.

However, in the prior art fuzzy linear regression model, the deviations between the observed and estimated values are assumed to depend on the indefiniteness of the system structure, and thus the system coefficients are assumed to be fuzzy coefficients. The fuzzy coefficients are determined so as to have a fitting degree greater than a certain level with the minimum deviation. Therefore, the center of each fuzzy coefficient is always the center of the width of the fuzzy data and is not related to the given data, so that when the system is positively fluctuating, the information carried in each datum may be lost.

Also, according to the prior art for estimating output values with respect to inputs using multiple regression models under two different conditions, the average of the two estimated values is used as the final result. The reliability of the two estimated values is not always the same, but it is forcibly assumed to be the same when the average is taken. In other words, with the averaging method, no consideration is given to the reliability of each of the two estimated values. This results in the disadvantage that, when one of the two estimated values is abnormal while the other is normal, the average of the two will contain abnormal information; furthermore, it is not possible to detect the presence of such abnormal data.

The object of the present invention is therefore to provide an improved fuzzy identification system which can identify a system without losing the information contained in the data even when the system is fluctuating. It is also an important object of the present invention to provide a fuzzy data processing device which can take into account the reliability of each of two different sets of inputs obtained under two different conditions. It is another object of the present invention to provide a fuzzy data processing device of the above described type which can detect abnormal input data.

In order to achieve the aforementioned objects, a fuzzy identification system according to the present invention comprises: a regression portion for identifying a system by regressing input/output data using a regression formula based on the method of least squares; and a fuzzy regression portion for rendering the coefficients of the regression formula as fuzzy values in such a manner as to include all the input/output data around the regression formula.
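The inclusion-at-level-h and minimum-total-width requirements (steps (2) and (3) of the prior-art formulation above, and with one-sided widths the fuzzy regression portion of the invention as well) can be posed as a small linear program. The sketch below is my illustration of the symmetric-spread variant on invented data; scipy's linprog solves the LP, and h is the inclusion level of step (2).

```python
# Sketch: symmetric fuzzy linear regression as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(20), rng.uniform(0, 5, 20)])  # rows (1, x_i)
y = 1.5 + 0.8 * X[:, 1] + rng.normal(0, 0.3, 20)
h = 0.5
n = X.shape[1]

# unknowns: centers a_0..a_{n-1}, then spreads c_0..c_{n-1} (c >= 0);
# minimize the total spread of the estimated fuzzy outputs.
cost = np.concatenate([np.zeros(n), np.abs(X).sum(axis=0)])
# inclusion at level h:  a.x_i - (1-h) c.|x_i| <= y_i <= a.x_i + (1-h) c.|x_i|
A_ub = np.vstack([np.hstack([-X, -(1 - h) * np.abs(X)]),
                  np.hstack([ X, -(1 - h) * np.abs(X)])])
b_ub = np.concatenate([-y, y])
bounds = [(None, None)] * n + [(0, None)] * n
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
a_hat, c_hat = res.x[:n], res.x[n:]
print("centers:", a_hat)
print("spreads:", c_hat)
```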
Furthermore, a fuzzy data processing device according to the present invention comprises: a data input portion for inputting data; a first fuzzy regression model memory means for storing a first fuzzy regression model in which all coefficients of the fuzzy regression model are denoted as fuzzy values, and with all the data obtained under a first condition being included in the first fuzzy regression model; a second fuzzy regression model memory means for storing a second fuzzy regression model in which all coefficients of the fuzzy regression model are denoted as fuzzy values, and with all the data obtained under a second condition being included in the second fuzzy regression model; a fitting degree calculating means for calculating a degree of fitting of the first fuzzy regression model to the second fuzzy regression model; and a maximum detector for detecting the maximum of the calculated fitting degree, whereby an estimated value is obtained. Moreover, an abnormal fuzzy data detecting device according to the present invention comprises: a data input portion for inputting data; a first fuzzy regression model memory means for storing a first fuzzy regression model in which all coefficients of the fuzzy regression model are denoted as fuzzy values, and with all the data obtained under a first condition being included in the first fuzzy regression model; a second fuzzy regression model memory means for storing a second fuzzy regression model in which all coefficients of the fuzzy regression model are denoted as fuzzy values, and with all the data obtained under a second condition being included in the second fuzzy regression model; a fitting degree calculating means for calculating a degree of fitting of the first fuzzy regression model to the second fuzzy regression model; and a fitting degree comparator for comparing the calculated fitting degree with a predetermined degree to detect an abnormal condition when the calculated fitting degree is smaller than the predetermined degree.
{"url":"http://www.google.fr/patents/US5249257?hl=fr","timestamp":"2014-04-17T04:42:38Z","content_type":null,"content_length":"71579","record_id":"<urn:uuid:5bf97d81-8ff3-4315-8031-8e0548aa7b5a>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
Existence result for semilinear elliptic systems involving critical exponents

In this paper we deal with the existence of a positive solution for a class of semilinear systems of multi-singular elliptic equations which involve Sobolev critical exponents. In fact, by analytic techniques and variational methods, we prove that there exists at least one positive solution for the system.

MSC: 35J60, 35B33.

Keywords: semilinear elliptic system; nontrivial solution; critical exponent; variational method

1 Introduction

We consider the following elliptic system:

where ( ) is a smooth bounded domain such that , , , are different points, , , , , , , . We work in the product space , where the space is the completion of with respect to the norm .

In recent years many publications [1-3] concerning semilinear elliptic equations involving singular points and the critical Sobolev exponent have appeared. Particularly in the last decade or so, many authors have used variational methods and analytic techniques to study the existence of positive solutions of systems of the form of (1.1) or its variations; see, for example, [4-8].

Before stating the main result, we clarify some terminology. Since our method is variational in nature, we need to define the energy functional of (1.1) on . Then belongs to . A pair of functions is said to be a solution of (1.1) if , and for all , we have

Standard elliptic arguments show that

The following assumptions are needed:

( ) , where is the first eigenvalue of L, and , are the eigenvalues of the matrix . The quadratic form is positive definite and satisfies

Our main results are as follows.

Theorem 1.1. Suppose ( ) holds. Then for any solution of problem (1.1), there exists a positive constant such that

Theorem 1.2. Suppose ( ) holds. Then for any positive solution of problem (1.1), there exists a positive constant such that and

Theorem 1.3. Suppose ( ), ( ) hold. Then problem (1.1) has a positive solution.

2 Preliminaries

Using the Young inequality, the following best constant is well defined:

where is the completion of with respect to the norm . We infer that is attained in by the functions

For all , , , , by the Young and Hardy-Sobolev inequalities, the following constant is well defined on :

where , , satisfies and , , for all small. Then for any , by [9] we have the following estimates:

3 Asymptotic behavior of solutions

Proof of Theorem 1.1. Suppose is a nontrivial solution to problem (1.1). For all define

It is not difficult to verify that and satisfy

Let be small enough that and for . Also, let be a cut-off function. Set

where . Multiplying the first equation of (3.1) by and the second one by respectively, and integrating, we have

By the Cauchy inequality and the Young inequality, we get

Using the Caffarelli-Kohn-Nirenberg inequality [10], we infer that

Then . Now, from the Hölder inequality, we deduce that

In the sequel, we have

So, from (3.4) to (3.8) it follows that

Take and to be a constant near zero. Letting , we infer that

and so

Suppose is sufficiently small that and is a cut-off function with the properties and in . Then we have the following results:

where we used the Hölder inequality. From (3.9) in combination with (3.11), it follows that

Denote , and , , where , and . Using (3.12) recursively, we get as . Note that the infinite sums on the right-hand side converge; then we obtain that , and in particular . Thus,

where . The proof is complete.□

Proof of Theorem 1.2. Suppose is a positive solution to problem (1.1).
For all , set

It is easy to verify that

Combining (3.13) with (3.14), we get

Therefore, by the maximum principle in , we obtain

Taking , we conclude for all . A similar result also holds for . Therefore, we have

4 Local -condition and the existence of positive solutions

We first establish a compactness result.

Lemma 4.1. Suppose that ( ) holds. Then J satisfies the -condition for all

Proof. Suppose that satisfies and . The standard argument shows that is bounded in . Therefore, is a solution to (1.1). Then by the concentration-compactness principle [11-13], and up to a subsequence, there exist an at most countable set , a set of different points , nonnegative real numbers , , , and , , ( ) such that the following convergence holds in the sense of measures:

By the Sobolev inequalities [10], we have

We claim that is finite, and for any , or . In fact, let be small enough for any , and for , . Let be a smooth cut-off function centered at such that , for , for and . Then

Then we have

By the Sobolev inequality, ; and then we deduce that or , which implies that is finite.

Now, we consider the possibility of concentration at the points ( ). Take small enough that for all and for and , . Let be a smooth cut-off function centered at such that , for and . Then

Thus, we have

From (4.1) and (4.2) we derive that , , and then either or . On the other hand, from the above arguments, we conclude that

If for all and , then , which contradicts the assumption that . On the other hand, if there exists an such that or there exists a with , then we infer that

which contradicts our assumptions. Hence, , as in .□

First, under the assumptions ( ), ( ), we introduce the following notation:

where is a minimal point of , and therefore a root of the equation

Lemma 4.2. Suppose that ( ) holds. Then we have (ii) has the minimizers , , where are the extremal functions of defined as in (2.2).

Proof. The argument is similar to that of [6].□

Lemma 4.3. Under the assumptions of ( ), we have

Proof. Suppose ( ) holds. Define the function

Note that and as t is close to 0. Thus, is attained at some finite with . Furthermore, , where and are positive constants independent of ε. By using (1.2), we have

Note that

From (4.3), Lemma 4.2 and Lemma 4.3, it follows that

Proof of Theorem 1.3. Set , where

Suppose that ( ) holds. For all , from the Young and Hardy-Sobolev inequalities, it follows that

and there exists a small constant such that

Since as , there exists such that and . By the mountain-pass theorem [14], there exists a sequence such that and , as . From Lemma 4.2 it follows that

By Lemma 4.1 there exists a subsequence of , still denoted by , such that strongly in . Thus, we get a critical point of J satisfying (1.1), and c is a critical value. Set . Replacing u, ν respectively with and on the right-hand side of (1.1) and repeating the above process, we can get a nonnegative nontrivial solution of (1.1). If , we get by (1.1) and the assumption . Similarly, if , we also have . Therefore, . From the maximum principle, it follows that in Ω.□

Authors' contributions

Each of the authors, SK, MF and OKK, contributed to each part of this work equally and read and approved the final version of the manuscript.
{"url":"http://www.boundaryvalueproblems.com/content/2012/1/119?fmt_view=mobile","timestamp":"2014-04-20T10:56:00Z","content_type":null,"content_length":"263088","record_id":"<urn:uuid:c0846c12-c6c6-4c60-b734-0a084dd641bb>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Interpolation FP1 Formula

Re: Linear Interpolation FP1 Formula
Yes, but there are lots of roads that lead to our school (and it has different buildings which you have to walk around on the street to get to). It was 5 minutes walking distance away from our

Re: Linear Interpolation FP1 Formula
How long does the entire walk take?

Re: Linear Interpolation FP1 Formula
My route is about a 35-minute walk, not sure how long hers is.

Re: Linear Interpolation FP1 Formula
I just sent her a reply. She said that she just started walking to school 6 months ago... I'm pretty sure she means she is doing it to lose weight but I don't think I should bring it up, so I won't ask 'why'...

Re: Linear Interpolation FP1 Formula
Could have been by accident that she used the same route as you. What else did she say?

Re: Linear Interpolation FP1 Formula
"I've taken that route since I started walking to school 6 months ago. My reference isn't finished yet, the deadline might have been extended. I forgot to ask about my predicted grades. Its probably AAB or A*AB, based on my AS grades." Did your tutor give you your predicted grades or did you get them off your subject teachers?

Re: Linear Interpolation FP1 Formula
Not a whole lot of clues in that. Odd that she has been taking that route for 6 months and you only bumped into her once.

Re: Linear Interpolation FP1 Formula
Yes, I agree. I don't want to bring up her weight either, because that is always an iffy topic with girls.

Re: Linear Interpolation FP1 Formula
Yes, they always think they are chubby even when they are not.

Re: Linear Interpolation FP1 Formula
PJ is a bit chubby though.

Re: Linear Interpolation FP1 Formula
Then walking is the perfect thing for her. Burns calories and clears the head.

Re: Linear Interpolation FP1 Formula
I agree, walking is a good way to burn calories. I might just be jumping to conclusions here though, can't assume she is doing it for weight loss. She may just prefer walking. (Though she probably wants to lose weight.)

Re: Linear Interpolation FP1 Formula
Maybe she lost her ride.

Re: Linear Interpolation FP1 Formula
For 6 months? From where she lives, she could easily take the bus...
Re: Linear Interpolation FP1 Formula
Maybe she began to hate the bus or maybe it is a question of finances.

Re: Linear Interpolation FP1 Formula
I really doubt it is because of the finances. Travel is free for under 18s and she comes from a very wealthy family. Parents have PhDs, very high-paying jobs, etc... It's possible she hates the bus. But, I still think she might be trying to lose weight. But that is not certain.

Re: Linear Interpolation FP1 Formula
Has she ever mentioned to you that she is the athletic type?

Re: Linear Interpolation FP1 Formula
No. She said she has scoliosis which might explain why she isn't.

Re: Linear Interpolation FP1 Formula
Lots of people have that and it does not stop them at all. Knew a power lifter who had it and he was huge, worked out non stop. Still, the weight loss idea is the most reasonable. See you later, I need to do a bunch of chores.

Re: Linear Interpolation FP1 Formula
Well, it is difficult to say and I cannot judge her in that respect, since I do not know what kind of pain she is in. I know that when I run for longer than 15 minutes I have to put ice packs on my knees, which is a shame because I have always wanted to compete in marathons. But I cannot, unless I doped myself up on lots and lots of painkillers. I agree, I am quite sure of the weight loss idea. I wonder how much male attention she gets. Okay, see you later.

Re: Linear Interpolation FP1 Formula
You can ask her about her condition. Most people after they get over the shock are quite willing to talk about their problems. Ever seen her with any guys?

Re: Linear Interpolation FP1 Formula
I could ask her about it, that might get her talking again. Hmm, I don't see her with guys that often. She says she is not as social as her sister (who is several years older than she is), and plus she went to an all-girls school, just like C, F and H.

Re: Linear Interpolation FP1 Formula
That seems to ruin them. Makes them shy or something.

Re: Linear Interpolation FP1 Formula
Some turned out fine. F's sister has a boyfriend (she went to a girls school too) and IY hangs around with boys a fair amount too (IY also went to an all-girls school).

Re: Linear Interpolation FP1 Formula
So, is she shy or disinterested? Any clue?
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=234232","timestamp":"2014-04-19T09:30:37Z","content_type":null,"content_length":"35632","record_id":"<urn:uuid:e2559a0b-b6d5-429e-8801-c80c5cc456c1>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
Jainism Articles and Essays - Dr. Mahavir Raj Gelra

The beginning of the study of mathematics in India is generally placed about a thousand years before the Christian era. This was the period when the Jain knowledge prophesied by the Lord Parshwanath was flourishing. Jain arithmetic finds its parallels in Vedic mathematics. During the Vedic period, sacrificial altars were constructed in various prescribed geometrical shapes to confine the sacred fire. The pits took geometrical shapes such as the triangle, quadrilateral, oval, sphere and circle. This is indicative of the rich wealth of geometric knowledge prevailing then. Jain mathematics has been used extensively in the explanations and discussions of the six categories of matter existing in the Lok.

Jain arithmetic has two basic branches -

(i) Geometry - The Jains developed basic geometry and used it to explain the shape and extent of the universe, its centre (ruchak-pradesh), heaven, hell, etc. Besides these, curved directions, Krishna-rajji (black-hole equivalents), etc., are also described using geometry. In the Sthanang and Uttaradhayyan Sutra, five basic shapes (sansthan) are described -
1. Sphere
2. Circle
3. Triangle
4. Quadrilateral
5. Rectangle
In Jain writings, references to triangular, rectangular and hexagonal 'earths' appear in the descriptions of Krishna-rajji. These shapes are described in detail in the relevant chapters of this book. However, it is necessary to mention here that the geometry of Vedic and Jain origins is quite similar, and both seem to have their genesis in Indian mathematics.

(ii) Arithmetic - In the chapters on time, speed, karma, etc., the basic mathematical quantities of the numerate, the innumerate and the infinite find widespread mention. With the help of quantitative analysis, the Jains calculated distances, times, speeds, life-spans of animate beings, etc.

Units of measurement

In the studies of the six basic mattereals of the universe in the Jain literature, classifications into the minimal, the medium and the maximal have been made while applying the mathematics of the numerate, the innumerate and the infinite. First of all, we shall understand the finest parts of the six substances from the following table, before going in for a detailed mathematical discussion, as they form the basis of the development of the mathematics.

The Jain laureates have presented the quantitative analysis of the six forms of substance (dravya) using certain basic indivisible units, as tabulated below -

S.No. | Substance | Unit
1. | Pudgal (Particle/Matter) | Parmanu (Atom)
2. | Kaal (Time) | Samay
3. | Dharma, Adharma, Aakash, Jiva | Pradesh

1. Parmanu

The parmanu is the smallest unit used to describe matter or substance. In this entire universe (Lok), pudgals are classified into only two types -
• Microphysical or massless (Sukshma)
• Macro or massive (Sthula)

These two kinds of pudgal manifest distinct physical properties, as below -

Micro (Sukshma): massless; motion unhindered by the presence of matter; speed beyond that of light is achievable.
Macro (Sthula): with mass; motion obstructed by the presence of other macro particles; speed limited to that of light.

Describing the properties of waves and particles, the famous scientist Heisenberg stated that the two do not follow the same set of physical laws. This commonality between science and the Jain Agamas is remarkable, as the two are separated by more than two thousand years on the time scale.
Micro and macro pudgals are convertible into each other, but once converted, their properties change to such an extent that the physical laws applicable to the micro state are no longer valid in the macro state, and vice versa. The Jains, therefore, insist that the micro pudgals (also known as Nishchay Parmanu) constitute an intangible world perceptible only by the intelligence, while the macro pudgals (also known as Vyavhar Parmanu) constitute a tangible world observable by our sensory organs (Indriya).

In micro form, pudgals remain weightless and travel unrestricted in space. In this state, the pudgal is understood to be in the form of pure energy. Light, temperature, gravity, magnetism, electrostatic bonds, etc., are manifestations of the micro world. As described in the Anuyog Dwara, infinite micro pudgals integrate to form a macro pudgal. This basic entity is called the Vyavhar Parmanu. The Jain literature therefore gives us the following classification of the basic building blocks -
1. Sukshma or Nishchay (Deterministic) Parmanu
2. Sthula or Vyavhar (Behavioural) Parmanu

Vyavhar Parmanu is a term specially coined in the sutras of the Anuyog Dwara. This is because human behaviour (Vyavhar) is entirely dependent on the macro pudgals, as the latter alone come within the realm of our sensory perception. As discussed earlier, macro pudgals cannot travel unrestricted. In other words, they keep interacting with other particles during motion. This property of interaction makes the macro pudgals tangible and perceptible by our sensory organs. We have no measurements available for the micro or sukshma parmanu, as it remains ethereal and intangible. On the other hand, the Jain literature has a full set of units describing the ascending order of complexity of the Sthula or Vyavhar Parmanu -

∞ Nishchay Parmanu = 1 Vyavhar Parmanu (both Nishchay and Vyavhar Parmanu are beyond the realm of sensory perception)
8 Vyavhar Parmanu = 1 Trus Parmanu (from the Trus Parmanu onwards, sthula substances are within the domain of sensory perception)
8^2 Vyavhar Parmanu = 1 Rath Parmanu
8^3 Vyavhar Parmanu = 1 Balagra
8^4 Vyavhar Parmanu = 1 Liksha
8^5 Vyavhar Parmanu = 1 Uka
8^6 Vyavhar Parmanu = 1 Yav
8^7 Vyavhar Parmanu = 1 Angul
24 Angul = 1 Hath
4 Hath = 1 Dhanushya
2000 Dhanushya = 1 Guvyut
4 Guvyut = 1 Yojan

1.1. Significance of the Numeral Eight (8)

In the above table there is a striking prominence of the numeral eight. We can notice the use of multiples of eight up to the measurement of the Angul, after which the units assume different multiples. This must have been done with a definite purpose. If we bifurcate the dimensions, we have a clear demarcation -
1. Measurements below the Angul
2. Measurements above the Angul

If we put all the pieces of the mathematical jigsaw puzzle together, a clear picture emerges. In the Jain canonical texts, the geographical extent and relative positions of heavenly bodies (cosmology) are described in units of the Angul and beyond. This indicates that the unit Angul is utilized for linear, single-dimensional measurements, whereas all the units smaller than the Angul are indicative of the volume of the particle as a whole. This is inferred from the fact that Jain mathematics considers 2 the smallest number, and the numeral 8 is derived from 2^3. Writing it in equation form -

2^3 = 2 × 2 × 2 = 8

This is clearly a three-dimensional measurement.
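A tiny sketch of the power-of-eight rungs of this ladder; this is my addition, and since the absolute size of a vyavhar parmanu is not specified in the text, only the multipliers are printed.

```python
# Sketch: the 8^k ladder from the vyavhar parmanu up to the angul.
units = ["Trus Parmanu", "Rath Parmanu", "Balagra", "Liksha",
         "Uka", "Yav", "Angul"]
for k, name in enumerate(units, start=1):
    print(f"1 {name:12s} = 8^{k} = {8 ** k:>9,} Vyavhar Parmanu")
```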
The only possible explanation for this demarcation in units could be that at infinitesimal and minute levels the individual linear dimension has no significance, as the particles retain their spherical shapes, which are better described in terms of volume than of length.

1.2. Electron, Proton and Quark

Physicists of the current generation find themselves back at square one as far as the identification of the tiniest particle is concerned. Scientists are still puzzled by the observed behaviour of infinitesimally small particles. Initially, the atom was considered the smallest building block, but soon electrons, protons and neutrons were discovered. Later on, quarks were experimentally detected during transitional phases, but they were not found to exist independently. Recently, a new particle comprising five quarks has been identified by scientists, which they believe has existed since the time of the big bang. However, the tiniest particle is so enigmatic that its discovery still looks elusive.

Jain literature can provide a helping hand to modern scientists in this subject. Mahapragya writes that the Sukshma (Nishchay) Pudgal, as described in the Jain Agams, is the indivisible, indestructible and imperishable basic constituent of substance. This leads to a very important conclusion -

If a particle (Sthula Pudgal), when broken, disappears entirely as energy (Sukshma Pudgal), it can be treated as the smallest particle of the universe.

Scientists have so far been able to find small particles which disintegrate further into smaller particles, but have not been able to isolate a particle which, when broken, disappears entirely in the form of energy. The day we find such a particle, we can surely claim to have found the basic building block of this universe. Since all our efforts to break particles down to the smallest one have not yielded results so far, we must now attempt an alternative method: if we can concentrate energy into a minuscule space, it will integrate to result in a particle which will be the smallest particle.

2. Kaal (Time)

We have, so far, endeavoured to know the smallest particle of substance. In Jain belief, time is an independent entity. What is the smallest unit of time? All the activities of pudgals are space-time related. Accordingly, the time factor in the micro (sukshma) world is the 'samay'. Infinite such 'samay' constitute one 'avalika'. The avalika is the smallest unit of time in the macro (sthula) world. We have seen earlier that -

∞ Nishchay Parmanu = 1 Vyavhar Parmanu
∞ samay = 1 avalika

The factor of infinity (∞) in both these equations suggests that there exists a quantum jump from the micro to the macro level. As in the case of the smallest particle, scientists are still searching for an activity or phenomenon which is accomplished in the smallest period of time. They have found visible light, x-rays and gamma rays, in which the wavelengths are as low as a millionth of a centimetre. This means the wave activities are taking place on the nano- and picosecond (one-millionth of a microsecond) scale. In the science magazine 'Nature', some Austrian scientists have claimed to observe the fastest happening ever, in which the event is said to take place in one-hundredth of an attosecond. The attosecond is so small a unit that to bring it on par with a second would take 30 million years. Scientists employed the motion of electrons to measure this event. The researchers excited the electrons with the help of a far-ultraviolet light beam.
According to Professor Ferenc Krausz of the Vienna University of Technology, some electrons were accelerated to such an extent that they detached from the parent atom permanently. These electrons were topographically photographed using a few-cycle laser. The photographs revealed activities taking place on the time scale of one-hundredth of an attosecond. This research has paved the way for the manufacture of highly accurate clocks. At present the most accurate clocks work at microwave frequencies; with the advent of attosecond techniques, future clocks of extremely high accuracy and stability will work at optical frequencies obtained from lasers.

Again, the ancient Jain literature seems to be quite in consonance with modern science. As mentioned earlier, the Jains stipulated one 'samay' as the time taken by the activities of the massless sukshma pudgal. We have christened this unit the 'Timon'.

We shall now ponder the units of space. Up till now we have discovered that the particles and time-scales of the micro and macro levels are different, and that the two can become mutually equivalent only through the factor of infinity. Can space, too, be described in micro and macro units? The Jain Agams have explained parmanu-samay-akash as interrelated entities, and one can be defined with the help of another.

3. Aakash - Parmanu

After having known the units of substance and time, the parmanu and the samay respectively, it is necessary to know the units of space. Is there any equivalence in the minima and maxima of pudgal, time and space respectively? It is interesting to study the Jain canonical literature, where the following examples are given concerning the units of space.

1. It has been mentioned that a theoretical unit, the 'aakash-pradesh', is the space occupied by one parmanu (dion). It must be observed here that the dion is the smallest massless derivative of the pudgal.

2. A second mention of the 'aakash-pradesh', which seems to be more practical, is the space occupied by infinite parmanus bundled as a sukshma pudgal (quadon).

These two statements sound paradoxical and are keenly examined by Mahapragya as follows:

"The two mentions of 'aakash-pradesh' actually differentiate the micro and the macro worlds. As the dion (parmanu) acquires practical utility only after infinitely many of them consolidate to form a quadon, the space occupied by a parmanu is of limited use when we discuss the sukshma or micro world only. The real space co-ordinates are constructed only by the unit-space occupied by the packet of infinite parmanus forming a quadon (skandh)."

4. Aakash-Kaal

'A dion (paramanu), if it moves at its slowest speed, travels from one space unit (aakash-pradesh) to the adjacent one only. On the other hand, if it travels at its fastest speed, it gets transferred from one end of the universe to the other (a distance of 14 Rajju, or innumerate Yojana).' This statement of the Jain Agams in effect anticipates the theory of relativity: the space-time (aakash-kaal) relationship is affected by the speed of the object. Einstein proved from his mathematical calculations that an astronaut who travels to a distant star at a speed of 70% of the speed of light will not only experience the slower passage of time, but will also experience the distance being shortened by virtue of his high speed.
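For concreteness, 70% of light speed corresponds to a Lorentz factor of about 1.4. The sketch below is standard special relativity, my addition rather than anything computed in the Jain sources:

```python
# Sketch: time dilation and length contraction at v = 0.7c.
import math

beta = 0.7                             # v / c
gamma = 1 / math.sqrt(1 - beta ** 2)   # Lorentz factor, about 1.400
print(f"gamma = {gamma:.3f}")
print(f"onboard clock runs at {1 / gamma:.3f} of the Earth rate")
print(f"trip distance contracts to {1 / gamma:.3f} of its rest length")
```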
We have the dion (paramanu) and the octon (sthula pudgal) as far as particles are concerned, and the samay (the finer unit of time) and the avalika (the practical unit of time), but no corresponding description is given of a micro space point and a macro space point. In an indirect mention, however, in the Acharanga Niryukti the difference between space units and paramanu-kaal is highlighted. It says that if the space points contained in a finger-width of space are exhausted by taking out one space point in each consecutive instant, it will take innumerate ascending and descending periods of time to evacuate that region. This comparison of a region with time manifests the nature of space. Hence the finger-width measure of space can be called macro space, and the space point can be called part of the micro space.

From the Jain standpoint it is proper to recognize that separate units are requisite for the micro and macro domains. These units cannot be applied interchangeably. An innumerate number of dions forms a practically mentionable and usable entity in the macro domain. In conclusion, we derive the following postulates for matter, time and space -

1. Almost infinite dions combine to form an atom. This implies that a finite combination of dions can form quadons (skandh), but even then they are not useful in the practical or macro world. It is the distinctiveness of Jain philosophy that only a collection of infinite dions is of practical utility. This collection, named the octon, makes up the basic building block of the macro domain. The analogy between the octons of Jain philosophy and the quanta of modern science is quite remarkable: modern science has developed its quantum mechanics on the basic principle that energy is transacted in the form of packets called quanta.

2. Like matter, the practical unit of kaal is the avalika, which is said to be constituted by the elapse of innumerate samay (the smallest unit of time in the micro domain). Here again, no finite accumulation of samay can result in an avalika. These findings caution us that any evaluation of an event is possible only after ascertaining the domain of the event - micro or macro. Until then, the results could be erroneous. A massive factor of the innumerate (near infinity) is responsible for this stark difference.

3. Similar explanations are forwarded for defining the space units. Matter and soul travel in definite pathways in space. Seven such pathway shapes are mentioned in the Bhagwati Sutra -
1. Straight line (Rizu-Ayat)
2. Right angle (Ektovakra)
3. Double right angle (Dvitovakra)
4. Single split (Ektokhaha)
5. Double split (Dvitakhaha)
6. Circular (Chakrawala)
7. Semi-circular (Ardhachakrawala)

The entire universe (lok), including the irregular discontinuities at the boundaries, can be traversed by combinations of these tracks. It must, however, be kept in mind that space changes its characteristics with the speed of travel. We can only employ space co-ordinates to assign the direction and trajectory of motion of matter particles (like dions, quadons and octons) and souls.

The highlight of these discussions of the ancient Jain literature is that the six fundamental entities (mattereals) are to be described separately in the micro and macro domains.
{"url":"http://jainology.blogspot.com/2010/12/jain-arithmetic.html","timestamp":"2014-04-21T12:23:49Z","content_type":null,"content_length":"127006","record_id":"<urn:uuid:495af036-8c53-4893-a1fa-6c07f6785a66>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
Roger Cotes

Born: 10 July 1682 in Burbage, Leicestershire, England
Died: 5 June 1716 in Cambridge, Cambridgeshire, England

Roger Cotes' mother was Grace Farmer, who came from Barwell in Leicestershire, and his father was Robert Cotes, the rector of Burbage. Roger had a brother Anthony, one year older than himself, and a sister Susanna, one year younger. He attended Leicester School, and by the age of twelve his teachers had already realised that he had an exceptional mathematical talent. His uncle, the Reverend John Smith, was keen to give Roger every chance to develop these talents, and so Roger went to live with him so that he might be personally tutored. Roger later attended the famous St Paul's School in London, but he continued to be advised by his uncle, and the two exchanged letters on mathematical topics during the time that Roger spent at school in London.

Roger matriculated at Trinity College, Cambridge, on 6 April 1699 as a pensioner, meaning that he did not have a scholarship and paid for his own keep in College. He graduated with a B.A. in 1702 and remained at Cambridge, where he was elected to a fellowship in 1705. In January 1706 he was nominated to be the first Plumian Professor of Astronomy and Experimental Philosophy. This was a remarkable achievement for Cotes who, at that time, was only 23 years of age. His exceptional abilities had been fully appreciated, however, by many at Cambridge, such as William Whiston, with whom he had quickly formed a friendship. Both Newton and Whiston recommended Cotes for the chair, as did Richard Bentley, who was master of Trinity College. There were some, however, who opposed his appointment, the most high profile of whom was Flamsteed, the astronomer royal. By the time that Cotes was formally elected as Plumian Professor on 16 October 1707 he had, in the previous year, been elected to a more prestigious fellowship as well as being awarded his M.A. Meli gives the background to the establishment of the chair in [2]:-

Cotes was the first occupant of the Cambridge chair established by Thomas Plume (1630-1704), archdeacon of Rochester, who bequeathed nearly £2000 to maintain a professor and erect an astronomical observatory. Plans for an observatory at Trinity had already been drafted by Bentley before Plume's bequest. The observatory was eventually housed over the king's or great gate at Trinity College, together with living quarters for the Plumian professor.

It is not entirely clear how successful Cotes was in his role as an observational astronomer. In the first place there are somewhat contradictory accounts of the quality of the instruments in the Cambridge observatory. Cotes designed a transit telescope to add to a collection of instruments which had been purchased or donated. For example, Newton donated a clock which still survives at Trinity College. Bentley, the master of Trinity College mentioned above, claimed that the observatory had "the best instruments in Europe", but an assistant who worked there wrote to Flamsteed saying "I saw nothing there that might deserve your notice". The truth is probably somewhere in between, since it would be natural for the master of Trinity to boast of the facilities, while the assistant, who only worked there for a short time, was probably trying to please Flamsteed.
In terms of the observations that Cotes made, perhaps the most significant was of the total eclipse of 22 April 1715. Halley, however, describes this event in a paper in the Philosophical Transactions of the Royal Society, where he says that Cotes:-

... had the misfortune to be opprest by too much company, so that though the heavens were very favourable, yet he missed both the times of the beginning of the eclipse and that of total darkness.

Cotes himself wrote a letter to Newton concerning the eclipse in which he explained that his assistant had discovered a method to determine the mid-point of the eclipse and he [3]:-

... called out to me, "Now's the middle", though I knew not at that time what he meant.

None of this speaks very highly of Cotes' dedication as an observer, but nevertheless he did note some important facts concerning this eclipse and other astronomical events. However, his mathematical abilities put him second only to Newton in his generation in England. Before going on to look at his mathematical contributions, let us note that he was elected a fellow of the Royal Society on 30 November 1711, was ordained a deacon on 30 March 1713, and was ordained a priest on 31 May 1713.

From 1709 until 1713 much of Cotes' time was taken up editing the second edition of Newton's Principia. He did not simply proof-read the work; rather, he conscientiously studied it, gently but persistently arguing points with Newton. For example, in [6] a discussion is considered which took place between Cotes and Newton in 1711 concerning the velocity of water flowing from a hole in a cylindrical vessel. During the discussion they gave various approximations to the fourth root of 2, which is approximately 1.189207115. Newton gave the following rational approximations (we add decimal values to show their accuracy):

6/5 = 1.200000000
13/11 = 1.181818182
25/21 = 1.190476190

while Cotes gave

44/37 = 1.189189189.

At the beginning of the correspondence between the two men the tone is very friendly. However, toward the end of the task there are signs that they are cooling towards one another (see [3] for details of these letters). In particular, although Newton thanked Cotes in the first draft of a preface he wrote to this edition, he deleted these thanks for the final publication. Cotes himself wrote an interesting preface of his own in which he explained how the study of natural philosophy had developed. First, Cotes explained, came Aristotle's method, which involved naming hidden properties. Then, according to Cotes, came the idea that all matter was homogeneous. He saw these methods as improvements, yet still retaining certain of the weaknesses of Aristotle's approach. Although he does not specifically name Descartes and Leibniz here, it is clearly an attack on their ideas. Finally, says Cotes, comes the method based on first conducting experiments without preconceived ideas, and then deducing how the world works from the results. These were the methods of Newton, which led to establishing how the basic forces of nature operate.

Cotes only published one paper in his lifetime, namely Logometria, published in the Philosophical Transactions of the Royal Society for March 1714, which he dedicated to Halley. It contains (in the words that Cotes used himself in a letter to Newton [3]):-

... a new sort of construction in geometry which appear to me very easy, simple and general.
In it Cotes gave a method of finding rational approximations as convergents of continued fractions, and the author of [6] suggests that this explains how he found the approximation 44/37 to the fourth root of 2 mentioned above. Cotes was particularly pleased with his rectification of the logarithmic curve, as he made clear in a letter to his friend William Jones in 1712. In particular, his work on logarithms led him to study the curve r = a/θ, which he named the reciprocal spiral. Cotes extended the work of Varignon when he rectified the Archimedean spiral and the parabola of Apollonius, a problem first proposed by Fermat, showing that both have the same integral. His work here was based on the formula ln(cos θ + i sin θ) = iθ.

Jones urged Cotes to publish his work in the Philosophical Transactions of the Royal Society, but Cotes resisted, wishing to support Cambridge and publish with Cambridge University Press. His early death prevented publication in his lifetime.

Cotes discovered an important theorem on the nth roots of unity, gave the continued fraction expansion of e, invented the radian measure of angles, anticipated the method of least squares, published graphs of tangents and secants, and discovered a method of integrating rational fractions with binomial denominators. His substantial advances in the theory of logarithms, the integral calculus, and numerical methods (particularly interpolation and the construction of tables of integrals for eighteen classes of algebraic functions) led Newton to say:-

... if he had lived we might have known something.

According to Edleston [3], Cotes died of a:-

... fever attended with a violent diarrhoea and constant delirium.

He was buried four days later in the chapel of Trinity College. Some of the work which Cotes had hoped to publish with Cambridge University Press was published eventually by Thomas Simpson in The Doctrine and Application of Fluxions (2 vols, London, 1750). Robert Smith edited Cotes' major posthumous work, the Harmonia mensurarum, which appeared in 1722. It is fitting at this point to explain who Robert Smith was and how he interacted with Cotes. He was, in fact, the son of Cotes' uncle, the Reverend John Smith, and had become friends with Cotes as a boy, when Cotes lived in his father's house. Later, Robert Smith was Cotes' assistant while he was Plumian Professor, and eventually succeeded him in the chair. It was Smith who, many years after Cotes' death, when he was master of Trinity College, had a bust of Cotes erected; the bust is now in the Wren Library.

Let us return to Cotes' posthumous work, the Harmonia mensurarum. As well as reprinting Logometria, it contains three further mathematical works:
1. Aestimatio errorum in mixta mathesis.
2. De methodo differentiali Newtoniana.
3. Canonotechnia.
The first concerns plane and spherical triangles and was much used by astronomers; it contains an early study of the theory of errors. The second develops Newton's methods of interpolation and was particularly useful in studying the orbits of comets. The third studies numerical integration and also includes further contributions to interpolation. In 1738, 22 years after Cotes died, Smith published the lectures which Cotes had given on experimental physics, Hydrostatical and pneumatical lectures.
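As a small check of the continued-fraction remark above, the following sketch (my reconstruction, not Cotes' own procedure) generates the convergents of the fourth root of 2. It reproduces 6/5, 25/21 and Cotes' 44/37; Newton's 13/11 turns out to be an intermediate (semiconvergent) fraction rather than a convergent proper.

```python
# Sketch: continued-fraction convergents of 2**0.25 (about 1.189207115).
from fractions import Fraction

x = 2 ** 0.25
quotients, t = [], x
for _ in range(5):                 # enough partial quotients to reach 44/37
    q = int(t)
    quotients.append(q)
    t = 1.0 / (t - q)

h_prev, h = 1, quotients[0]        # convergent numerators
k_prev, k = 0, 1                   # convergent denominators
for q in quotients[1:]:
    h_prev, h = h, q * h + h_prev
    k_prev, k = k, q * k + k_prev
    print(Fraction(h, k), "=", h / k)
# prints 6/5, 19/16, 25/21, 44/37
```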
Article by: J J O'Connor and E F Robertson, School of Mathematics and Statistics, University of St Andrews, Scotland (February 2005).
Pacific, WA Algebra 2 Tutor

Find a Pacific, WA Algebra 2 Tutor

- ...Having spent 6+ years in undergraduate studies and now preparing to advance to medical school, I have seen many different approaches to studying, and together we can find one that benefits your child the most. I feel communication is the strongest skill required for good tutoring. I have been h...
  25 Subjects: including algebra 2, chemistry, physics, geometry

- ...I specialize in giving struggling students the skills to be successful in the classroom. By far, my favorite subjects are math and science, but I love to show the tricks for test taking for the ASVAB, TEAS, MCAT, Compass, SAT and ACT exams. I specialize in identifying the roadblocks to your success and getting you to your goal.
  46 Subjects: including algebra 2, reading, English, algebra 1

- ...I am familiar with wireless configuration and security, home networking setup, and domain experience with Active Directory. I have 10+ years as a Computer Support Technician, including installing and configuring all versions of Outlook for client users. I am currently employed as a Computer Support ...
  30 Subjects: including algebra 2, chemistry, English, physics

- ...I discovered being a DJ with a solid math background came in handy! It gave me a knack for connecting with kids on a personal level because we could talk about popular music and videos. It also enabled me to take math problems, reveal the big picture, and break them down into easy-to-understand pieces so kids could understand what's up!
  13 Subjects: including algebra 2, reading, Spanish, algebra 1

- ...I have tutored high school level Algebra I for both public and private school courses. I also volunteer my time in the Seattle area assisting at-risk students with their mathematics homework. I have worked as a mathematics teacher in Chicago and I thoroughly enjoy teaching the subject.
  27 Subjects: including algebra 2, chemistry, reading, writing
Topic: Question about quadratic congruence
Replies: 7   Last Post: Mar 15, 2011 2:36 AM

Question about quadratic congruence
Posted: Mar 11, 2011 1:39 AM

Exercise 6 on page 26 of Birkhoff and MacLane's "A Survey of Modern Algebra" (2nd edn.) asks to show that x^2 cannot be congruent to 35 mod 100. I think I can show this to be true, but I wonder if my way is the best one. I am wondering if anyone has a nicer/cleaner way to solve this.

I start off by supposing there is an integer x such that x^2 is congruent to 35 mod 100, which would imply that there exists an integer n such that x^2 = 100*n + 35. Since 100*n + 35 = 5*(20*n + 7), this implies that 5 | x^2, and since 5 is prime this implies further that 5 | x; hence x can be written as x = 5y. Substituting this form for x back into the previous equation gives 25*y^2 = 100*n + 35, i.e. 5*y^2 = 20*n + 7, which implies 5*(y^2 - 4*n) = 7, which implies 5 | 7, which we know is false.

What bothers me I guess is that my approach seems to be of a somewhat "ad hoc" nature. What if the problem had 63 or 64 or 32 instead of 35? Would I have to go through the same type of argument, and would this type of argument even always work? I looked up solving quadratic congruences on the net and Wolfram's site talks about something called "excludents", which seems to be something like what I am doing, but it's not totally clear to me.

Thanks for any insights/help you can provide.
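A quick computational check (a minimal sketch of mine, not part of the original thread) answers the poster's follow-up directly: enumerating the squares mod 100 shows at once which residues, such as 35, 63, 64 or 32, can occur.

```python
# Enumerate all quadratic residues modulo 100 by brute force.
residues = {x * x % 100 for x in range(100)}

for target in (35, 63, 64, 32):
    print(target, "is a square mod 100:", target in residues)
# 35, 63 and 32 are not squares mod 100; 64 is (e.g. 8^2 = 64).
```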
Summary: Constructive bounds for a Ramsey-type problem
Noga Alon, Michael Krivelevich

For all fixed integers $r, s$ satisfying $2 \leq r < s$ there exists some $\epsilon = \epsilon(r, s) > 0$ for which we construct explicitly an infinite family of graphs $H_{r,s,n}$, where $H_{r,s,n}$ has $n$ vertices, contains no clique on $s$ vertices, and every subset of at least $n^{1-\epsilon}$ of its vertices contains a clique of size $r$. The constructions are based on spectral and geometric techniques, some properties of finite geometries and certain isoperimetric inequalities.

1 Introduction

The Ramsey number $R(s, t)$ is the smallest integer $n$ such that every graph on $n$ vertices contains either a clique $K_s$ of size $s$ or an independent set of size $t$. The problem of determining or estimating the function $R(s, t)$ received a considerable amount of attention, see, e.g., [14] and some of its references. A more general function was first considered (for a special case) by Erdős and Gallai in [11]. Suppose $2 \leq r < s \leq n$ are integers, and let $G$ be a $K_s$-free graph on $n$ vertices. Let $f_r(G)$ denote the maximum cardinality of a subset of vertices of $G$ that contains no copy of $K_r$, and define, following [12], [8]: $f_{r,s}(n) = \min f_r(G)$, where the minimum is taken over all $K_s$-free graphs $G$ on $n$ vertices.
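To make the definition of $f_{r,s}(n)$ concrete, here is a brute-force Python sketch of my own (not from the paper, and feasible only for very small $n$) that enumerates all $K_s$-free graphs on $n$ vertices and minimizes $f_r(G)$ over them.

```python
from itertools import combinations

def has_clique(adj, vertices, k):
    """True if some k-subset of `vertices` is a clique in adj."""
    return any(all(adj[u][v] for u, v in combinations(c, 2))
               for c in combinations(vertices, k))

def f_r(adj, n, r):
    """f_r(G): size of the largest vertex subset containing no K_r."""
    for size in range(n, 0, -1):
        if any(not has_clique(adj, sub, r)
               for sub in combinations(range(n), size)):
            return size
    return 0

def f_rs(n, r, s):
    """f_{r,s}(n) = min f_r(G) over all K_s-free graphs G on n vertices."""
    edges = list(combinations(range(n), 2))
    best = n
    for mask in range(1 << len(edges)):          # all graphs on n vertices
        adj = [[False] * n for _ in range(n)]
        for i, (u, v) in enumerate(edges):
            if mask >> i & 1:
                adj[u][v] = adj[v][u] = True
        if has_clique(adj, range(n), s):
            continue                              # skip graphs with a K_s
        best = min(best, f_r(adj, n, r))
    return best

print(f_rs(4, 2, 3))  # min over triangle-free graphs on 4 vertices of the
                      # largest independent set; prints 2 (attained by C_4)
```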
Can you Afford to Hire a New Office Employee?

Originally published: 12.01.12 by Ruth King

Consider how much cost savings and new revenue will result.

Last month I wrote about how to determine whether you can afford to hire a field employee. This month I'll give you the calculations to determine whether you can afford to hire a new office employee. I define an office employee as one who works in the warehouse or the office. This person supports the field personnel but does not generate revenues for your company.

Here's the calculation:

1. Determine the hourly wage that you will pay the office employee.
2. Convert the hourly wage into a total yearly wage.
3. Add benefits (FICA, Medicare, etc.).
4. Determine the gross margin of the department where the office employee will work. If the company is not departmentalized, use the company gross margin.
5. Enter the data into the formulas.

To determine the break-even sales that must be generated to support this employee:

Sales required = Total Cost / GM

To determine profitable sales that must be generated to support this employee:

Sales required = Total Cost / (GM - P%)

GM = Gross margin of the department this employee will work in; if your company does not departmentalize your P&L, it is the gross margin of the company.
P% = the desired profit of the company

Note: Last month the calculation used 1 - GM as the divisor. This is because you know direct cost. This month, you have overhead cost, so you divide by the GM.

Let's assume you are considering hiring a dispatcher:

1. Her hourly wage will be $15 per hour.
2. This translates to $31,200 per year (assuming no overtime).
3. Assume her benefits are 30% of her salary, equaling $9,360.
4. Gross margin of the company is 35%.
5. Desired company profit is 10%.

Break-even sales required = $40,560 / 0.35 = $115,885.71
Profitable sales required = $40,560 / (0.35 - 0.10) = $162,240.00

The company does not departmentalize its profit-and-loss statement, so the calculation relies on the company gross margin. The company must generate a minimum of $115,885.71 in additional sales just to break even on the new dispatcher's salary. If it wants to maintain its 10% desired profit, then the company must generate an additional $162,240 just to afford the dispatcher.

A different way of looking at the same dollar values needed: Can a new dispatcher save the company at least $115,885.71, or more realistically, $162,240? If the technicians and other field personnel are $162,240 more productive because of the dispatcher, then you would hire the dispatcher.

An easy way to determine the answer: Assume that you have three technicians. Each one can handle an additional call per day, or has less overtime per day, because the dispatcher routes the technicians efficiently. Marketing activities can generate the additional call per technician per day. Your average service ticket is $250. This is an additional $750 per day (averaged over the year) or $195,000 per year. Since $195,000 is higher than the $162,240 needed, you would hire the dispatcher.

You can also look at the calculation another way: Assume that the dispatcher can keep the technicians out of the office. They are at their first call at 8 a.m. rather than wasting 30 minutes each morning. This is non-billable time that they put on their time cards that you have to pay for. Assume that each technician earns $30 per hour, including benefits. Half an hour at $30 per hour is $15 per technician, so with three technicians this is $45 you save each day. The $45 per day translates into $11,700 in yearly saved cost.
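Before adding in the extra revenue, note that the two required-sales formulas above collapse into a one-line function. Here is a minimal Python sketch of my own (not from the article; the figures reproduce the dispatcher example):

```python
def required_sales(total_cost, gross_margin, desired_profit=0.0):
    """Sales needed to cover an overhead cost at a given gross margin.

    With desired_profit = 0 this is the break-even level; a positive
    desired_profit (as a fraction of sales) raises the target.
    """
    return total_cost / (gross_margin - desired_profit)

wage = 15 * 40 * 52            # $15/hour -> $31,200/year, no overtime
total_cost = wage * 1.30       # plus 30% benefits = $40,560

print(round(required_sales(total_cost, 0.35), 2))        # 115885.71
print(round(required_sales(total_cost, 0.35, 0.10), 2))  # 162240.00
```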
Then you must add the additional revenues that each technician can generate in 30 minutes. Assume that the average service ticket is $250, which includes on average one hour of labor. Then each service technician would be generating an additional $125 per day or $32,500 per year. Three technicians would generate an additional $97,500 per year. Adding $97,500 in additional revenues plus $11,700 in savings equals $109,200, which is close to break even.

If you think that the dispatcher can't keep the technicians out of the office or that the company can't generate the revenues required so that each service technician can do an additional call per day, then it is best to wait until business picks up enough so that she can be productive for you.

This quick calculation helps you determine whether you really can afford to hire a non-revenue-producing office employee.

Ruth King has over 25 years of experience in the hvacr industry and has worked with contractors, distributors, and manufacturers to help grow their companies and to become more profitable. She is president of HVAC Channel TV and holds a Class ll (unrestricted) contractors license in Georgia. Ruth has authored two books: The Ugly Truth about Small Business and The Ugly Truth about Managing People. Contact Ruth at ruthking@hvacchannel.tv or 770-729-0258.
Dielectric Function

Quote:
"so how should this be done for the FD distribution? Should it be $E(p + q) \sim p^2 + q^2$? Also, $\hbar\omega = \hbar c q$ would only hold for a free solution of the Maxwell equations in the medium. But how do we know the relationship between $q$ and $\omega$ if we don't yet know the refractive index?"

First note that p and q are vectors. Also $E(p+q) = (p+q)^2/2m$. In general there is no relation between $\omega$ and $q$! Such a relation only exists for free solutions of the Maxwell equations. These can be derived from the Maxwell equations: $\epsilon(q,\omega)\,\omega^2 E(q,\omega) = q^2 E(q,\omega)$ (in units with c = 1). Even for light in a non-metallic medium the refractive index is not only a function of $\omega$ but also a function of $q$, i.e. there may be light waves with the same frequency but different values of $q$. E.g. a left and a right circularly polarized wave will have a slightly different refractive index in a chiral medium and will propagate with different wavevectors. In most insulators this effect (spatial dispersion) is very weak, but in a metal, where electrons are free to travel over larger distances, it can be quite pronounced.
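For a concrete illustration of how a relation between $q$ and $\omega$ emerges only for free solutions, take the textbook collisionless Drude form $\epsilon(\omega) = 1 - \omega_p^2/\omega^2$ (a standard example, not something discussed in this thread). Inserting it into the free-wave condition above (with $c$ restored, $\epsilon(\omega)\,\omega^2 = c^2 q^2$) gives the dispersion relation $\omega^2 = \omega_p^2 + c^2 q^2$: each wavevector then fixes a frequency for freely propagating light, while below the plasma frequency $\omega_p$ there is no real $q$ and the wave cannot propagate.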
BISC413 Lab 2

BISC413 Lab 2, August 30: Cat data analysis

On Tuesday you collected phenotype data on about 100 cats from one location. Today you'll use the phenotype data to estimate allele frequencies using the Hardy-Weinberg relationship, test the fit of observed genotype frequencies to those expected from Hardy-Weinberg, and see whether cats from North American cities are similar to cats from the home cities of the first European settlers.

Estimate allele frequencies

For the longhair and white loci, you only have two phenotypes, one caused by a recessive allele and one caused by a dominant allele. You will therefore need to estimate the allele frequencies using the Hardy-Weinberg relationship, which says that the frequency of a recessive homozygote is equal to the frequency of the recessive allele squared. Therefore, to estimate the frequency of the recessive l allele at the longhair locus, take the square root of the proportion of longhaired cats. For example, if 9% of the cats in your sample have long hair, you'd take the square root of 0.09, which equals 0.30. You'd estimate that the allele frequencies are 0.30 l and 0.70 S. Of course, you don't know this for sure, as you don't know how many of the short-haired cats are Sl and how many are SS.

Do this calculation for your data for longhair and for white. Remember that the W allele, which causes white hair, is dominant. This is a good reminder that "dominant" does not refer to how common an allele is; it tells you what phenotype the heterozygous genotype has.

For the Orange locus, you can distinguish all the genotypes, so you don't need to estimate the allele frequency in your sample; you can count it directly. Males are either OY, an orange or cream colored cat, or oY, which is black, brown or gray. Count the number of O alleles and the number of o alleles in the males. Females are either OO (orange or cream), oo (black, brown or gray), or Oo (calico or tortoiseshell). Count the number of O and o alleles in females, too. Add the total number of each allele in males and females to get your estimate of the allele frequencies.

For the spotting locus, use two different methods of estimating the allele frequency. First, lump together those with less than and greater than 50% white into one category. Treat the absence of white as a recessive phenotype caused by the s allele, and estimate the allele frequencies the same way you did for longhair and white. Next, count those cats with more than 50% white as SS, those with some white but less than 50% as Ss, and those with no white as ss. Count the number of alleles the same way you did for Orange in females.

Write your four allele frequencies and the sample sizes on the board. The allele frequencies should have three decimal places.

Test fit to Hardy-Weinberg proportions

For Orange in females and spotting in all cats, use the Hardy-Weinberg relationship to predict the proportion of each homozygous genotype and the heterozygous genotype. Read about the chi-square test of goodness-of-fit, download the spreadsheet from that page, and test how well your observed data fit the expected. Note that your "degrees of freedom" is based on an intrinsic hypothesis, so enter "1" under "intrinsic." Record the P-value.

Test relationship between climate and allele frequencies

Go to Weather Underground and enter the name of your city. Scroll down to the "History and Almanac" section and change the date to Aug. 15. Record the normal "Max Temp" and "Min Temp." Change the date to Jan. 15 and record the maximum and minimum temperatures.
Convert the temperatures to degrees Celsius, if necessary. Record the information on the chalkboard.

Genetic distance

Calculate the genetic distance between your city and its paired city on the other side of the Atlantic. The genetic distance you'll use is just the average of the absolute values of the differences in allele frequency. Also calculate the difference between your city and its nearest neighboring city on the same side of the Atlantic.

Lab report

On Tuesday, Sept. 4, you must turn in a lab report on this week's work. It must be typed. The following information should be identical between your and your partner's reports, but it must be in both:
• A table containing the phenotype frequencies you counted.
• A table containing the allele frequencies you counted.
• A table containing estimated genotype frequencies for Orange and spotting, along with the observed genotype frequencies and the P-value for the difference between expected and observed.
• The genetic distances you calculated between your city and its trans-Atlantic sister city, and the distance between your city and its neighbor on the same continent. If you didn't finish this in class, e-mail me and I'll send you the data you need for this.

Your report should also contain the following, which should NOT be identical with your partner's report:
• A description of any methods you used when collecting the data that went beyond what was in the instructions for the lab. This will primarily consist of how you dealt with questions like bad photos, kittens, siblings, etc.
• A description of your ideas for better ways to collect accurate data. This can include both better ways to use pet shelter photos and other ways you can think of to collect cat data from different places.
• A description of your most interesting result. "Interesting" in this case could mean the goodness-of-fit test with the lowest P-value, the genetic distances, or something else you found interesting. Pick something that is interesting, even if none of your P-values are significant.
• Ideas for further research on your most interesting result. Suggest every possible explanation for the interesting result that you can think of, and describe further experiments someone could do to test those explanations.

There is no minimum or maximum length for the lab report, but you must include everything listed above. I will compile all of the genetic distances and present the results in Tuesday's lab. I'll also show the correlations between allele frequencies and temperature.
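For students who prefer to script the arithmetic, here is a minimal Python sketch of the two calculations described above (allele-frequency estimation from a recessive phenotype, and the chi-square goodness-of-fit statistic); it is illustrative only and not part of the lab handout, and the counts in it are made up.

```python
from math import sqrt

def recessive_allele_freq(n_recessive, n_total):
    """Hardy-Weinberg estimate: freq(recessive allele) = sqrt(q^2)."""
    return sqrt(n_recessive / n_total)

def chi_square(observed, allele_freq):
    """Fit of observed [AA, Aa, aa] counts to Hardy-Weinberg proportions."""
    n = sum(observed)
    p, q = 1 - allele_freq, allele_freq
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

q = recessive_allele_freq(9, 100)      # e.g. 9 longhaired cats out of 100
print(q)                               # 0.3
print(chi_square([52, 38, 10], 0.3))   # compare to chi-square with 1 d.f.
```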
PhET Simulation: Motion in 2D
published by the Physics Education Technology Project
Available Languages: English, Spanish

This is an interactive simulation created to help beginners differentiate velocity and acceleration vectors. The user can move a ball with the mouse or let the simulation move the ball in four modes of motion (two types of linear, simple harmonic, and circular). Two vectors are displayed -- one green and one blue. As the motion of the ball changes, the vectors also change. Which color represents velocity and which acceleration?

Editor's Note: This simulation was designed with improvements based on research of student interaction with the PhET resource "Ladybug Revolution". The authors added two new features for the beginning learner: linear acceleration and harmonic motion. This item is part of a larger and growing collection of resources developed by the Physics Education Technology project (PhET), each designed to implement principles of physics education research.

Please note that this resource requires the Java Applet Plug-in.

Subjects: Classical Mechanics - Motion in Two Dimensions (2D Acceleration, 2D Velocity)
Levels: High School, Lower Undergraduate, Middle School, Informal Education
Resource Types: Instructional Material (Activity, Interactive Simulation)
Intended Users: Learners, Educators
Formats: application/java, text/html
Access Rights: Free access
© 2007 University of Colorado, Physics Education Technology. Additional information is available.

Keywords: acceleration, circular motion, motion, simple harmonic motion, two-dimensional motion, vectors, velocity

Record Cloner: Metadata instance created November 15, 2007 by Alea Smith
Record Updated: January 10, 2014 by Caroline Hall
Last Update when Cataloged: November 15, 2007

Other Collections: AAAS Benchmark Alignments (2008 Version)

4. The Physical Setting, 4F. Motion
- 3-5: 4F/E1a. Changes in speed or direction of motion are caused by forces.
- 6-8: 4F/M3a. An unbalanced force acting on an object changes its speed or direction of motion, or both.

11. Common Themes, 11B. Models
- 6-8: 11B/M4. Simulations are often useful in modeling events and processes.

Next Generation Science Standards

Crosscutting Concepts (K-12): Patterns (K-12)
- Graphs, charts, and images can be used to identify patterns in data. (6-8)

Science and Engineering Practices (K-12)

Analyzing and Interpreting Data (K-12)
- Analyzing data in 9-12 builds on K-8 and progresses to introducing more detailed statistical analysis, the comparison of data sets for consistency, and the use of models to generate and analyze data. (9-12)
  - Analyze data using computational models in order to make valid and reliable scientific claims. (9-12)

Developing and Using Models (K-12)
- Modeling in 9-12 builds on K-8 and progresses to using, synthesizing, and developing models to predict and show relationships among variables between systems and their components in the natural and designed worlds. (9-12)
  - Use a model to provide mechanistic accounts of phenomena. (9-12)

Using Mathematics and Computational Thinking (5-12)
- Mathematical and computational thinking at the 9-12 level builds on K-8 and progresses to using algebraic thinking and analysis, a range of linear and nonlinear functions including trigonometric functions, exponentials and logarithms, and computational tools for statistical analysis to analyze, represent, and model data. Simple computational simulations are created and used based on mathematical models of basic assumptions. (9-12)
  - Use mathematical representations of phenomena to describe explanations. (9-12)
Common Core State Standards for Mathematics Alignments

High School - Number and Quantity (9-12), Vector and Matrix Quantities (9-12)
- N-VM.1 (+) Recognize vector quantities as having both magnitude and direction. Represent vector quantities by directed line segments, and use appropriate symbols for vectors and their magnitudes (e.g., v, |v|, ||v||, v).

Related resource: this simulation is required by the PhET Teacher Activity "Vectors Simulations Lab", an editor-recommended virtual lab authored by a high school teacher specifically for use with the Motion in 2D simulation (relation by Caroline Hall).
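As a rough companion to the simulation's central idea, telling the velocity vector apart from the acceleration vector, here is a small Python sketch (my own illustration, unrelated to the PhET code) for the circular-motion mode: velocity is tangent to the path while acceleration points toward the center.

```python
import math

def circular_motion(radius, omega, t):
    """Position, velocity and acceleration for uniform circular motion."""
    x = (radius * math.cos(omega * t), radius * math.sin(omega * t))
    v = (-radius * omega * math.sin(omega * t),
         radius * omega * math.cos(omega * t))        # tangent to the path
    a = (-radius * omega ** 2 * math.cos(omega * t),
         -radius * omega ** 2 * math.sin(omega * t))  # points at the center
    return x, v, a

x, v, a = circular_motion(radius=1.0, omega=2.0, t=0.3)
print(v[0] * a[0] + v[1] * a[1])   # ~0: here v is perpendicular to a
```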
H. Curry (1958) has observed that there is a close correspondence between axioms of positive implicational propositional logic, on the one hand, and basic combinators on the other hand. For example, the combinator K corresponds to the axiom scheme $A \supset (B \supset A)$. The following notion of construction, for positive implicational propositional logic, was motivated by Curry's observation. More precisely, Curry's observation provided half the motivation. The other half was provided by W. Tait's discovery of the close correspondence between cut elimination and reduction of $\lambda$-terms.
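A minimal Python sketch (an illustration of Curry's observation of my own, not text from the paper): the combinators K and S, written as functions, carry exactly the types of the two standard axiom schemes of positive implicational logic.

```python
# K : A -> (B -> A)  mirrors the axiom scheme  A ⊃ (B ⊃ A)
def K(a):
    return lambda b: a

# S : (A -> (B -> C)) -> ((A -> B) -> (A -> C))  mirrors
# (A ⊃ (B ⊃ C)) ⊃ ((A ⊃ B) ⊃ (A ⊃ C))
def S(f):
    return lambda g: lambda a: f(a)(g(a))

# S K K behaves as the identity; evaluating it performs a small
# reduction of the kind the correspondence with cut elimination tracks.
identity = S(K)(K)
print(identity(42))   # 42
```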
tending to infinity for standard deviation

"Aidy" wrote in message <ivv05r$1of$1@newscl01ah.mathworks.com>...
> Hi matt,
> basically, I have a function that defines the vector x. Now this function produces a vector x which is valid from [0, 0] to [Inf_x, Inf_y]. I have realized that from my function, I can get different vectors as my answer. Now as the x_coord and y_coord of my 2D vector increase, I also see a trend where generally the standard deviation will also increase.

But what do x_coord and y_coord signify? Are they the statistical means of x(1) and x(2)? The standard deviation of a series of random vectors can behave independently of its mean. There is no way you can draw conclusions about the behavior of the standard deviations from the behavior of their means. For example, in this series

x(t) = [3,2]*t + randn(1,2)*t

the mean of x(t) is [3*t, 2*t] and therefore it goes to infinity as t ---> infinity. The standard deviation is [t, t], so it also goes to infinity as t ---> infinity. Conversely, this time series

x(t) = [3,2]*t + randn(1,2)/t

has mean [3*t, 2*t] going to infinity, but the standard deviation [1/t, 1/t] goes to 0.
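A quick empirical check of the answer's two examples, sketched in Python/NumPy rather than MATLAB (my translation, with `standard_normal` standing in for `randn`):

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_std(t, scale, n=100_000):
    """Sample std of [3,2]*t + randn(1,2)*scale over n draws."""
    samples = np.array([3.0, 2.0]) * t + rng.standard_normal((n, 2)) * scale
    return samples.std(axis=0)

for t in (1, 10, 100):
    print(t, empirical_std(t, scale=t), empirical_std(t, scale=1 / t))
# the first std grows like [t, t]; the second shrinks like [1/t, 1/t],
# even though the mean [3t, 2t] diverges in both cases
```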
Bender, Ed - Department of Mathematics, University of California at San Diego

• Log-Concavity and Related Properties of the Cycle Index Polynomials
• 0-1 Laws for Maps (Edward A. Bender, Kevin J. Compton, L. Bruce Richmond)
• Intersections of Randomly Embedded Sparse Graphs are Poisson (Edward A. Bender)
• Almost All Rooted Maps Have Large Representativity
• Asymptotics of combinatorial structures with ... (Journal of Combinatorial Theory, Series A 107 (2004) 117-125)
• The map asymptotics constant t_g (Edward A. Bender)
• The Fraction of Subspaces of GF(q)^n with a Specified Number of Minimal Weight Vectors is Asymptotically Poisson
• Asymptotics of Permutations with Nearly Periodic Patterns of Rises and Falls
• The number of degree restricted rooted maps on the sphere
• Locally Restricted Compositions I. Restricted Adjacent Differences
• Asymptotic enumeration of labelled graphs by genus (Edward A. Bender)
• Asymptotics of Some Convolutional Recurrences (Edward A. Bender)
• Locally Restricted Compositions II. General Restrictions and Infinite Matrices
• Coefficients of Functional Compositions Often Grow Smoothly
• A Discontinuity in the Distribution of Fixed Point Sums
• Asymptotics for the Probability of Connectedness and the Distribution of Number of Components
• Multivariate Asymptotics for Products of Large Powers
• A Multivariate Lagrange Inversion Formula for Asymptotic Calculations
• Periodic Sorting Using Minimum Delay, Recursively Constructed Merging Networks
• The asymptotic number of labeled graphs with n vertices, q edges, and no isolated vertices
• Admissible Functions and Asymptotics for Labelled Structures by Number of Components
• Asymptotic Properties of Labeled Connected Graphs
• Irreducible compositions (Séminaire Lotharingien de Combinatoire 52 (2004), Article B50h)
• Submap Density and Asymmetry Results for Two Parameter Map Families
> > resources 4teaching

Here we will suggest sites that include useful information for mathematics teaching all over the world.

Suggested site: Klein Project Blog (Connecting mathematical worlds)

The Klein Project aims to build a community for learning about the connections between school mathematics and contemporary research in the mathematical sciences. One vehicle for this is a "Klein vignette", a short piece of writing about a particular piece of mathematics. Vignettes are intended to give teachers a sense of connectedness between the mathematics of the teachers' world and contemporary research and applications in the mathematical sciences. Thus it will start with something with which the teacher is familiar and move towards a greater understanding of the subject through a piece of interesting mathematics. It will ultimately illustrate a key principle of mathematics.

Suggested site: Teacher package: Mathematics in sport

This teacher package brings together all our articles that have to do with sport, from cricket to football and from the sport itself to sporting architecture and infrastructure. We have grouped our articles in the following categories: the physics of sport; sporting strategy; architecture and infrastructure; predicting results and sporting stats; scoring and ranking; betting and odds.

Suggested site: Mike de Villiers/Dynamic Math Learning (Homepage, Dynamic Geometry Sketches)

Website which deals mainly with mathematics and mathematics education, aimed at secondary and primary school mathematics teaching and learning, although some aspects are relevant to undergraduate mathematics. Michael de Villiers is presently professor at the University of KwaZulu-Natal, South Africa. The website is organized as follows: brief CV, publications for sale, Geometer's Sketchpad & other Key Curriculum Press materials, downloadable articles and materials, some (non-mathematical) poetry and prose, links to math sites, and interactive dynamic geometry (Sketchpad & Cabri) worksheets.

Keywords: dynamic geometry, Sketchpad, transformation geometry, Euclidean geometry, proof, math applications, math modeling, math investigations, Boolean Algebra, Logic, math software, videos, books, posters, manipulatives, puzzles, math problem solving, math competitions, Fermat point, Theorems of Viviani, Napoleon, Miquel, Simson, Neuberg, Van Aubel, Fathom, statistics, math duality, math education research, Van Hiele, etc.

Have a math question? Suggested site: ASK DR MATH http://mathforum.org/dr.math/

You can go to this page and ask your question. Maybe Dr Math is in and will be able to answer your question... Ask Dr. Math is a question and answer service for math students and their teachers. In the meantime you can see the list of the most frequently asked questions in the Dr Math FAQ, or read some selected answers to common questions.

Suggested site: mathtube.org

Watch the talks "Changing the Culture of Homework" and "As Geometry is Lost - What Connections are Lost? What Reasoning is Lost? What Students are Lost? Does it Matter?".
These materials represent a uniquely important resource and include contributions from some of the worlds most distinguished contemporary mathematicians, for example the PIMS distinguished lecturer series. Suggested site: Babylonian Maths - 4000 years ago, children in school were learning maths just as they do now. But what maths did they learn and how did they learn it? In this resource pack, Dr Eleanor Robson, shows us how we can find out about an ancient civilisation through the objects they left behind. She demonstrates clay tablets on which Babylonian children worked at their multiplication tables - in base 60! Through the video clips and follow-up resources, we can find out how they did arithmetic and how they learnt their tables. Eleanor also demonstrates the difference between how we generally draw a triangle now and then, and how the Babylonian style of writing - cuneiform - relates to their triangles. Disclaimer: The listed websites may be of interest to teachers -- ICMI bears no responsibility for their content.
North Providence Prealgebra Tutor

Find a North Providence Prealgebra Tutor

- ...While grammar is not one of the more exciting subjects to learn, it is KEY to achieving success in EVERY field of one's choosing. Because of that, I take it very seriously and strive to help my students become excellent grammarians. I have tutored students in their pre-algebra courses.
  19 Subjects: including prealgebra, reading, English, grammar

- ...At the elementary level, I worked with students on basic phonics instruction, reading and math. I love the challenge of figuring out the best way a student can learn, mostly by their input! My desire is to empower them and help them take responsibility for their own education.
  30 Subjects: including prealgebra, reading, English, geometry

- ...I received both my bachelor's and master's degrees from Rhode Island College. I have taught students with all levels of ability. I have taught grades 6 through 12; this included general math, pre-algebra, algebra, geometry, trigonometry and analysis.
  10 Subjects: including prealgebra, geometry, algebra 2, algebra 1

- ...I earned another Master's degree, this time in Clinical Research Administration. I am still currently working for a biotech company and enjoying my career. However, I do greatly miss working with children and young adults, and as such, I've gotten involved in various paid and volunteer opportunities to fulfill this desire.
  24 Subjects: including prealgebra, reading, English, SAT math

- ...When I tutor this subject, I try to relate it to real life settings. I have a lot of experience tutoring SAT math. Just this past year, I tutored three kids for their math SATs.
  13 Subjects: including prealgebra, calculus, geometry, algebra 1
Generating SAT instances from first-order formulas

"Finding countermodels is an effective way of disproving false conjectures. In first-order predicate logic, model finding is an undecidable problem. But if a finite model exists, it can be found by exhaustive search. The finite model generation problem in the first-order logic can also be translated to the satisfiability problem in the propositional logic. But a direct translation may not be very efficient. This paper discusses how to take the symmetries into account so as to make the resulting problem easier. A static method for adding constraints is presented, which can be thought of as an approximation of the least number heuristic (LNH). Also described is a dynamic method, which asks a model searcher like SEM to generate a set of partial models, and then gives each partial model to a propositional prover. The two methods are analyzed, and compared with each other." Cited by 1 (0 self)

Proc. 7th Int'l Conf. on Theory and Applications of Satisfiability Testing, 2004

"The finite model generation problem in the first-order logic is a generalization of the propositional satisfiability (SAT) problem. An essential algorithm for solving the problem is backtracking search. In this paper, we show how to improve such a search procedure by lemma learning. For efficiency reasons, we represent the lemmas by propositional formulas and use a SAT solver to perform the necessary reasoning. We have extended the first-order model generator SEM, combining it with the SAT solver SATO. Experimental results show that the search time may be reduced significantly on many problems." Cited by 1 (1 self)
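To illustrate the kind of translation both abstracts refer to, here is a hedged Python sketch of my own (not from either paper): it emits, in DIMACS CNF, the basic "functionality" constraints saying that a binary operation on a domain of size n assigns exactly one value to each cell, which is the starting point before any symmetry-breaking constraints such as the LNH are added.

```python
from itertools import combinations

def encode_function_table(n):
    """CNF clauses forcing f(a,b) to take exactly one value in 0..n-1.

    Variable v(a,b,c) is true iff f(a,b) = c; numbered from 1 for DIMACS.
    """
    def v(a, b, c):
        return a * n * n + b * n + c + 1

    clauses = []
    for a in range(n):
        for b in range(n):
            clauses.append([v(a, b, c) for c in range(n)])   # >= 1 value
            for c1, c2 in combinations(range(n), 2):         # <= 1 value
                clauses.append([-v(a, b, c1), -v(a, b, c2)])
    return clauses

cnf = encode_function_table(3)
print("p cnf", 3 ** 3, len(cnf))   # DIMACS header: 27 variables
for clause in cnf:
    print(*clause, 0)              # each clause is terminated by 0
```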
Determinant of block matrix

I have a 4n$\times$4n matrix, which can be written as
\begin{pmatrix} 0 & A & B & C \cr D & 0 & E & F \cr G & H & 0 & J \cr K & L & M & 0 \end{pmatrix}
each entry being an n$\times$n matrix with vanishing determinant. Is there a rule for checking if the full matrix has zero determinant? How about the special case
\begin{pmatrix} 0 & A & B & C \cr -A^T & 0 & E & F \cr -B^T & E^T & 0 & J \cr -C^T & F^T & J^T & 0 \end{pmatrix}
still with vanishing determinants for each n$\times$n matrix? (The n is the dimension of an SU group -- I can probably work out the SU(2) or n=3 case by brute force, but I would like to know if there is some method that does not require explicit calculation.) Many thanks in advance for any help or suggestion.

Tags: linear-algebra, matrices

Comments:
- In your special case, do you want minus signs on $E^T$, $F^T$ and $J^T$ as well? – Neil Strickland Jun 8 '11 at 17:04
- No, actually E, F, J are antisymmetric, so $E^T = -E$ etc. (for n=3, which makes the determinant vanish). A, B, C are not antisymmetric, they only have vanishing determinants (one row vanishes). For higher n I am not absolutely certain what I will get in the special case. – Amitabha Lahiri Jun 8 '11 at 17:28
- Just out of curiosity, is there any motivation behind this question? I am not being negative, really just curious. – Vladimir Dotsenko Jun 8 '11 at 17:30
- Yes, I found this problem while trying to count the degrees of freedom in a particular system. – Amitabha Lahiri Jun 8 '11 at 17:41
- If the matrices commuted (perhaps most of the pairs instead of all of them), then you could reduce the problem to the determinant of an n$\times$n matrix product. Or if, e.g., A, B and C were simultaneously diagonalizable, you could then check if, say, the first n rows had full rank. Apart from that, I can only suggest the standard methods without shortcuts. Gerhard "Ask Me About System Design" Paseman, 2011.06.08 – Gerhard Paseman Jun 8 '11 at 17:58

1 Answer

It would be nice if the rule for determinants of $2\times2$ matrices generalized to the case of $2n\times 2n$ matrices: $\det \begin{pmatrix} A & B \cr C & D \end{pmatrix} = \det A \det D - \det B \det C$, but this is sadly not true.

Nonetheless, the familiar Laplace expansion theorem for minors of order $n-1$ does have a generalization to minors of any order, including, in this case, minors of order $2n$ of a $4n \times 4n$ matrix; see http://www.proofwiki.org/wiki/Laplace's_Expansion_Theorem. This might help.

Comments:
- Thanks. I'll have a look. – Amitabha Lahiri Jun 9 '11 at 1:59
- If I can work with 3n$\times$3n minors, whose determinants would be the cofactors for the $n\times n$ matrices along the top $n$ rows, that would be good. The proof does not mention anything about the commutativity of the submatrices. I assume it works even when none of the submatrices commute? – Amitabha Lahiri Jun 9 '11 at 6:52
- @Amitabha: Yes, this works regardless of commutativity, and works for any size minor. – Stopple Jun 9 '11 at 14:19
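A quick numerical sanity check of the answer's first point, sketched in NumPy (my own illustration, not part of the thread): for random blocks, $\det A \det D - \det B \det C$ generically differs from the true determinant of the block matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))

M = np.block([[A, B], [C, D]])
naive = (np.linalg.det(A) * np.linalg.det(D)
         - np.linalg.det(B) * np.linalg.det(C))

print(np.linalg.det(M))   # the actual determinant of the 2n x 2n matrix
print(naive)              # generically different: the 2x2 rule does not lift
```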