content
stringlengths
86
994k
meta
stringlengths
288
619
VLookup 2nd 3rd & 4th occurrence. VLookup 2nd 3rd & 4th occurrence. Author Message Hi All, I am trying to match the first, second, third and fourth occurrence of a time in using VLookup. Here's what I have:- A B C The times in column A range from 0000 to 2330 and are repeated up to 4 The data in column's B,C,D,Etc changes with each occurrence. C1 = what I want to match A1:B6 = My table I have found this in this NG:- Which match's the second occurrence, How do I adapt it to match the 3rd & Or is there an easier way of doing this? I do need to pull all of the data in the matched row from column B through to column IH Many Thanks for you help. Mark :o) Sun, 17 Apr 2005 02:25:40 GMT >I am trying to match the first, second, third and fourth occurrence of a >time in using VLookup. >Here's what I have:- > A B C >1 0800 10 0800 >2 0830 10 >3 0800 11 >4 0830 11 >5 0800 12 >6 0830 12 >The times in column A range from 0000 to 2330 and are repeated up to 4 >times. The data in column's B,C,D,Etc changes with each occurrence. >C1 = what I want to match >A1:B6 = My table >I have found this in this NG:- >Which match's the second occurrence, How do I adapt it to match the 3rd & The formula above can be generalized, but you're better off filtering rather than looking up. Enter the following array formula in cell E1. Fill E1 down until it evaluates #NUM!. If there are M matches of ValToBeMatched in MyTable, then the Nth match (1 <= N <= M, from top to bottom) is given by Public Service Announcement: Don't attach files to postings in this newsgroup. Sun, 17 Apr 2005 03:25:52 GMT I can not get the array you advised of to work As I could not get the array working I continued with the lookup as you said "The formula above can be generalized" I though it could not be too hard, I have extended my table from A1:B6 to A1:B16, same data just more of it. This is what I posted last night and finds the second match So what I though would work is if where "A1" is in this part MATCH($C$1,A1:A16,0), If I inserted another indirect match it would find the second match add 1 to the row and the find the 3rd match. All I get back is #N/A Could you shed any light on where I am going wrong or how you would do it. If you have time would you mind sending a partial sheet where you have it working, this will get rid of any typo's I create :o) Many Thanks it is most appreciated. > >I am trying to match the first, second, third and fourth occurrence of a > >time in using VLookup. > >Here's what I have:- > > A B C > >1 0800 10 0800 > >2 0830 10 > >3 0800 11 > >4 0830 11 > >5 0800 12 > >6 0830 12 > >The times in column A range from 0000 to 2330 and are repeated up to 4 > >times. The data in column's B,C,D,Etc changes with each occurrence. > >C1 = what I want to match > >A1:B6 = My table > >I have found this in this NG:- > >=VLOOKUP($C$1,INDIRECT("A"&MATCH($C$1,A1:A6,0)+1&":B6"),2,FALSE) > >Which match's the second occurrence, How do I adapt it to match the 3rd & > >forth? > ... > The formula above can be generalized, but you're better off filtering > than looking up. Enter the following array formula in cell E1. > =INDEX(MyTable,SMALL(IF(INDEX(MyTable,0,1)=ValToBeMatched, > ROW(MyTable)-CELL("Row",MyTable)+1),ROW()-ROW($E$1)+1),2) > Fill E1 down until it evaluates #NUM!. If there are M matches of > ValToBeMatched in MyTable, then the Nth match (1 <= N <= M, from top to > bottom) is given by > =INDEX(MyTable,SMALL(IF(INDEX(MyTable,0,1)=ValToBeMatched, > ROW(MyTable)-CELL("Row",MyTable)+1),N),2) > -- > Public Service Announcement: > Don't attach files to postings in this newsgroup. Mon, 18 Apr 2005 03:43:04 GMT I can not get the array you advised of to work As I could not get the array working I continued with the lookup as you said "The formula above can be generalized" I though it could not be too hard, I have extended my table from A1:B6 to A1:B16, same data just more of it. This is what I posted last night and finds the second match So what I though would work is if where "A1" is in this part MATCH($C$1,A1:A16,0), If I inserted another indirect match it would find the second match add 1 to the row and the find the 3rd match. All I get back is #N/A Could you shed any light on where I am going wrong or how you would do it. If you have time would you mind sending a partial sheet where you have it working, this will get rid of any typo's I create :o) Many Thanks it is most appreciated. > >I am trying to match the first, second, third and fourth occurrence of a > >time in using VLookup. > >Here's what I have:- > > A B C > >1 0800 10 0800 > >2 0830 10 > >3 0800 11 > >4 0830 11 > >5 0800 12 > >6 0830 12 > >The times in column A range from 0000 to 2330 and are repeated up to 4 > >times. The data in column's B,C,D,Etc changes with each occurrence. > >C1 = what I want to match > >A1:B6 = My table > >I have found this in this NG:- > >=VLOOKUP($C$1,INDIRECT("A"&MATCH($C$1,A1:A6,0)+1&":B6"),2,FALSE) > >Which match's the second occurrence, How do I adapt it to match the 3rd & > >forth? > ... > The formula above can be generalized, but you're better off filtering > than looking up. Enter the following array formula in cell E1. > =INDEX(MyTable,SMALL(IF(INDEX(MyTable,0,1)=ValToBeMatched, > ROW(MyTable)-CELL("Row",MyTable)+1),ROW()-ROW($E$1)+1),2) > Fill E1 down until it evaluates #NUM!. If there are M matches of > ValToBeMatched in MyTable, then the Nth match (1 <= N <= M, from top to > bottom) is given by > =INDEX(MyTable,SMALL(IF(INDEX(MyTable,0,1)=ValToBeMatched, > ROW(MyTable)-CELL("Row",MyTable)+1),N),2) > -- > Public Service Announcement: > Don't attach files to postings in this newsgroup. Mon, 18 Apr 2005 03:43:31 GMT >I can not get the array you advised of to work >As I could not get the array working I continued with the lookup as you >"The formula above can be generalized" I though it could not be too hard, Did you enter the array formula by typing in the formula, holding down [Ctrl] and [Shift] keys then pressing the [Enter] key? What result were you getting that didn't work? Note: I tested my formula on the sample data in your original post, and it works for me. >I have extended my table from A1:B6 to A1:B16, same data just more of it. >This is what I posted last night and finds the second match >So what I though would work is if where "A1" is in this part >MATCH($C$1,A1:A16,0), If I inserted another indirect match it would find >second match add 1 to the row and the find the 3rd match. The problem is that you'd need to do this recursively to find the 3rd and subsequent matches. That is, the 3rd match would have to be found as Your (reformatted) formula is The problem with it is the term If the 1st match were in A3, the INDIRECT function would give the range A4:B16, but the term as a whole would be A4:B16:A16, which is syntactically valid but semantically meaningless. If you change this term to either the from my formula or you'd get the desired result. While this may fix the problem finding the 3rd match, the 4th match requires yet another level of INDIRECT(MATCH()) calls. This approach doesn't scale well given Excel's limit of 7 nested function calls. The filter approach I suggested before does scale reasonably well, but it *MUST* be entered as an array formula. Public Service Announcement: Don't attach files to postings in this newsgroup. Mon, 18 Apr 2005 07:48:04 GMT HI Again :o) I am sure I entered it as an array "crtl + shift & enter" I will try the array again today, "checking for typo's", Another idea I have had is, as there is going to do around 100,000 formula's in this sheet, +-a few, I could get a macro to workout where the next match is and then get the macro to write the formula.. This would keep the size of the workbook small and make changes easier to implement. This is my second option if I can not get the array to work. Mark :o) > ... > >I can not get the array you advised of to work > >As I could not get the array working I continued with the lookup as you > said > >"The formula above can be generalized" I though it could not be too hard, > >Duh! > Did you enter the array formula by typing in the formula, holding down > [Ctrl] and [Shift] keys then pressing the [Enter] key? What result were > getting that didn't work? > Note: I tested my formula on the sample data in your original post, and it > works for me. > >I have extended my table from A1:B6 to A1:B16, same data just more of it. > >This is what I posted last night and finds the second match > >=VLOOKUP($C$1,INDIRECT("A"&MATCH($C$1,A1:A16,0)+1&":B16"),2,FALSE) > >So what I though would work is if where "A1" is in this part > >MATCH($C$1,A1:A16,0), If I inserted another indirect match it would find > the > >second match add 1 to the row and the find the 3rd match. > ... > The problem is that you'd need to do this recursively to find the 3rd and > subsequent matches. That is, the 3rd match would have to be found as > =VLOOKUP($C$1, > INDIRECT("A"& > MATCH($C$1, > INDIRECT("A"& > MATCH($C$1,A1:A16,0)+1 > &":A16"),0)+1 > &":B16"), > 2,FALSE) > Your (reformatted) formula is > =VLOOKUP($C$1, > INDIRECT("A"& > MATCH($C$1, > INDIRECT("A"& > MATCH($C$1,A1:A16,0)+1 > &":B16"):A16,0)+1 > &":B16"), > 2,FALSE) > The problem with it is the term > INDIRECT("A"&MATCH($C$1,A1:A16,0)+1&":B16"):A16 > If the 1st match were in A3, the INDIRECT function would give the range > A4:B16, but the term as a whole would be A4:B16:A16, which is > valid but semantically meaningless. If you change this term to either the > INDIRECT("A"&MATCH($C$1,A1:A16,0)+1&":A16") > from my formula or > INDIRECT("A"&MATCH($C$1,A1:A16,0)+1):A16 > you'd get the desired result. > While this may fix the problem finding the 3rd match, the 4th match > yet another level of INDIRECT(MATCH()) calls. This approach doesn't scale > well given Excel's limit of 7 nested function calls. The filter approach I > suggested before does scale reasonably well, but it *MUST* be entered as > array formula. > -- > Public Service Announcement: > Don't attach files to postings in this newsgroup. Mon, 18 Apr 2005 16:19:24 GMT Relevant Pages 1. Return 1st,2nd,3rd,4th from Date 2. Postions - 1st 2nd 3rd 4th etc 3. Microsof publisher How to print 2nd 3rd and 4th quarter of page 4. Defferences between 3rd and 4th generation language 5. VLOOKUP's 4th argument 6. Replace 2nd occurrence of a value 7. how to: subtracting 4th field in file1.csv from 4th field file2.csv 8. Resolve to 2nd or 3rd email 9. Get 1st, 2nd, and 3rd Letter From a String 10. OFFSET with array 2nd or 3rd argument
{"url":"http://www.office-archive.com/14-excel/9ae17154c6d259ff.htm","timestamp":"2014-04-17T06:52:10Z","content_type":null,"content_length":"25043","record_id":"<urn:uuid:de5690d5-90ae-4bcd-985a-8001e6adac76>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
Sunol Algebra 2 Tutor ...I quickly became the most popular tutor because of my patience, dedication, and instructional ability. My emphasis was and still is on understanding the fundamental concepts as problem solving naturally follows. Teaching and drilling the “how” of problem solving mechanics are important, but rea... 24 Subjects: including algebra 2, chemistry, reading, physics ...I have a MS in Engineering from San Jose State Univ. I have had 10 Years of Math education & have taught 2 very successful kids of my own as well as other kids. I'm committed to helping students develop mathematical thinking rather than 'memorizing all the rules'. 10 Subjects: including algebra 2, geometry, algebra 1, Hindi ...I'm told I make physics easy and fun to learn. Precalculus, in essence, is just a review of algebra 2 with some elaborations and a few extensions and additions (and applications) I've tutored students in almost all bay area school districts and high schools, with their various textbooks, in prec... 9 Subjects: including algebra 2, calculus, physics, geometry ...He helped me to thoroughly understand concepts and catch up in Precalculus. I would definitely recommend his tutoring services to anyone seeking a patient and concise math tutor. I worked for many years professionally with statistics and next to precalculus and calculus it is my most tutored topic. 41 Subjects: including algebra 2, calculus, geometry, statistics ...I've taken and received an A- in both Linear Algebra and Intermediate Linear algebra. It was my favorite class that I took in my Math major, and I would feel very comfortable tutoring it. I tutored this subject informally with peers in the math center on campus. 35 Subjects: including algebra 2, reading, calculus, geometry
{"url":"http://www.purplemath.com/Sunol_Algebra_2_tutors.php","timestamp":"2014-04-18T21:54:29Z","content_type":null,"content_length":"23690","record_id":"<urn:uuid:8b432c61-1fd4-43c3-80a3-e3760702e3ba>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: 2SLS with random effects correcting for autocorrelation [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: 2SLS with random effects correcting for autocorrelation From Helene Ehrhart <Helene.Ehrhart@u-clermont1.fr> To statalist@hsphsun2.harvard.edu Subject Re: st: 2SLS with random effects correcting for autocorrelation Date Thu, 26 Mar 2009 10:19:38 +0100 Thank you Nicola for these helpful suggestions. I'll try them! Quoting nicola.baldini2@unibo.it: No suggestions that 100% fit your problem. Among second best solutions, you may leave the random effects. In such a case, an uncommon choice is to try the -fmivreg- from http://www.antonisureda.com/other/stuff/files/ which uses the Fama and MacBeth (1973) two step procedure, with instrumental variables and Newey-West standard errors. In the first step, for each single time period a cross-sectional regression is performed. Then, in the second step, the final coefficient estimates are obtained as the average of the first step coefficient estimates. Fama, Eugene F., and James D. MacBeth, 1973. Risk, Return, and Equilibrium: Empirical tests, Journal of Political Economy 81, 607-636. Other options (not involving the fixed effects) are first differencing (-xitvreg2- and -xtabond2-) or cross-sectional commands (-ivreg2-, -newey2-), all of them are available from SSC. P.S. I'll NOT receive/read any email but the Digest. At 02.33 21/03/2009 -0400, Helene Ehrhart wrote: Dear all, I would like to estimate an equation with instrumental variables using random effects and correcting for autocorrelation. I already tried many ways to do that in stata but no command was able to meet all the three requirements : - - xtivreg , re does not allow for autocorrelation correction - - xtivreg2 does not allow for random effects - - xtdata to transform data so that it corresponds to random effects and then estimation with ivreg28 but then the option bw(1) is not possible since the data are transformed - - estimating the first stage with xtreg ,re ; taking the predicted dependant value and using it as regressors for the 2nd stage using xtregar ,re which corrects for autocorrelation. This works but the standard errors should then be corrected by bootstrapping to correct the bias of using a predicted value as regressor. Unfortunately, traditional bootstrap using a random sample does not maintain the autocorrelation structure. So this method should also be eliminated. If anyone already faced this problem, estimating in 2SLS with random effects correcting for autocorrelation, I would really appreciate to know what is the proper way to do that in Stata. Hélène Ehrhart. * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2009-03/msg01366.html","timestamp":"2014-04-20T17:01:53Z","content_type":null,"content_length":"8627","record_id":"<urn:uuid:c8b9ff73-3af1-44af-b22a-b09c9f72f9d1>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] 4x^23+769=347389 March 13th 2008, 03:33 PM #1 Mar 2008 [SOLVED] 4x^23+769=347389 Hi i have this problem.... Well this is a made up problem so i can attempt to get the real one my self.. I feel sort of dumb because i don't know this one... My problem is- I don't know what to do with the ^23... how do i get it over the equals..... IF that is what you do....... 4x= ????? x= what ever ????/4 $4x^{23} = 646620$ You divide both sides by 4 FIRST. As with any algebra problem, you try to isolate x as much as possible before performing whatever operation you need. $x^{23} = 161555$ Now, using your calculator you would take the 23rd root of both sides to get rid of the exponent just like you would take the square root of both sides if the x was squared. $\sqrt[23]{x^{23}} = \sqrt[23]{161555}$ $x \approx 1.684$ You would need to know how to use your calculator to use that function. If not, you can always use the relation: $a^{\frac{m}{n}} = \sqrt[n]{a^{m}} = \left(\sqrt[n]{a}\right)^{m}$ So going back to $x^{23} = 161555$ , raise both sides by $\frac{1}{23}$: $\left(x^{23}\right)^{\frac{1}{23}} = 161555^{\frac{1}{23}}$ That may be easier to punch in your calculator? Thank you! We are not allowed to use cal....... March 13th 2008, 05:02 PM #2 March 14th 2008, 11:39 AM #3 Mar 2008
{"url":"http://mathhelpforum.com/algebra/30940-solved-4x-23-769-347389-a.html","timestamp":"2014-04-18T10:10:11Z","content_type":null,"content_length":"32779","record_id":"<urn:uuid:0952e425-d275-466b-abee-1263be0d5381>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
finding an orthonormal basis April 22nd 2008, 04:31 PM finding an orthonormal basis find an orthonormal basis of polynomials of degree 2 over the real space, with inner product <p,q> = integral from (0,1) of p(x)q(x)dx. if you could direct me as to how to insert an integral symbol that would be great as well! April 22nd 2008, 09:53 PM You start with a known basis say $\{1, x, x^2\}$ then apply the Gramm-Schmidt process to generate an orthogonal basis, and then normalise the new basis. (To typeset mathematics this site uses LaTeX, see the tutorial here) April 22nd 2008, 10:11 PM can you explain the normalization of the orthonormal basis? i have found the gramm-schmidt applied to the 1, x, and x^2. what must i do to normalize? ps. you're a baller. April 22nd 2008, 10:19 PM For any vector u, the normalised vector e is given by: $\mathbf{e} = {\mathbf{u}\over \|\mathbf{u}\|}$ April 22nd 2008, 10:23 PM $u_1=1$ the constant function $u_2=x - \frac{\langle x, u_1 \rangle}{\langle u_1, u_1 \rangle}u_1$ where $\langle u_1, u_1 \rangle=\int_0^1 1 ~dx =1$, and $\langle x, u_1 \rangle=\int_0^1 x ~dx =1/2$, so: $<br /> u_2=x-1/2<br />$ Now repeat to find $u_3$: $u_2=x^2 - \frac{\langle x^2, u_1 \rangle}{\langle u_1, u_1 \rangle}u_1- \frac{\langle x^2, u_2 \rangle}{\langle u_2, u_2 \rangle}u_2$ $\{u_1, u_2, u_3 \}$ is an orthogonal basis, you now have to normalise them to get your orthonormal basis: $\{e_1, e_2, e_3 \}$, where: $e_i=\frac{u_i}{\langle u_i, u_i \rangle^{1/2}}$
{"url":"http://mathhelpforum.com/advanced-algebra/35596-finding-orthonormal-basis-print.html","timestamp":"2014-04-18T10:54:21Z","content_type":null,"content_length":"10069","record_id":"<urn:uuid:9c93122b-dfc9-44db-9d61-c22946d53a0d>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
another ln question, just need to know if this is correct May 28th 2011, 01:16 PM #1 Jan 2011 another ln question, just need to know if this is correct I have a problem in which I got x=e^-3 f(x) = x (ln x)^3 this would mean e^-3 ( -3)^3 giving -27/e^3? May 28th 2011, 01:21 PM #2
{"url":"http://mathhelpforum.com/calculus/181878-another-ln-question-just-need-know-if-correct.html","timestamp":"2014-04-16T11:10:02Z","content_type":null,"content_length":"33360","record_id":"<urn:uuid:df3ffd72-7835-4e81-baa3-5e7b13f21af6>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: July 1998 [00180] [Date Index] [Thread Index] [Author Index] Re: Can it be done - easily? • To: mathgroup at smc.vnet.net • Subject: [mg13279] Re: [mg13211] Can it be done - easily? • From: Wouter Meeussen <eu000949 at pophost.eunet.be> • Date: Fri, 17 Jul 1998 03:18:25 -0400 • Sender: owner-wri-mathgroup at wolfram.com you need the PolyGamma function : if you don't have them on your home, kitchen & garden calculator (;-)), then try In[ ]:=N[EulerGamma,24] Out[ ]=0.57721566490153286060651 and do a taylor series 'round infinity: In[27]:=ser=Series[PolyGamma[0,x],{x,\[Infinity],12}]//Normal Out[27]= 691/(32760*x^12) - 1/(132*x^10) + 1/(240*x^8) - 1/(252*x^6) + 1/(120*x^4) - 1/(12*x^2) - 1/(2*x) - Log[1/x] Check the quality of the approximation by comparing values between 2^1 and 2^12 : looks good enough to me. At 07:42 13-07-98 -0400, Barry Culhane wrote: >Myself and two workmates are software developers. One guy wanted a >formula to calculate a result for the following equation... > Z = sum of X/Y where X is a fixed number, and Y ranges from A-B in >fixed steps... > i.e... X=10000 ; Y=100,200,300...1000 > i.e... Z= 10000/100 + 10000/200 + ... 10000/1000 = 292.896 >He and I tried to figure out a simple formula to calculate it, but >couldn't. The third guy said it was *not* *possible* to derive a >formula - we think he's wrong, but can't prove it. MathCad can solve >it in the blink of an eye, even if the value of Y ranges from 1 to 1e6 >in steps of 1 !!! >Can anyone come up with a simple formula to give a reasonably accurate >result? It is too slow to actually divide X by Y for each value of Y >as there may be 1000 or even 100,000 values of Y. >Thanks in advance... >> Barry Culhane >> Schaffner Ltd, Limerick, IRELAND Dr. Wouter L. J. MEEUSSEN w.meeussen.vdmcc at vandemoortele.be eu000949 at pophost.eunet.be
{"url":"http://forums.wolfram.com/mathgroup/archive/1998/Jul/msg00180.html","timestamp":"2014-04-16T10:21:22Z","content_type":null,"content_length":"36182","record_id":"<urn:uuid:f9206607-b2b9-4d1b-94ad-33f0b4e2478a>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
East Somerville, MA Math Tutor Find an East Somerville, MA Math Tutor ...From these perspectives I can help you grasp case studies and paper topics at a more fundamental level. I've been training public speakers for in-person, on radio, and on television occasions for decades. I especially value helping speakers master speaking in sound bites and reviewing with them videos of their speaking. 55 Subjects: including trigonometry, ACT Math, reading, algebra 1 ...I graduated from Tufts University with an undergraduate degree in Biopsychology and from Stony Brook University with a Master's degree in Physiology and Biophysics. I have extensive experience in tutoring high school math (algebra, trigonometry, pre-calculus, calculus) and science (biology, chem... 10 Subjects: including algebra 1, algebra 2, biology, chemistry ...I look at not only the current lesson, but also help students to relate prior learned concepts to the current lesson and help them anticipate where the current lesson will be applied to future concepts. For over 20 years I've effectively instructed and led people with Fortune 500 organizations T... 23 Subjects: including calculus, Microsoft Excel, Microsoft Word, Microsoft PowerPoint ...I recently (03/2013) passed the Massachusetts Test for Educator Licensing (MTEL) subject 09 test (which covers the standard math curriculum from grades 8 - 12, including trigonometry) with the maximum scores in each category. Physical Science is often the first high school level science class st... 12 Subjects: including linear algebra, algebra 1, algebra 2, calculus ...I went Johns Hopkins University for Undergrad, where I double majored in Economics and Psychology with a Minor in Business. After working for 5 years in Finance, I decided I needed a career change. I am currently student teaching Mathematics in a small private school while finishing up my Maste... 27 Subjects: including calculus, elementary (k-6th), geometry, physics Related East Somerville, MA Tutors East Somerville, MA Accounting Tutors East Somerville, MA ACT Tutors East Somerville, MA Algebra Tutors East Somerville, MA Algebra 2 Tutors East Somerville, MA Calculus Tutors East Somerville, MA Geometry Tutors East Somerville, MA Math Tutors East Somerville, MA Prealgebra Tutors East Somerville, MA Precalculus Tutors East Somerville, MA SAT Tutors East Somerville, MA SAT Math Tutors East Somerville, MA Science Tutors East Somerville, MA Statistics Tutors East Somerville, MA Trigonometry Tutors Nearby Cities With Math Tutor Beachmont, MA Math Tutors Cambridgeport, MA Math Tutors Charlestown, MA Math Tutors East Milton, MA Math Tutors East Watertown, MA Math Tutors Grove Hall, MA Math Tutors Kendall Square, MA Math Tutors Kenmore, MA Math Tutors Reservoir, MS Math Tutors Somerville, MA Math Tutors South Waltham, MA Math Tutors Squantum, MA Math Tutors West Lynn, MA Math Tutors West Somerville, MA Math Tutors Winter Hill, MA Math Tutors
{"url":"http://www.purplemath.com/east_somerville_ma_math_tutors.php","timestamp":"2014-04-19T15:16:37Z","content_type":null,"content_length":"24383","record_id":"<urn:uuid:963d107f-ee90-4597-bba4-721470a32cf9>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
h partitioning Results 1 - 10 of 65 - Experience , 1994 "... Unstructured meshes are used in many large-scale scientific and engineering problems, including finite-volume methods for computational fluid dynamics and finite-element methods for structural analysis. If unstructured problems such as these are to be solved on distributed-memory parallel computers, ..." Cited by 284 (7 self) Add to MetaCart Unstructured meshes are used in many large-scale scientific and engineering problems, including finite-volume methods for computational fluid dynamics and finite-element methods for structural analysis. If unstructured problems such as these are to be solved on distributed-memory parallel computers, their data structures must be partitioned and distributed across processors; if they are to be solved efficiently, the partitioning must maximize load balance and minimize interprocessor communication. Recently the recursive spectral bisection method (RSB) has been shown to be very effective for such partitioning problems compared to alternative methods. Unfortunately, RSB in its simplest form is rather expensive. In this report we shall describe a multilevel implementation of RSB that can attain about an order-of-magnitude improvement in run time on typical examples. Keywords: graph partitioning, domain decomposition, MIMD machines, multilevel algorithm, spectral bisection, sp... , 1995 "... this paper is organized as follows: Section 2 briefly describes the various ideas and algorithms implemented in METIS. Section 3 describes the user interface to the METIS graph partitioning and sparse matrix ordering packages. Sections 4 and 5 describe the formats of the input and output files used ..." Cited by 122 (5 self) Add to MetaCart this paper is organized as follows: Section 2 briefly describes the various ideas and algorithms implemented in METIS. Section 3 describes the user interface to the METIS graph partitioning and sparse matrix ordering packages. Sections 4 and 5 describe the formats of the input and output files used by METIS. Section 6 describes the stand-alone library that implements the various algorithms implemented in METIS. Section 7 describes the system requirements for the METIS package. Appendix A describes and compares various graph partitioning algorithms that are extensively used. , 1999 "... For twenty years, it has been clear that many datasets are excessively complex for applications such as real-time display, and that techniques for controlling the level of detail of models are crucial. More recently, there has been considerable interest in techniques for the automatic simplificati ..." Cited by 118 (7 self) Add to MetaCart For twenty years, it has been clear that many datasets are excessively complex for applications such as real-time display, and that techniques for controlling the level of detail of models are crucial. More recently, there has been considerable interest in techniques for the automatic simplification of highly detailed polygonal models into faithful approximations using fewer polygons. Several effective techniques for the automatic simplification of polygonal models have been developed in recent years. This report begins with a survey of the most notable available algorithms. Iterative edge contraction algorithms are of particular interest because they induce a certain hierarchical structure on the surface. An overview of this hierarchical structure is presented,including a formulation relating it to minimum spanning tree construction algorithms. Finally, we will consider the most significant directions in which existing simplification methods can be improved, and a summary of o... "... We investigate a method of dividing an irregular mesh into equal-sized pieces with few interconnecting edges. The method’s novel feature is that it exploits the geometric coordinates of the mesh vertices. It is based on theoretical work of Miller, Teng, Thurston, and Vavasis, who showed that certain ..." Cited by 102 (19 self) Add to MetaCart We investigate a method of dividing an irregular mesh into equal-sized pieces with few interconnecting edges. The method’s novel feature is that it exploits the geometric coordinates of the mesh vertices. It is based on theoretical work of Miller, Teng, Thurston, and Vavasis, who showed that certain classes of “well-shaped” finite element meshes have good separators. The geometric method is quite simple to implement: we describe a Matlab code for it in some detail. The method is also quite efficient and effective: we compare it with some other methods, including spectral bisection. , 1995 "... Recently, a number of researchers have investigated a class of algorithms that are based on multilevel graph partitioning that have moderate computational complexity, and provide excellent graph partitions. However, there exists little theoretical analysis that could explain the ability of multileve ..." Cited by 90 (14 self) Add to MetaCart Recently, a number of researchers have investigated a class of algorithms that are based on multilevel graph partitioning that have moderate computational complexity, and provide excellent graph partitions. However, there exists little theoretical analysis that could explain the ability of multilevel algorithms to produce good partitions. In this paper we present such an analysis. We show under certain reasonable assumptions that even if no refinement is used in the uncoarsening phase, a good bisection of the coarser graph is worse than a good bisection of the finer graph by at most a small factor. We also show that the size of a good vertex-separator of the coarse graph projected to the finer graph (without performing refinement in the uncoarsening phase) is higher than the size of a good vertexseparator of the finer graph by at most a small factor. - SIAM J. Sci. Comput , 1995 "... . The most commonly used p-way partitioning method is recursive bisection (RB). It first divides a graph or a mesh into two equal sized pieces, by a "good" bisection algorithm, and then recursively divides the two pieces. Ideally, we would like to use an optimal bisection algorithm. Because the opti ..." Cited by 84 (4 self) Add to MetaCart . The most commonly used p-way partitioning method is recursive bisection (RB). It first divides a graph or a mesh into two equal sized pieces, by a "good" bisection algorithm, and then recursively divides the two pieces. Ideally, we would like to use an optimal bisection algorithm. Because the optimal bisection problem, that partitions a graph into two equal sized subgraphs to minimize the number of edges cut, is NP-complete, practical RB algorithms use more efficient heuristics in place of an optimal bisection algorithm. Most such heuristics are designed to find the best possible bisection within allowed time. We show that the recursive bisection method, even when an optimal bisection algorithm is assumed, may produce a p-way partition that is very far way from the optimal one. Our negative result is complemented by two positive ones: First we show that for some important classes of graphs that occur in practical applications, such as well-shaped finite element and finite difference... - J. ACM , 1997 "... Abstract. A collection of n balls in d dimensions forms a k-ply system if no point in the space is covered by more than k balls. We show that for every k-ply system �, there is a sphere S that intersects at most O(k 1/d n 1�1/d) balls of � and divides the remainder of � into two parts: those in the ..." Cited by 74 (7 self) Add to MetaCart Abstract. A collection of n balls in d dimensions forms a k-ply system if no point in the space is covered by more than k balls. We show that for every k-ply system �, there is a sphere S that intersects at most O(k 1/d n 1�1/d) balls of � and divides the remainder of � into two parts: those in the interior and those in the exterior of the sphere S, respectively, so that the larger part contains at most (1 � 1/(d � 2))n balls. This bound of O(k 1/d n 1�1/d) is the best possible in both n and k. We also present a simple randomized algorithm to find such a sphere in O(n) time. Our result implies that every k-nearest neighbor graphs of n points in d dimensions has a separator of size O(k 1/d n 1�1/d). In conjunction with a result of Koebe that every triangulated planar graph is isomorphic to the intersection graph of a disk-packing, our result not only gives a new geometric proof of the planar separator theorem of Lipton and Tarjan, but also generalizes it to higher dimensions. The separator algorithm can be used for point location and geometric divide and conquer in a fixed dimensional space.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=725194","timestamp":"2014-04-20T20:01:23Z","content_type":null,"content_length":"35579","record_id":"<urn:uuid:41d16e88-d0ab-460b-baac-db89c5208d72>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00441-ip-10-147-4-33.ec2.internal.warc.gz"}
Major changes to the forecast package August 25, 2011 By Rob J Hyndman The forecast package for R has undergone a major upgrade, and I’ve given it version number 3 as a result. Some of these changes were suggestions from the forecasting workshop I ran in Switzerland a couple of months ago, and some have been on the drawing board for a long time. Here are the main changes in version 3, plus a few earlier additions that I thought deserved a mention. Box-Cox transformations A Box-Cox transformation can now be incorporated into the forecasting model when calling Arima(), auto.arima(), ets(), arfima(), tslm(), stlf(), rwf(), meanf(), splinef(). For example: fit <- Arima(lynx, order=c(2,0,0), lambda=0.5) The model is based on the transformed data, forecasts are calculated, and then the forecasts and prediction intervals are back-transformed. The point forecasts can be interpreted as medians after If the transformation is done outside the fitting function, the forecasts can still be back-transformed. For example: fit <- ar(BoxCox(lynx,0.5)) Back-transforming forecasts like this is now available in forecast.Arima(), forecast.ets(), forecast.fracdiff(), forecast.ar(), forecast.StructTS, forecast.HoltWinters(). I have also added a function for automatically choosing the Box-Cox parameter using either Guerrero’s (1993) method or the profile log likelihood method. For example: fit <- Arima(lynx, order=c(2,0,0), lambda=BoxCox.lambda(lynx)) Note that previously there was a lambda argument in the plot.forecast() function. This is no longer available (and so some old code may break). Instead, back-transform the forecasts within the forecast() function. Improved auto.arima() The auto.arima() function is widely used for automatically selecting ARIMA models. It works quite well, except that selection of A separate function for selecting the seasonal order has also been made visible. So you can now call nsdiffs() to find the recommended number of seasonal differences without calling auto.arima(). There is also a ndiffs() function for selecting the number of first differences. Within auto.arima(), nsdiffs() is called first to select ndiffs() is applied to diff(x,D) if x if Double-seasonal Holt-Winters The new dshw() function implements Taylor’s (2003) double-seasonal Holt-Winters method. This allows for two levels of seasonality. For example, with hourly data, there is often a daily period of 24 and a weekly period of 168. These are modelled separately in the dshw() function. I am planning some major new functionality to extend this to the various types of complex seasonality discussed in my recent JASA paper. Hopefully that will be ready in the next few weeks — I have a research assistant working on the new code. Sub-setting time series Occasionally you want to extract all the Novembers in a monthly time series (or something similar), but this has been fiddly to do up to now. So I’ve included a new function subset.ts. For example: Acf() and Pacf() These were actually added in v2.19 but I’ve not mentioned them anywhere so I thought it would be useful to say something here. The acf() function always includes a spike of length 1 at lag 0. This is pointless because the ACF at lag 0 is 1 by definition. It is also annoying because it forces the scale of the y-axis to include 1 which can obscure smaller correlations that might be of interest. The Acf() function works in the same way as the acf() function except that it omits lag 0. The Pacf() function is included for consistency only — it returns the same object and produces the same plot as Time series linear models Another recent addition, but not new in v3, is the tslm() function for handling linear models for time series. This works in the same way as lm() except that the time series characteristics of the data are preserved in the residuals and fitted values. Also, the variables trend and seasonal can be used without needing to be defined. For example: y <- ts(rnorm(120,0,3) + 20*sin(2*pi*(1:120)/12), frequency=12) fit1 <- tslm(y ~ trend + season) trend takes values y, and season is a matrix of seasonal dummy variables. The CV() function is another addition from earlier in the year. It implements the cross-validation statistic, AIC, corrected AIC, BIC and adjusted R^2 values for a linear model. For example: y <- ts(rnorm(120,0,3) + 20*sin(2*pi*(1:120)/12), frequency=12) fit1 <- tslm(y ~ trend + season) fit2 <- tslm(y ~ season) CV works with any lm objects including those produced by tslm() and lm(). Forecasting with STL This functionality was added earlier in the year, but it is so cool I wanted to mention it here. STL is a great method for decomposing time series into trend, seasonal and irregular components. It is robust and handles time series of any frequency. To forecast with STL, you seasonally adjust the data by subtracting the seasonal component, then forecast the seasonally adjusted data using a non-seasonal ARIMA or ETS model, then re-seasonalize the forecasts by adding back in the most recent values of the seasonal component (effectively using a seasonal naive forecast for the seasonal component). The whole procedure is handled effortlessly as fit <- stl(USAccDeaths,s.window="periodic") There is also a new function stlf() which does the STL decomposition as well as the forecasting in one step. plot(stlf(AirPassengers, lambda=BoxCox.lambda(AirPassengers))) STL decompositions are always additive, but the inclusion of the Box-Cox parameter as shown here allows non-additive decompositions as well. For data with high seasonal period (such as weekly data, hourly data, etc.), forecasting with STL is often the simplest approach. It also works amazingly well on a wide range of series. If you apply the forecast() function to a time series (rather than a time series model), the forecasts returned will be from stlf() when the seasonal period is 13 or more. Other changes A list of all changes to the forecast package is maintained in the ChangeLog. With so many changes and new functions, I’ve probably introduced new bugs. Please let me know if you find any problems and I’ll endeavour to fix anything ASAP. (Make sure you get v3.02 or later as v3.00 had some bugs.) for the author, please follow the link and comment on his blog: Research tips » R daily e-mail updates news and on topics such as: visualization ( ), programming ( Web Scraping ) statistics ( time series ) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: , or
{"url":"http://www.r-bloggers.com/major-changes-to-the-forecast-package/","timestamp":"2014-04-21T14:58:38Z","content_type":null,"content_length":"53615","record_id":"<urn:uuid:c2a11147-b6ea-4936-a356-fd79998d0ecd>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: How can I specify counters for matrices? Replies: 1 Last Post: Dec 7, 2012 11:26 AM Messages: [ Previous | Next ] Curious Re: How can I specify counters for matrices? Posted: Dec 7, 2012 11:26 AM Posts: 1,925 Registered: 12/6/04 "panagiotis " <pkoniavitis@yahoo.com> wrote in message <k9t1f8$50g$1@newscl01ah.mathworks.com>... > Hallo. > I want to give counters for matrices. I tried things like > for i=1:3 > matr 'num2str(i)' =zeros(6,6); > end > but did not work. > Let's say i want to create 3 matrices matr1 , matr2 & matr3 all 6x6 points. > If anyone knows or has an idea , is welcome. > Panagiotis DON'T DO THIS!!! See Q3.6 of the MATLAB FAQ at: for reasons why this is a VERY bad idea and alternatives (and how to do it if you REALLY must). The bottom line is to use cell arrays. Specificaly, see Date Subject Author 12/7/12 How can I specify counters for matrices? panagiotis 12/7/12 Re: How can I specify counters for matrices? Curious
{"url":"http://mathforum.org/kb/message.jspa?messageID=7933788","timestamp":"2014-04-20T03:52:32Z","content_type":null,"content_length":"18009","record_id":"<urn:uuid:45b16cb4-7d7c-4072-be4e-8a0b2c882c52>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
MATH 3703 Geometry for P-8 Teachers This test covers sections 11.2, 11.3, 11.4, & 11.5. It consists of 20 questions. ● Problems which involve finding missing measures of right triangles ● Conversion problems ● Area problems ● Surface area problems ● Volume problems ● Application problems How will you prepare for this test? Keep the following things in mind. 1. Be able to use area formulas to find the area of “normal” figures (such as a square) and “combination” figures (such as a semi-circle attached to a rectangle). 2. You should be able to find the area of a figure that is placed on a geoboard. 3. You should be able to make conversions among square units (area), cubic units (volume), units of capacity, units of mass, and temperature. 4. You should be able to recognize when to use the Pythagorean Theorem, 30-60-90 rules, and 45-45-90 rules. These problems may be embedded in word problems. 5. You should be able to state whether a triangle is a right triangle or not based on the converse of the Pythagorean Theorem. 6. You should be able to find the surface are and volume of “normal” solids and “combination” solids. The formulas will be provided but you must understand them so as to apply them. 7. Test problems are many times similar to those done in class or those assigned for homework. In addition, by doing the homework problems you are better prepared to handle new and unfamiliar problems that may appear on the test. Here is a list of the concepts we have covered in this chapter. Sec. 11.2 Area on a geoboard, converting square units, area formulas (rectangle, parallelogram, triangle, trapezoid, regular polygon, circle) Sec. 11.3 Pythagorean Theorem, Converse of the Pythagorean Theorem, 45-45-90 rules, 30-60-90 rules Sec. 11.4 Surface area (right prisms, cylinder, pyramid, cone, sphere) Sec. 11.5 Volume (prism, cylinder, pyramid, cone, sphere), converting cubic units, converting units of capacity, converting units of mass, converting temperature
{"url":"http://www.westga.edu/~srivera/3703/MATH%203703-rev11.htm","timestamp":"2014-04-25T01:13:49Z","content_type":null,"content_length":"10045","record_id":"<urn:uuid:e4ef748e-8ea8-40e1-aedb-0763713ca724>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
First Vancouver Meeting on Probability August 19-28, 1997 University of British Columbia, Vancouver, B.C., Canada Organizers: Martin Barlow and Edwin Perkins Financial Support: The Fields Institute and the CRM The format of this summer school will be similar to the Durham symposia. The school will be centred around six short courses, each of four lectures. The six main speakers are: • R. T. Durrett (Cornell) • H. Föllmer (Berlin) • T. Kurtz (Wisconsin) • J-F. Le Gall (Paris VI) • C. M. Newman (N.Y.U.) • D. W. Stroock (M.I.T.) The remainder of the time will be occupied by informal discussions and talks by participants. (Note: There will be a year of activity in probability at the Mathematical Sciences Research Institute (MSRI, Berkeley, California) in the academic year 1997-1998.) Tuesday, 19 August 1997 • 9:30-10:30, Tom Kurtz, Infinite Systems of Stochastic Differential Equations I • 10:30-11:00, Coffee • 11:00-12:00, Hans Föllmer, Probabilistic Problems in Finance I • 12:00-14:00, LUNCH • 14:00-15:00, Jean-François Le Gall, Superprocesses, Markov Snakes and Partial Differential Equations I 20-22 August 1997 • Lectures II-IV of the series indicated above (Tuesday the 19^th) Saturday and Sunday, 23-24 August 1997 Monday, 25 August 1997 • 9:30-10:30, Dan Stroock, Applications of Analysis to Pathspace I • 10:30-11:00, Coffee • 11:00-12:00, Charles Newman, Random Geometry of First Passage Percolation I • 12:00-14:00, LUNCH • 14:00-15:00, Rick Durrett, Stochastic Spatial Models I 26-28 August 1997 • Lectures II-IV of the series indicated above (Monday the 25^th) Other Sessions A limited number of contributed lectures and/or special sessions will be organized in the late afternoon. Please send abstracts, before April 30, to: • Tina Tang by e-mail; or • to Tina Tang by fax, at 604-822-0883; or • to Ed Perkins by mail, at Dept. of Mathematics, UBC, Vancouver, BC, V6T 1Z2.
{"url":"http://www.fields.utoronto.ca/programs/scientific/97-98/probability/","timestamp":"2014-04-19T12:18:03Z","content_type":null,"content_length":"3274","record_id":"<urn:uuid:6c656d19-70d5-4eb2-84b8-dd44e0b11953>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
Intermediate Algebra Instructor Information Name: Elizabeth Drake E-mail: elizabeth.drake@sfcollege.edu Office Location: Northwest Campus Office Number: N-215 Phone: (352) 381-3829 Office Hours: See Instructor's website Course Information Course Intermediate Algebra Number and MAT1033.022 MW 2-3:15 Meeting MAT1033.040 TH 2-3:15 Course The course includes the study of quadratic equations; rational exponents and their properties; radicals; rational expressions and equations; factoring (review); graphing linear and Description quadratic functions and interpreting graphs; solving systems of linear equations and inequalities; and applications. This course is a prerequisite for MAC1105, MGF1106, MGF1107, MGF1121 & STA2023, and as well as other science, nursing, and business courses. This course carries only elective credit. The general education requirements in Mathematics for the AA degree stipulate six hours with a grade of C or better, at least three of which must be in Group A. A course grade of D or D+ will allow the course to count as an elective, but neither Gordon Rule nor General Education credit will be given. Group A: MAC1105, MAC1114, MAC1140, MAC2233, MAC2311, MAC2312, MAC2313, MAP2302, MGF1107 Group B: MGF1106, PHI1100, STA2023 Prerequisite MAT0024 or MAT0020 or its equivalent (CPT Elementary Algebra score >= 72) Textbook (required) / Materials Intermediate Algebra w/Applications & Visualization (3rd ed.) (with “Hints on the Web” booklet and code for WebTutor) Author: Rockswold and Krieger Publisher: Pearson, Addison/Wesley ISBN: 0-321-56682-3 (Traditional PLUS MyMathLab) ISBN: 978-0-321-65251-5 (Student access kit ONLY for MyMathLab) A graphing calculator will be required for this course. Either the TI-83 or the TI-84 lines of calculators are recommended. The TI-84 plus will be used in class for demonstrations, and directions for the TI-84 are provided, so it is the recommended model. The directions for the TI-83 are almost identical, so it is also acceptable. If you have any calculator besides the TI-83 or TI-84, regular, plus, or silver editions, please check with your instructor. You may use your calculator on all assignments unless directed not to. Below are a few calculator help links to get you started. Well-equipped computers are available in the Math Lab, G-014. For any online course you are expected to have access to a working computer with a fast Internet connection. Open Campus: Angel Go to the Santa Fe homepage, click Open Campus, and follow the login instructions (you can also access open campus through your esantafe account). We will be using the following items: 1. Lessons (tab at top): click to find handouts (syllabus, practice tests, answer keys, etc) 2. Announcements: if any changes are made, posted here 3. Discussion Forums: post additional questions and answers, set up study groups with your classmates 4. Class Mail: use to email your classmates or your instructor 5. Calendar: after each class I will post what we covered and what you are expected to do before the next class. Course Evaluation To pass this class, you must successfully satisfy each of the following five requirements: 1. Attendance: Attendance will be taken daily. Your Attendance grade is based on days attended divided by total days of class time (a simple percentage). 2. Homework: You will be assigned homework on MyMathLab that corresponds to homework exercises in the textbook. You have an unlimited number of attempts at the MyMathLab homeworks, within a window of opportunity. Your Homework grades will be recorded at the end of this window of opportunity. However, Homework on MyMathLab will be available for you throughout the semester so you can continue to practice, even after the grade has been recorded. Working together on homework assignments is encouraged. Any changes to the homework schedule will be announced each class and posted on Angel; it is your responsibility to get the assignment each day. 3. Tests: Three unit tests will be given during the semester. The questions on the tests will be similar to those assigned for Homework. There are no makeups but you may replace a low or missing test score with your score on the Common Final Exam. The test dates are listed on the tentative schedule but may be changed if necessary. Before each test, you may prepare your own study guide for 5 bonus points on that test. A study guide is a complete, organized summary of your notes on the material to be covered on the test. I will be looking for formulas, definitions, explanations, and any how-to steps. The study guide must be handed in on the day of the test. Keep your study guides! They will provide you with a summarized version of the material that you can use to study for the final. You may not turn in your original class notes as a study guide. 4. Final Exam: The final exam will be on the date and at the time regularly scheduled for this class. See http://www.sfcollege.edu/information/finals.php for more information. The final exam is cumulative and everyone must take it. The final cannot be dropped or replaced, but may replace your lowest test grade. This is not a state exam and you do not need a certain score on the final to move on. The final is 25% of course grade. Your grade will be determined as follows: ┌───────────────────────────────────────────┐ ┌───────────────────┐ │ Grading Composition │ │ Grading Scale │ ├────────────────────────────────────┬──────┤ ├──────────────┬────┤ │ Attendance │ 10% │ │ 90.0 - 100.0 │ A │ ├────────────────────────────────────┼──────┤ ├──────────────┼────┤ │ Homework │ 25% │ │ 87.0 - 89.9 │ B+ │ ├────────────────────────────────────┼──────┤ ├──────────────┼────┤ │ Tests (including instructor final) │ 40% │ │ 80.0 - 86.9 │ B │ Grading ├────────────────────────────────────┼──────┤ ├──────────────┼────┤ │ Final Exam │ 25% │ │ 77.0 - 79.9 │ C+ │ ├────────────────────────────────────┼──────┤ ├──────────────┼────┤ │ Total │ 100% │ │ 70.0 - 76.9 │ C │ └────────────────────────────────────┴──────┘ ├──────────────┼────┤ │ 67.0 - 69.9 │ D+ │ │ 60.0 - 66.9 │ D │ │ 00.0 - 59.9 │ F │ Attendance Attendance (in an online class, this means logging in, responding to emails and posting messages on the Discussion Board) is important not only to your success but to the success of others in this course. Work together collaboratively to maximize your potential for success. Be prepared to help others when they have need and/or to receive help from others when you have need. │ Test │ Sections │ │ 1 │ 1.1 - 1.5, 2.1 - 2.4 │ Tests │ 2 │ 3.1 - 3.4, 4.1 - 4.3, 5.1 - 5.7 │ │ 3 │6.2 - 6.5, 7.1 - 7.3, 7.5, 8.1, 8.3, 8.4 │ │Final Exam │ all previous sections plus 7.6 │ SFC Resources for Help Several offices and student organizations on campus offer free tutoring to SFC students, as well as advice on test-taking and study skills. Here is some contact information you may find useful: • Office of Diversity, S-112, 395-5486 • Student Support Services, B-112, 395-5068 • Counseling Center, S-254, 395-5508 -- The Counseling Center offers information and workshops on a variety of topics, including math anxiety and test anxiety. • Math Lab, NW Campus: 395-4163, Downtown/Blount Center: 395-5858, Starke: 395-5850 • 24/7 Free Online Tutoring: www.smarthinking.com is accessible through an online tutoring button on your eSantafe account Course Policies Make Ups No make-ups are given during the semester. However, your score on the Common Final exam may replace one missing or lowest test score. Incompletes A grade of I (incomplete) will only be given at the instructor's discretion if you have successfully completed most of the course work and shown acceptable evidence that justifies your incomplete work. Academic The very nature of higher education requires that students adhere to accepted standards of academic integrity. See the Code of Student Conduct: http://dept.sfcollege.edu/rules/PDF/ Integrity Rule_7/7_23.pdf. You are permitted and encouraged to use your book and notes, and to work with each other or a tutor on your homework. You may use your study guides on the Unit Tests. However, no books, notes, or tutors are permitted for the Common Final. Discrimination SFC prohibits any form of discrimination or sexual harassment among students, faculty and staff. For further information, refer to the SFC Human Resources Policies website at http:// /Harassment dept.sfcollege.edu/rules/PDF/Rule_2/2_8.pdf. ADA I am available to discuss appropriate academic accommodations. Request for academic accommodations should be made during the first week of the semester (except for unusual circumstances). You must be registered with Disabilities Resource Center (DRC) in S-229 for disability verification and determination of reasonable academic accommodations. Desired Student Results Specific Learning Outcomes 1. Review □ Demonstrate an ability to factor algebraic expressions into primes using techniques of removing common factors, and factoring the difference of squares and trinomials □ Use the properties of inequalities and equivalent inequalities to solve linear inequalities in one variable and express the solutions graphically or in interval notation 2. Rational Expressions and Equations □ Evaluate rational expressions, and use prime factorization to reduce simple rational expressions (decreased emphasis) □ Use the properties of equalities and equivalent equalities to solve rational equations; apply to word problems involving ratios and proportions 3. Radicals and Rational Exponents □ Demonstrate the relationship between exponents and radicals □ Use the properties of radicals to simplify simple radicals □ Use the properties of equality to solve equations involving one radical expression 4. Quadratic Equations □ Recognize a quadratic equation; choose and apply the most efficient method to solve it □ Apply skills to word problems involving quadratic equations 5. Linear Equations and Inequalities in Two Variables □ Use tables and graphs as tools to interpret expressions, equations, and inequalities □ Locate the x and y intercepts graphically and algebraically and interpret them in the context of the problem □ Explain and determine the slope of a line as the ratio of change in the dependent variable with respect to change in the independent variable 6. Systems of Linear Equations and Inequalities and their Graphs □ Connect the solution set of a system of two linear equations in two variables with the graphs of the two equations □ Connect the solution set of a system of two linear inequalities in two variables with the graphs of the two inequalities 7. Introduction to Functions □ Recognize functions in table, graph, equation or verbal form □ Understand that for a function one input value results in one output value □ Determine the acceptability of a value to be used for the independent variable in an equation that defines a function □ Determine the domain and range of a relation from a graph □ Use and understand functional notation 8. Linear Functions and Their Applications □ Express linear and quadratic functions in table, graph, equation, or verbal form □ Make connections between the parameters of a function and the behavior of the function □ Recognize that a variety of problem situations can be modeled by the same type of function □ Use patterns and functions to represent and solve problems □ Extract and interpret information presented in a graph Important Dates Mon .............. Aug 24 ............................... Fall Full and Fall A classes begin ................................................................... Drop/Add: 8:00 a.m. to 6:30 p.m. Tue ................ Aug 25 ............................... Drop/Add: 8:00 a.m. to 4:00 p.m. Wed .............. Aug 26 ............................... Drop: 8:00 a.m. to 4:00 p.m. ( last day to ADD) Fri .................. Aug 28 ............................... Last day to DROP with no record and receive a refund for Fall Full Mon .............. Sept 7 .................................. COLLEGE CLOSED – Labor Day Holiday Thu ................ Oct 1 ..................................... Graduation application deadline Fri .................. Oct 16 ................................. COLLEGE CLOSED – UF Homecoming Sun ................ Nov 1 .................................. Daylight Savings Time ends – turn your clocks back one hour Tue ................ Nov 3 .................................. Last day to withdraw and receive a “W” for Fall Full Wed .............. Nov 11 ................................ COLLEGE CLOSED – Veteran’s Day Holiday Wed .............. Nov 25 ................................ NO EVENING CLASSES after 5 p.m. Thu-Sun ........ Nov 26-29 ........................... COLLEGE CLOSED – Thanksgiving Holidays Fri ................. Dec 4 ................................... Fall Full and Fall B classes end Mon-Thu ....... Dec 7-10 ............................. Final Exams for Fall Full and Fall B Tue ................ Dec 8 ................................... COMMON FINALS (MAT1033/MGF1106) – 3:30 to 5:30 p.m Thu ................ Dec 10 ................................ Fees due by 4 p.m. for Spring Full and Spring A classes Fri .................. Dec 11 ................................ GRADUATION ** It is your responsibility to drop or withdraw by the appropriate deadline. **
{"url":"http://home.ite.sfcollege.edu/~elizabeth.drake/syllabusmat1033.htm","timestamp":"2014-04-19T17:18:28Z","content_type":null,"content_length":"25799","record_id":"<urn:uuid:8642d4fa-cd1f-44bc-a229-e0bce449e694>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
Prove that the norm on L^2 space is continuous January 26th 2009, 05:39 PM #1 Nov 2008 Prove that the norm on L^2 space is continuous Prove that the norm, ${\left\lVert \cdot \right\rVert}$, considered as a function ${\left\lVert \cdot \right\rVert}:\mathfrak{L}^2([-L,L],\mathbb{F})\rightarrow \mathbb{R}$ is continuous. i.e. $f_n\rightarrow f\in{\mathfrak{L}^2([-L,L],\mathbb{F})}$, converges wrt the norm then, ${\left\lVert f_n \right\rVert}\rightarrow {\left\lVert f \right\rVert}\in{\mathbb{R}}$ This is true in any normed space, and is a simple consequence of the triangle inequality. Start with the fact that $\|f_n\|\leqslant \|f\| + \|f_n-f\|$, so that $\|f_n\| - \|f\| \leqslant\|f_n-f\ |$. The same reasoning with f and f_n interchanged then gives $\bigl|\|f_n\| - \|f\|\bigr| \leqslant\|f_n-f\|$. January 27th 2009, 12:16 AM #2
{"url":"http://mathhelpforum.com/advanced-algebra/70085-prove-norm-l-2-space-continuous.html","timestamp":"2014-04-20T14:08:29Z","content_type":null,"content_length":"34668","record_id":"<urn:uuid:df8a7989-8ada-423b-9733-d3f2b95afdb5>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Compress the finite state machine Sicco Tans <stans@lucent.com> 10 Mar 1999 00:31:12 -0500 From comp.compilers | List of all articles for this month | From: Sicco Tans <stans@lucent.com> Newsgroups: comp.compilers Date: 10 Mar 1999 00:31:12 -0500 Organization: Lucent Technologies , Merrimack Valley References: 99-03-010 99-03-017 Keywords: DFA > [ to compress a DFA ] > The first step is, obviously, to minimize the number of states in the > DFA. You can find a method for this in Aho, Sethi and Ullman's > compiler book, though the version described there isn't particularly > efficient. Does anyone have a more efficient way to minimize the number of states in the -Sicco Tans Post a followup to this message Return to the comp.compilers page. Search the comp.compilers archives again.
{"url":"http://compilers.iecc.com/comparch/article/99-03-041","timestamp":"2014-04-19T09:29:49Z","content_type":null,"content_length":"5345","record_id":"<urn:uuid:73ca701a-0e7d-4ee5-be57-0807a6eada5e>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
ball hitting a swinging rod December 22nd 2010, 08:45 AM #1 Aug 2010 ball hitting a swinging rod A thin uniform metal bar, 2m long and weighing 90N, is hanging vertically from the ceiling on a frictionless pivot. Suddenly, it is struck 1.5m below the ceiling by a small 3kg ball, intially travelling horizontally at 10m/s. The ball rebounds in the opposite direction with a speed of 6m/s. Find the angular speed of the bar after the collision. I am struggling to find the right energy/conservation equation to use. I know it is to do with the conservation of angular momentum, but i cannot see how to use the fact that the small ball rebounds with a speed 6m/s in my equation. I would recommend two things: 1. Look at the ball as part of the system, contributing to the angular momentum. If you consider the ball immediately before and immediately after the collision, you will be able to view it its motion as approximately rotational, just as the motion of the free end of the bar is initially going to be approximately linear. 2. Your conservation of angular momentum equation is, I think, the way to go. You'll have the following: $I_{\text{ball}}\,\omega_{\text{ball},i}=I_{\text{b all}}\,\omega_{\text{ball},f}+I_{\text{bar}}\,\ome ga_{\text{bar},f}.$ Does that give you a nudge? What's your target variable? December 22nd 2010, 12:07 PM #2
{"url":"http://mathhelpforum.com/advanced-applied-math/166756-ball-hitting-swinging-rod.html","timestamp":"2014-04-19T04:31:58Z","content_type":null,"content_length":"34415","record_id":"<urn:uuid:7360ce2f-47d1-4cfc-a062-7dfa700620db>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
A postscript to my comment about kids having trouble with the distributive property 01 Wednesday Feb 2012 Posted in General, Mathematics In my most recent blog entry I described how even the very able students at Exeter, working in a problem-based environment, have trouble avoiding common misconceptions. I happened to see and cite an example concerning the distributive property. I thought I would check out released test items. Here’s a 10th grade item from the PSSA in Pennsylvania: Oops! Only 40% got it right. And there are many others like this. Here’s one from NAEP: Results? Grim: Now, I am sure there are readers who will sigh and say that these items are not sufficiently interesting/relevant/real-world, etc. and I will agree. But that’s not the point. The point is that even after a year or two of algebra MOST students cannot use the distributive, (and often the associative, and commutative) properties properly. And that’s a problem with the INSTRUCTION, not the kids. Because the misconceptions are predictable; because it takes a lot of iterations to overcome what is counter-intuitive about much of higher-level math, you have to keep probing for this understanding – as the Exeter – and the Harvard Physics – example so clearly showed us. But because conventional textbook coverage is so fractured, unfocused, superficial, and unprioritized, there is no guarantee that most students will come out knowing the essential concepts of algebra. Don’t you math teachers get that there is a problem? The ‘yield’ from your ‘coverage’ is terrible. So, clearly, ‘coverage’ is not the key to optimal performance on tests. Some day we’ll know why so many math (and history and science…) teachers think coverage is optimal preparation for tests. PS: The NAEP Question Database is here. 2 thoughts on “A postscript to my comment about kids having trouble with the distributive property” 1. Your point is well taken, and it’s certainly a national discussion right now. Here is a great video of David Coleman reviewing the “shifts” within the Common Core Standards: http://engageny.org/ resource/common-core-in-mathematics-overview/ My favorite “shift” is coherence. We need to spiral the same concepts throughout kids’ experiences in the context of meaningful problems. Thanks for sharing this. 2. What I see as the problem is that the students never learned how to solve or never had enough practice solving how to add and subtract fractions with different denominators. You do not have to remember the algebraic method to get the right answer to either question. You can just choose a number for X and then look at the answers and see which one fits the answer you’ve already calculated. This will reinforce to the child that the equation makes sense and is not just something that they have to remember. I have been out of school for many years. I am not a mathematician, but I love math. I was taught the old fashioned way; what many would describe as drill and kill. I did not own a calculator until I got to college. I did not always understand the “theory” behind why I was doing this or that in math when I was in elementary school. I didn’t need to. What I was prepared to do is focus on Algebra when I got there; not try and learn the basics at that time. The equations only make sense because you can do the basic math. I say show kids both ways, but make sure they can DO subtraction and addition and multiplication and division of fractions and then they will get ALL of those math problems right even if they forget how to reduce the equation algebraically.
{"url":"http://grantwiggins.wordpress.com/2012/02/01/a-postscript-to-my-comment-about-kids-having-trouble-with-the-distributive-property/","timestamp":"2014-04-16T04:11:33Z","content_type":null,"content_length":"70159","record_id":"<urn:uuid:2bca4d2a-6c8a-45aa-897b-c9c6869ad042>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00108-ip-10-147-4-33.ec2.internal.warc.gz"}
Mandalic geometry and polar coordinates - V (Continued from here.) LEVELS 2 and 4 11-Levels 2 and 4 of the spherical mandala can be divided into two distinct subgroups: A-The first subgroup consists of the remaining six hexagrams of the outer shell (Shell 1). Three of these, each having 4 broken (yin) lines, occur at Level 2; three, each having 2 broken (yin) lines, at Level 4. At both levels the three hexagrams in this subgroup are 120° from one another, forming an equilateral triangle. These two triangles are out of phase with one another by 60°. Therefore when the two triangles are superimposed upon one another they form another 6-pointed star. These hexagrams are all cube vertices in the Cartesian form of the mandala. Together with the two polestar hexagrams they constitute all eight vertices of the Cartesian form of the mandala and the entire outer shell (Shell 1) of the spherical mandala. (Continued here.) © 2013 Martin Hauser Mandalic geometry and polar coordinates - IV (Continued from here.) LEVELS 1 and 5 9-Six hexagrams reside at Level 1 and six at Level 5. These occur in groups of two at each of three points in Level 1 and in groups of two at each of three points in Level 5. The hexagrams in Level 1 all contain a single positive (yang) line and the hexagrams in Level 5 all contain a single negative (yin) line. In both Level 1 and Level 5 the three resident groups are situated 120° from one another. The three groups of Level 5, however, are phased 60° from those of Level 1. All these points and hexagrams are found in Shell 2 of the spherical mandala. In the Cartesian form of the mandala these points and hexagrams are the cube edge centers closest to the maximum yin and maximum yang hexagrams or vertices. There are six other points, each with two resident hexagrams, in Shell 2 but these will all soon be seen to lie at Level 3. 10-Connecting the points of the three groups in Level 1 or Level 5 results in an equilateral triangle. If one were to look down (or up) at these two triangles superimposed upon one another one would see a 6-pointed star. This of course hearkens back to a lot of intellectual history through the ages having historical, religious and cultural contexts. The 6-pointed star, for example, has significance to sacred geometry as it is the compound of two equilateral triangles, the intersection of which is a regular hexagon. Of particular note here, however, is the fact that in mathematics, the root system for the simple Lie group G[2] is in the form of a hexagram [6-pointed star]. (Continued here.) © 2013 Martin Hauser Concerned only with the ethereal world of abstraction, mathematicians are free to introduce this notion of a group and to require that anything that is a group satisfy these criteria. These axioms are neither right nor wrong; they are simply the rules that mathematicians have chosen to require of something that they have decided for some reason or other to call a “group.” The wonder of group theory is that its relevance to the disciplines of both mathematics and natural science far exceeds the self-contained boundaries within which it was first developed. Bruce A. Schumm (2004). Deep Down Things: The Breathtaking Beauty of Particle Physics. Johns Hopkins University Press. pp. 143–144. ISBN 0-8018-7971-X. OCLC 55229065.
{"url":"http://blindmen6.tumblr.com/tagged/group-theory","timestamp":"2014-04-21T09:35:12Z","content_type":null,"content_length":"89284","record_id":"<urn:uuid:3ffa8d6c-e90f-4bb1-a5ed-bf95cd05dea1>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
Confused about this trig substitution! February 18th 2009, 08:49 PM #1 Jan 2009 Confused about this trig substitution! my book tells me to make u = sec(x) du = tanxsecx dxwhich makes sense, but could you also make u = tanx making du = (secx)^2 dx? But the latter method is incorrect why? my book tells me to make u = sec(x) du = tanxsecx dxwhich makes sense, but could you also make u = tanx making du = (secx)^2 dx? Mr F says: Yes. And, in my opinion, this is by far the simplest approach to take. But the latter method is incorrect Mr F says: Says who!? why? Mr F says: It's not incorrect. In fact, it's better! When I graphed both functions they aren't the same, is that normal? oh that make sense :P February 18th 2009, 08:54 PM #2 February 18th 2009, 08:58 PM #3 February 18th 2009, 09:00 PM #4 Jan 2009 February 18th 2009, 09:12 PM #5 February 18th 2009, 09:17 PM #6 Jan 2009
{"url":"http://mathhelpforum.com/calculus/74414-confused-about-trig-substitution.html","timestamp":"2014-04-16T18:04:01Z","content_type":null,"content_length":"44887","record_id":"<urn:uuid:c4486280-4819-4c73-837b-81e08f920d0b>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
Electric potential in relation to electric field problem i am thinking i will use the equation V=Ed, so I would use V=-1000 and d equal to -.9m... This is the right direction. For a potential which only changes along one dimension, you can write E = -dV/dx ; that is, the magnitude of the field is the slope of the potential function and the direction of the field runs from higher to lower potential. (This is basically where V = Ed comes from, for the case of a uniform field.) BTW, in that formula, d is the separation between the two points over which the potential change is measured, not a position. So you have two values of the electric potential at two values of x. What is the slope of this (linear) function? What is the direction of the electric field?
{"url":"http://www.physicsforums.com/showthread.php?t=219987","timestamp":"2014-04-20T11:19:23Z","content_type":null,"content_length":"24230","record_id":"<urn:uuid:76dc1c2f-5613-4f46-92ec-447d60de77af>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
Mapping Diagrams A function is a special type of relation in which each element of the domain is paired with exactly one element in the range. A mapping shows how the elements are paired. Its like a flow chart for a function, showing the input and output values. A mapping diagram consists of two parallel columns. The first column represents the domain of a function f , and the other column for its range. Lines or arrows are drawn from domain to range, to represent the relation between any two elements. A function represented by the mapping above in which each element of the range is paired with exactly one element of the domain is called one-to-one mapping. In the mapping, the second element of the range associates with more than one element in the domain. If the element(s) in range that have mapped more than one element in the domain is called many-to-one mapping. In this mapping, the first element in the domain have mapped with more than one element in the range. If one element in the domain mapped with more then one element in the range, the mapping is called one-to-many relation. One-to-many relations is not a function. Draw a mapping diagram for the function f(x) = 2x^2 + 3. First choose some elements from the domain. Then find the corresponding y -values (range) for the chosen x -values. The domain of the function is all real numbers. Let x = –1, 0, 1, 2, and 3. Substitute these values into the function f(x) to find its range. The corresponding y -values (range) are 5, 3, 5, 11, and 21. Now draw the mapping diagram.
{"url":"http://hotmath.com/hotmath_help/topics/mapping-diagrams.html","timestamp":"2014-04-16T10:10:03Z","content_type":null,"content_length":"5245","record_id":"<urn:uuid:21f3e243-1be7-453b-8b3c-37081497560b>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Testing Low-Degree Polynomials over GF(2) Noga Alon Tali Kaufman Michael Krivelevich Simon Litsyn § Dana Ron¶ July 9, 2003 We describe an efficient randomized algorithm to test if a given binary function f : {0, 1}n {0, 1} is a low-degree polynomial (that is, a sum of low-degree monomials). For a given integer k 1 and a given real > 0, the algorithm queries f at O(1 + k4k ) points. If f is a polynomial of degree at most k, the algorithm always accepts, and if the value of f has to be modified on at least an fraction of all inputs in order to transform it to such a polynomial, then the algorithm rejects with probability at least 2/3. Our result is essentially tight: Any algorithm for testing degree-k polynomials over GF(2) must perform (1 + 2k ) queries.
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/556/1705793.html","timestamp":"2014-04-20T19:02:18Z","content_type":null,"content_length":"7855","record_id":"<urn:uuid:ffe1115c-9c07-42a1-8161-03070246b502>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
Is every smooth affine curve isomorphic to a smooth affine plane curve? up vote 11 down vote favorite As suggested by Poonen in a comment to an answer of his question about the birationality of any curve with a smooth affine plane curve we ask the following questions: Q) Is it true that every smooth affine curve is isomorphic to a smooth affine plane curve? (a) In particular, given a smooth affine plane curve $X$ with an arbitrary Zariski open set $U$ in it, can one give a closed embedding of $U$ in the plane again? (b) An extremely interesting special case of (Q) above: Suppose $X$ is a singular plane algebraic curve with $X_{sm}$ the smooth locus. Can one give a closed embedding of $X_{sm}$ in the plane? All varieties in question are over $\mathbb{C}$. Bloch, Murthy and Szpiro have already proven in their paper "Zero cycles and the number of generators of an ideal" , a much more general result (see Theorem 5.7, op.cit), that every reduced and irreducible prjective variety has an affine open set which is a hypersurface. This settles the above question birationally, in particular. The authors give a very short and beautiful alternate proof of their result by M.V. Nori which I include here for its brevity and for anyone who may not have access to the paper: Proof: Suppose $X$ is an integral projective variety of dimension $d$. By a generic projection, easily reduce to the case of a (possibly singular) integral hypersurface $X$ of $\mathbb{A}^{d+1}$. Suppose the coordinate ring of $X$ is $A=\mathbb{C}[x_1,\dots,x_{d+1}]$ and its defining equation is $F=\Sigma_0^m{f_i}x_{d+1}^{i}=0$ with $f_0\neq{0}$. For some element $h$ in $J\cap\mathbb{C}[x_1,\ dots,x_d]$, where $J$ defines the singular locus of $X$, put $x_{d+1}'=x_{d+1}/(hf_0^2)$ in $F=0$ to observe that $1/(hf_0)\in\mathbb{C}[x_1,\dots,x_{d+1}']$ and $A_{hf_0}=\mathbb{C}[x_1,\dots,x_ {d+1}']$. Clearly $\rm{Spec}\ {A_{hf_0}}$ admits a closed immersion in $\mathbb{A}^{d+1}$. However, the above authors also prove in their Theorem 5.8 that there exist affine varieties of any dimension, which are not hypersurfaces. This answers our question in negative. This was also known to Sathaye for curves, see On planar curves. He gives a nice example of a double cover of a punctured elliptic curve, ramified at 9 points and also at the point at infinity. This curve cannot be embedded in $\mathbb{A}^2$. Sathaye uses the value semigroup at the only point at infinity to prove this. His example has trivial canonical divisor. So it answers Poonen's question in the comments below, negatively. In short, $K=0$ for an affine curve is necessary but not sufficient for the curve to be planar, however one should note that $K=0$ is necessary and sufficient for an affine curve to be a complete 1 Re your comment below: send me an email to hdao@math.ku.edu, I will send you the paper. – Hailong Dao Dec 30 '09 at 7:24 Your link to a paper by Bloch, Murthy, Szpiro leads to another paper by Murthy alone. – Matthieu Romagny Aug 25 '11 at 20:52 add comment 1 Answer active oldest votes You can try this: up vote 8 down http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.52.6348 vote accepted 8 The article cited gives a negative answer. The argument is as follows: A smooth affine plane curve has trivial canonical bundle. But if you start with a smooth projective curve X of genus greater than 1 and remove a finite number of sufficiently general points, then the subgroup of Pic(X) generated by the classes of the removed points does not contain the class of the canonical bundle of X, so the resulting affine curve has nonzero canonical class. – Bjorn Poonen Dec 25 '09 at 21:25 4 Is the canonical class the only obstruction to embedding a smooth affine curve as a closed subscheme of A^2? – Bjorn Poonen Dec 26 '09 at 21:18 I would expect so but not sure. This is definitely worthwhile thinking about. My only worry is a paper of the above cited author which says that given any set of points on a projective curve, he can find nearby points such that their complement has no closed embedding in the plane. Unfortunately I am unable to download any of these papers, so details are not available. Relying just on Mathscinet review so far! – Maharana Dec 30 '09 at 6:48 add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/9751/is-every-smooth-affine-curve-isomorphic-to-a-smooth-affine-plane-curve","timestamp":"2014-04-18T14:07:20Z","content_type":null,"content_length":"59752","record_id":"<urn:uuid:9964b8ef-b460-4bb6-8cd6-ed23652d483b>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
A trait for data that have a single, natural ordering. See scala.math.Ordering before using this trait for more information about whether to use scala.math.Ordering instead. Classes that implement this trait can be sorted with scala.util.Sorting and can be compared with standard comparison operators (e.g. > and <). Ordered should be used for data with a single, natural ordering (like integers) while Ordering allows for multiple ordering implementations. An Ordering instance will be implicitly created if scala.math.Ordering is an alternative to this trait that allows multiple orderings to be defined for the same type. scala.math.PartiallyOrdered is an alternative to this trait for partially ordered data. For example, create a simple class that implements Ordered and then sort it with scala.util.Sorting: case class OrderedClass(n:Int) extends Ordered[OrderedClass] { def compare(that: OrderedClass) = this.n - that.n val x = Array(OrderedClass(1), OrderedClass(5), OrderedClass(3)) It is important that the equals method for an instance of Ordered[A] be consistent with the compare method. However, due to limitations inherent in the type erasure semantics, there is no reasonable way to provide a default implementation of equality for instances of Ordered[A]. Therefore, if you need to be able to use equality on an instance of Ordered[A] you must provide it yourself either when inheriting or instantiating. It is important that the hashCode method for an instance of Ordered[A] be consistent with the compare method. However, it is not possible to provide a sensible default implementation. Therefore, if you need to be able compute the hash of an instance of Ordered[A] you must provide it yourself either when inheriting or instantiating. 1.1, 2006-07-24 See also scala.math.Ordering, scala.math.PartiallyOrdered Linear Supertypes Known Subclasses Type Hierarchy Learn more about scaladoc diagrams Result of comparing this with operand that. Implement this method to determine how instances of A will be sorted. Returns x where: • x < 0 when this < that • x == 0 when this == that • x > 0 when this > that Returns the runtime class representation of the object. a class object corresponding to the runtime type of the receiver. Definition Classes Test two objects for inequality. true if !(this == that), false otherwise. Definition Classes Equivalent to x.hashCode except for boxed numeric types and null. For numerics, it returns a hash value which is consistent with value equality: if two value type instances compare as true, then ## will produce the same hash value for each of them. For null returns a hashcode where null.hashCode throws a NullPointerException. a hash value consistent with == Definition Classes Implicit information This member is added by an implicit conversion from Ordered[A] to StringAdd performed by method any2stringadd in scala.Predef. Definition Classes Implicit information This member is added by an implicit conversion from Ordered[A] to ArrowAssoc[Ordered[A]] performed by method any2ArrowAssoc in scala.Predef. Definition Classes Test two objects for equality. The expression x == that is equivalent to if (x eq null) that eq null else x.equals(that). true if the receiver object is equivalent to the argument; false otherwise. Definition Classes Cast the receiver object to be of type T0. Note that the success of a cast at runtime is modulo Scala's erasure semantics. Therefore the expression 1.asInstanceOf[String] will throw a ClassCastException at runtime, while the expression List (1).asInstanceOf[List[String]] will not. In the latter example, because the type argument is erased as part of compilation it is not possible to check whether the contents of the list are of the requested type. the receiver object. Definition Classes Exceptions thrown if the receiver object is not an instance of the erasure of type T0. Result of comparing this with operand that. Definition Classes Ordered → Comparable Implicit information This member is added by an implicit conversion from Ordered[A] to Ensuring[Ordered[A]] performed by method any2Ensuring in scala.Predef. Definition Classes Implicit information This member is added by an implicit conversion from Ordered[A] to Ensuring[Ordered[A]] performed by method any2Ensuring in scala.Predef. Definition Classes Implicit information This member is added by an implicit conversion from Ordered[A] to Ensuring[Ordered[A]] performed by method any2Ensuring in scala.Predef. Definition Classes Implicit information This member is added by an implicit conversion from Ordered[A] to Ensuring[Ordered[A]] performed by method any2Ensuring in scala.Predef. Definition Classes Compares the receiver object (this) with the argument object (that) for equivalence. Any implementation of this method should be an equivalence relation: • It is reflexive: for any instance x of type Any, x.equals(x) should return true. • It is symmetric: for any instances x and y of type Any, x.equals(y) should return true if and only if y.equals(x) returns true. • It is transitive: for any instances x, y, and z of type AnyRef if x.equals(y) returns true and y.equals(z) returns true, then x.equals(z) should return true. If you override this method, you should verify that your implementation remains an equivalence relation. Additionally, when overriding this method it is usually necessary to override hashCode to ensure that objects which are "equal" (o1.equals(o2) returns true) hash to the same scala.Int. (o1.hashCode.equals(o2.hashCode)). true if the receiver object is equivalent to the argument; false otherwise. Definition Classes Returns string formatted according to given format string. Format strings are as for String.format (@see java.lang.String.format). Implicit information This member is added by an implicit conversion from Ordered[A] to StringFormat performed by method any2stringfmt in scala.Predef. Definition Classes Calculate a hash code value for the object. The default hashing algorithm is platform dependent. Note that it is allowed for two objects to have identical hash codes (o1.hashCode.equals(o2.hashCode)) yet not be equal (o1.equals(o2) returns false). A degenerate implementation could always return 0. However, it is required that if two objects are equal (o1.equals(o2) returns true) that they have identical hash codes (o1.hashCode.equals(o2.hashCode)). Therefore, when overriding this method, be sure to verify that the behavior is consistent with the equals method. the hash code value for this object. Definition Classes Test whether the dynamic type of the receiver object is T0. Note that the result of the test is modulo Scala's erasure semantics. Therefore the expression 1.isInstanceOf[String] will return false, while the expression List(1).isInstanceOf[List[String]] will return true. In the latter example, because the type argument is erased as part of compilation it is not possible to check whether the contents of the list are of the specified type. true if the receiver object is an instance of erasure of type T0; false otherwise. Definition Classes Returns a string representation of the object. The default representation is platform dependent. a string representation of the object. Definition Classes Implicit information This member is added by an implicit conversion from Ordered[A] to ArrowAssoc[Ordered[A]] performed by method any2ArrowAssoc in scala.Predef. Definition Classes Returns true if this is less than that Implicit information This member is added by an implicit conversion from Ordered[A] to Ordered[Ordered[A]] performed by method orderingToOrdered in scala.math.Ordered. This conversion will take place only if an implicit value of type Ordering[Ordered[A]] is in scope. This implicitly inherited member is shadowed by one or more members in this class. To access this member you can use a type ascription: (ordered: Ordered[Ordered[A]]).<(that) Returns true if this is less than or equal to that. Implicit information This member is added by an implicit conversion from Ordered[A] to Ordered[Ordered[A]] performed by method orderingToOrdered in scala.math.Ordered. This conversion will take place only if an implicit value of type Ordering[Ordered[A]] is in scope. This implicitly inherited member is shadowed by one or more members in this class. To access this member you can use a type ascription: (ordered: Ordered[Ordered[A]]).<=(that) Returns true if this is greater than that. Implicit information This member is added by an implicit conversion from Ordered[A] to Ordered[Ordered[A]] performed by method orderingToOrdered in scala.math.Ordered. This conversion will take place only if an implicit value of type Ordering[Ordered[A]] is in scope. This implicitly inherited member is shadowed by one or more members in this class. To access this member you can use a type ascription: (ordered: Ordered[Ordered[A]]).>(that) Returns true if this is greater than or equal to that. Implicit information This member is added by an implicit conversion from Ordered[A] to Ordered[Ordered[A]] performed by method orderingToOrdered in scala.math.Ordered. This conversion will take place only if an implicit value of type Ordering[Ordered[A]] is in scope. This implicitly inherited member is shadowed by one or more members in this class. To access this member you can use a type ascription: (ordered: Ordered[Ordered[A]]).>=(that) Result of comparing this with operand that. Implement this method to determine how instances of A will be sorted. Returns x where: • x < 0 when this < that • x == 0 when this == that • x > 0 when this > that Implicit information This member is added by an implicit conversion from Ordered[A] to Ordered[Ordered[A]] performed by method orderingToOrdered in scala.math.Ordered. This conversion will take place only if an implicit value of type Ordering[Ordered[A]] is in scope. This implicitly inherited member is shadowed by one or more members in this class. To access this member you can use a type ascription: (ordered: Ordered[Ordered[A]]).compare(that) Result of comparing this with operand that. Implicit information This member is added by an implicit conversion from Ordered[A] to Ordered[Ordered[A]] performed by method orderingToOrdered in scala.math.Ordered. This conversion will take place only if an implicit value of type Ordering[Ordered[A]] is in scope. This implicitly inherited member is shadowed by one or more members in this class. To access this member you can use a type ascription: (ordered: Ordered[Ordered[A]]).compareTo(that) Definition Classes Ordered → Comparable Implicit information This member is added by an implicit conversion from Ordered[A] to StringAdd performed by method any2stringadd in scala.Predef. This implicitly inherited member is ambiguous. One or more implicitly inherited members have similar signatures, so calling this member may produce an ambiguous implicit conversion compiler To access this member you can use a type ascription: (ordered: StringAdd).self Definition Classes Implicit information This member is added by an implicit conversion from Ordered[A] to StringFormat performed by method any2stringfmt in scala.Predef. This implicitly inherited member is ambiguous. One or more implicitly inherited members have similar signatures, so calling this member may produce an ambiguous implicit conversion compiler To access this member you can use a type ascription: (ordered: StringFormat).self Definition Classes Implicit information This member is added by an implicit conversion from Ordered[A] to ArrowAssoc[Ordered[A]] performed by method any2ArrowAssoc in scala.Predef. This implicitly inherited member is ambiguous. One or more implicitly inherited members have similar signatures, so calling this member may produce an ambiguous implicit conversion compiler To access this member you can use a type ascription: (ordered: ArrowAssoc[Ordered[A]]).x Definition Classes (Since version 2.10.0) Use leftOfArrow instead Implicit information This member is added by an implicit conversion from Ordered[A] to Ensuring[Ordered[A]] performed by method any2Ensuring in scala.Predef. This implicitly inherited member is ambiguous. One or more implicitly inherited members have similar signatures, so calling this member may produce an ambiguous implicit conversion compiler To access this member you can use a type ascription: (ordered: Ensuring[Ordered[A]]).x Definition Classes (Since version 2.10.0) Use resultOfEnsuring instead
{"url":"http://www.scala-lang.org/api/current/scala/math/Ordered.html","timestamp":"2014-04-18T13:12:00Z","content_type":null,"content_length":"79793","record_id":"<urn:uuid:68c2779e-8be6-4060-9eac-b66441940e9c>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
Dihydromonoxide: what's in a name? - Gridshore Dihydromonoxide: what’s in a name? Does the name dihydromonoxide sound in any way familiar to you? It should (really, it should). Try googling it (here, I’ll make it easy for you). If and when you do google it, you will be informed by all sorts of sites of the various properties and dangers of this chemical compound. This one, for instance, covers all the dangers of dihydromonoxide (or DHMO) in great and completely accurate Of course dihydromonoxide and the many warnings against it are all based on a clever use of words. Seen in that light dihydromonoxide is a wonderful example of something I was taught by a professor of mine at university (and reminded of a couple of times today): Shakespeare may have prattled on about roses by any other name, but in reality notation is absolutely everything in our business. Here’s (another) example of the importance of notation: derivation. Derivation as in calculus, which you were taught in highschool. I’m sure everybody remembers doing endless sums about calculating the derivative function f ‘(x) of a given function f(x), quickly followed by the second derivative f ”(x) and possibly even the third derivative f^(3)(x) and fourth derivative f^(4)(x). And I’ll bet most of you never even knew (or cared) about the fact that there are a number of different notations in use for a derivative. But there are; and of these, the three most well-known are the Newton notation (which indicated derivation against a function by placing one or more dots above the functor), the Lagrange notation (the notation with the primes used most often for first and second derivative) and the most commonly used of all, the Leibniz notation (which a number in parentheses, an analog to the power-of or exponent notation). Of these, the Leibniz notation is most often used in mathematics. And the reason is simple: once you get beyond second derivatives, the other notations get to be very cumbersome. Notation is an important factor in our work as computing scientists and software engineers. It often makes the difference between making a piece of software easy to use or cumbersome and unusable. Sometimes it also makes the difference between making a problem easy to solve and difficult. Back in university, for instance, there were a number of courses where we derived solutions to problems from a formal specification. This process involved using predicate calculus, which is very powerful but often cumbersome. However, introducing a useful notation and proving some simple properties of this notation could often greatly reduce the burden of proof in these exercises. Another example of notation making life easier is a pattern that I’ve employed quite often over the past few years when applying the strategy pattern: in order to select the correct strategy for a specific case, instead of using a gigantic pile of if-statements, I prefer to put instances of all my different strategies in a hashtable using a representative for each case (like a Class instance) as the key. The result looks a little like this: private Map strategyMap = ...; // Instantiate this somehow public void applyStrategyToASpecificCase(final Object theCase) One of the reasons I like this, by the way, is that it also mixes well with dependency injection frameworks like Spring. Like I already said, I was reminded by my professor’s wise lesson a number of times today, both in the positive and in the negative sense. One such reminder occurred when I was working on some UML diagrams using Enterprise Architect (which is a horror I hope you will all be spared). This series of diagrams includes a certain construct which occurs a large number of times in many different places. I quickly found myself wishing there was a simple way to create a shorthand notation for this construct. I was however pleasantly surprised to find that you can embed diagrams in other diagrams (both as embedded images and as symbolic links) and can drill down to them. This allows you to define subactivities for your activity diagrams, for instance. Now if only you could actually include the subactivity within the overall activity as an activity, rather than having to include an activity and put the diagram link next to that (but not connected to the diagram)…. A second example I ran into is related to the definition of a web service interface that is currently being defined. This interface must provide for the inclusion of one or more pieces of configuration data that are retrieved from other XML documents. A very generic solution was chosen by defining a repeatable element called Item, which contains an XMLElement element and a FieldValue element. This solution certainly works, but I think that names like ConfigurationItem, ConfigurationItemName and ConfigurationItemValue are a bit more expressive and therefore make it easier to understand how to use the service. Notation is one of those subtle little things that I think are too easily overlooked in software engineering, especially given the huge role it can play in making or breaking your system. Notation means the difference between an easily usable API and an airline terminal (ask a travel agent about it, if you’ve never seen one used). It means the difference between self-documenting, maintainable code and unmaintainable code. It can mean the difference between an easily comprehensible and codable solution and a solution that hurts your head just to come up with. It can aid you in achieving separation of concerns (if only by ordering your thoughts). It can even be the key to better-performing code for some problems. The importance of notation in API design is one of the reasons I am quite enamoured of the notion of a Fluent API, by the way. This interface style promises to make APIs a lot easier to use and a lot less error prone albeit at the cost of requiring more thought on the part of the development team. One of my current projects is experimenting with Fluent APIs right now; I’ll dedicate a future blog to the outcome. Whether the fluent style will work out or not, I can confirm right now that a clear and expressive interface is always a good idea: two of the projects I am involved with are moving from very generic, “you can put any parameter and value you want in this key-value pair list”-style of interfaces to interfaces that are very explicit about what is needed, what is allowed and what the effect and outcome will be; both projects are reporting far less aggravation in the implementation of their new services than they had previously. Note that the type of data that can be passed through the interface has not changed; just the way the interface forces the client to present this data. Like my professor said, notation is everything. By the way, in case you were worried: dihydromonoxide (or H[2]O) is more commonly referred to as water….
{"url":"http://www.gridshore.nl/2008/08/06/dihydromonoxide-whats-in-a-name/","timestamp":"2014-04-16T20:22:32Z","content_type":null,"content_length":"90063","record_id":"<urn:uuid:0ba7d127-dd49-4d17-8d9b-50482ddf57b3>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
Systems - Method of Elimination May 8th 2012, 01:42 AM #1 May 2012 Systems - Method of Elimination Hi there, I can't figure out how to solve these under cases he hasn't shown in class and he asked one like this on a quiz. My professor doesn't want us to solve them by multiplying and substituting as shown in the first part of this post which he called ad-hoc, he would like us to use the method involving the Ds which he calls systematic elimination as shown in the second part of the post. I can do it easily when the right side is both zeros but I'm having trouble when there is something like e^(2t) on the right. For example: x''+x-y''=2e^(-t) > ((D^2)+1)x-(D^2)y=2e^(-t) x''-x+y''=0 > ((D^2)-1)x-(D^2)x=0 The operational determinate is ((D^2)+1)*(D^2)-((D^2)-1)*(D^2)=2D^4 Thus x=(2e^(-t)*(D^2))-(0*(-D^2))*2D^-4 > x=(2e^(-t)*(D^2))/2D^4 > (2D^4 )x=(2e^(-t)*(D^2)) and y=(((D^2)+1)*0)-(2e^(-t)*((D^2)-1))*2D^-4 > y=(2e^(-t)*((D^2)-1)/2D^4 > (2D^4)y=(2e^(-t)*((D^2)-1) Normally I've got something like (D^2)(3D+2)x=0 and can solve it with characteristic equations but I'm not sure what to do here. Thanks in advance for any help! Re: Systems - Method of Elimination No huh? Moot point now. May 9th 2012, 05:24 AM #2 May 2012
{"url":"http://mathhelpforum.com/differential-equations/198528-systems-method-elimination.html","timestamp":"2014-04-18T16:08:30Z","content_type":null,"content_length":"32438","record_id":"<urn:uuid:2edaaf21-4789-4943-91e4-ca05429468a6>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
Flatland and its sequel bring the math of higher dimensions to the silver screen In 1884, Edwin Abbott wrote a strange and enchanting novella called Flatland, in which a square who lives in a two-dimensional world comes to comprehend the existence of a third dimension but is unable to persuade his compatriots of his discovery. Through the book, Abbott skewered hierarchical Victorian values while simultaneously giving a glimpse of the mathematics of higher dimensions. In 2007, Flatland was made into an animated movie with the voices of Martin Sheen, Kristen Bell and Michael York. Now there’s a sequel called Flatland 2: Sphereland, which expands the story into non-Euclidean geometry. It’s a gem: an engaging, beautiful and mathematically rich film that children and adults alike can enjoy. The first film, Flatland: The Movie, takes immediate advantage of its medium by opening with a dazzling flyover of Flatland. The inhabited parts of Flatland look like ornate Islamic tilings, and the uninhabited parts are filled with exotic fractal patterns that could come from the surface of Mercury. We then zoom into the home of Arthur Square, who is late to take his granddaughter Hex to school at St. Euclid. On the way, Arthur drills Hex—a darling little hexagon, complete with a bow and a big expressive eye — on Flatland’s uncompromising hierarchy. Each subsequent generation, Hex dutifully reports, acquires an additional side, so that Arthur Square’s children are pentagons and Hex is a hexagon. In Flatland, having more sides supposedly means that you’re smarter. Lowly triangles are good only for manual labor; squares are part of the professional class; and creatures with so many sides that they look circular are priests, who, frowns Hex, “just make rules that everyone else has to obey.” The circles pronounce a frightening decree: Anyone espousing the nonsensical and heretical notion that the third dimension exists will be executed. What, Hex asks, is a dimension? So Arthur explains: A point is zero-dimensional; a point moving straight traces out a one-dimensional line; and a line moving perpendicular to itself traces out a two-dimensional square. Hex immediately makes the forbidden leap: A square that somehow moves perpendicularly to itself, she reasons, would trace out a “super square” in three dimensions. She even calculates how big such an object would be. But her mathematical insights only earn her a scolding from Arthur. By the end, Hex wins happy vindication. Both she and Arthur get a mind-blowing tour of the full three-dimensional universe from a sphere, Spherius — and they even manage to proclaim their discovery and save their own skins (er, perimeters). But when they ask Spherius about a fourth or fifth dimension, following their mathematical logic, he’s as skeptical as their compatriots had been about the third. The sequel joins Hex 20 years later, with her bow long lost and a disillusioned cast to her eye. Although Hex and Arthur’s discoveries have knocked the circle priests from power and brought equal rights to all shapes, Flatlanders still deny the reality of the third dimension. The sphere never returned, and Arthur has died heartbroken and disgraced. Hex is now living in isolation, pursuing her Then a fellow hexagon, Puncto, seeks her out for help with a mathematical problem he can’t get anyone else to take seriously. He’s an engineer for the Flatland space program, and his data haven’t made any sense. By his calculations, the angles on some very big triangular paths that Flatland’s rockets will follow to other planets add up to more than 180 degrees. Everyone has been telling him that he must just be making a mistake, but he’s convinced there’s a deeper issue. Space itself must be warped, he says. And if space is warped, the rocket they’re about to send out could hit an asteroid in the Sierpinski belt! Hex and Puncto end up on an otherworldly adventure through multiple dimensions and worlds. Hex stumbles on a key mathematical insight — the key to Puncto’s dilemma — when they visit one-dimensional Lineland. The world appears to be a straight line, but when they travel high above it they discover that it’s a circle. Hex realizes that similarly, Flatland itself might not be flat, even though it seems so — it could be curved into the third dimension. Perhaps Flatland is on the surface of a sphere: Sphereland! If so, Hex realizes, the edges of a triangle in Flatland would actually curve outward in three dimensions, making the angles a bit more than 180 degrees, just as Puncto had found. But if Hex is right, the rocket’s path is off, and unless she and Puncto convince the Flatlanders of their discovery, it could crash. Thus begins a madcap race back to Flatland, complete with other mathematical revelations along the way. Calling the films educational somehow seems an insult. They manage to accomplish that so-rare feat of giving viewers a taste of the delight of mathematical discovery while carrying them along through a quirky, multi-dimensional story. Note: To comment, Science News subscribing members must now establish a separate login relationship with Disqus. Click the Disqus icon below, enter your e-mail and click “forgot password” to reset your password. You may also log into Disqus using Facebook, Twitter or Google.
{"url":"https://www.sciencenews.org/article/flatland-and-its-sequel-bring-math-higher-dimensions-silver-screen","timestamp":"2014-04-18T00:14:32Z","content_type":null,"content_length":"77413","record_id":"<urn:uuid:c5fb897b-286d-49f6-b0a9-a277b7eb12e2>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
Hana Štěpánková On Nonnegative Solutions of Initial Value Problems for Second Order Linear Functional Differential Equations Optimal sufficient conditions are established for the existence and uniqueness of a nonnegative solution of the initial value problem $$u''(t)=\ell(u)(t)+q(t),\quad u(a)=c_1,\quad u'(a)=c_2,$$ where $\ell$ is nonpositive $a$-Volterra operator. Second order linear functional differential equation, initial value problem, nonnegative solution. MSC 2000: 34K06, 34K05
{"url":"http://www.emis.de/journals/GMJ/vol12/12-3-14.htm","timestamp":"2014-04-19T00:04:18Z","content_type":null,"content_length":"1044","record_id":"<urn:uuid:36b78ea6-179a-454a-b493-81bf1cc03246>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
6.10.5 The As If Rule There were several reasons for the K&R C rearrangement rules: • The rearrangements provide many more opportunities for optimizations, such as compile-time constant folding. • The rearrangements do not change the result of integral-typed expressions on most machines. • Some of the operations are both mathematically and computationally commutative and associative on all machines. The ISO C Committee eventually became convinced that the rearrangement rules were intended to be an instance of the as if rule when applied to the described target architectures. ISO C’s as if rule is a general license that permits an implementation to deviate arbitrarily from the abstract machine description as long as the deviations do not change the behavior of a valid C program. Thus, all the binary bitwise operators (other than shifting) are allowed to be rearranged on any machine because there is no way to notice such regroupings. On typical two’s-complement machines in which overflow wraps around, integer expressions involving multiplication or addition can be rearranged for the same reason. Therefore, this change in C does not have a significant impact on most C programmers.
{"url":"http://docs.oracle.com/cd/E19205-01/819-5265/6n7c29d9f/index.html","timestamp":"2014-04-16T11:58:54Z","content_type":null,"content_length":"4245","record_id":"<urn:uuid:8284abc1-3eb5-43ee-a1da-2a6895fd23ab>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
Strength Of A Beam June 11th 2009, 12:46 AM #1 Strength Of A Beam Hi All, Not sure if this is in the right section, but would love your help with this. Engineers have determined that the strength s of a rectangular beam varies as the product of the width w and the square of the depth d of the beam, that is, s = kwd² for some constant k. Find the dimensions of the strongest rectangular beam that can be cut from a cylindrical log with diameter 48cm. Hi All, Not sure if this is in the right section, but would love your help with this. Engineers have determined that the strength s of a rectangular beam varies as the product of the width w and the square of the depth d of the beam, that is, s = kwd² for some constant k. Find the dimensions of the strongest rectangular beam that can be cut from a cylindrical log with diameter 48cm. find d with respect to w or find w with respect to d then sub it in the equation after that find the derivative and see the extreme max point I think the relation between them can given by $24^2=\frac{d^2}{2^2} + \frac{w^2}{2^2}$ 24 is the radius of the base of the cylinder $d^2=4( 24^2 - \frac{w^2}{4} )$ $d=2\left(\sqrt{24^2-\frac{w^2}{4}}\right)$ sub this in the equation of s $s=4kw\left( 24^2 - \frac{w^2}{4} \right)$ find the derivative of s to find the extreme value that will be w then sub it in this equation to find d $24^2=\frac{d^2}{2^2} + \frac{w^2}{2^2}$ I wish that help June 11th 2009, 04:20 AM #2
{"url":"http://mathhelpforum.com/calculus/92532-strength-beam.html","timestamp":"2014-04-19T08:37:19Z","content_type":null,"content_length":"35036","record_id":"<urn:uuid:d367a8c0-e83a-4f85-aa1f-2c8b95f59fe6>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
Flatland and its sequel bring the math of higher dimensions to the silver screen In 1884, Edwin Abbott wrote a strange and enchanting novella called Flatland, in which a square who lives in a two-dimensional world comes to comprehend the existence of a third dimension but is unable to persuade his compatriots of his discovery. Through the book, Abbott skewered hierarchical Victorian values while simultaneously giving a glimpse of the mathematics of higher dimensions. In 2007, Flatland was made into an animated movie with the voices of Martin Sheen, Kristen Bell and Michael York. Now there’s a sequel called Flatland 2: Sphereland, which expands the story into non-Euclidean geometry. It’s a gem: an engaging, beautiful and mathematically rich film that children and adults alike can enjoy. The first film, Flatland: The Movie, takes immediate advantage of its medium by opening with a dazzling flyover of Flatland. The inhabited parts of Flatland look like ornate Islamic tilings, and the uninhabited parts are filled with exotic fractal patterns that could come from the surface of Mercury. We then zoom into the home of Arthur Square, who is late to take his granddaughter Hex to school at St. Euclid. On the way, Arthur drills Hex—a darling little hexagon, complete with a bow and a big expressive eye — on Flatland’s uncompromising hierarchy. Each subsequent generation, Hex dutifully reports, acquires an additional side, so that Arthur Square’s children are pentagons and Hex is a hexagon. In Flatland, having more sides supposedly means that you’re smarter. Lowly triangles are good only for manual labor; squares are part of the professional class; and creatures with so many sides that they look circular are priests, who, frowns Hex, “just make rules that everyone else has to obey.” The circles pronounce a frightening decree: Anyone espousing the nonsensical and heretical notion that the third dimension exists will be executed. What, Hex asks, is a dimension? So Arthur explains: A point is zero-dimensional; a point moving straight traces out a one-dimensional line; and a line moving perpendicular to itself traces out a two-dimensional square. Hex immediately makes the forbidden leap: A square that somehow moves perpendicularly to itself, she reasons, would trace out a “super square” in three dimensions. She even calculates how big such an object would be. But her mathematical insights only earn her a scolding from Arthur. By the end, Hex wins happy vindication. Both she and Arthur get a mind-blowing tour of the full three-dimensional universe from a sphere, Spherius — and they even manage to proclaim their discovery and save their own skins (er, perimeters). But when they ask Spherius about a fourth or fifth dimension, following their mathematical logic, he’s as skeptical as their compatriots had been about the third. The sequel joins Hex 20 years later, with her bow long lost and a disillusioned cast to her eye. Although Hex and Arthur’s discoveries have knocked the circle priests from power and brought equal rights to all shapes, Flatlanders still deny the reality of the third dimension. The sphere never returned, and Arthur has died heartbroken and disgraced. Hex is now living in isolation, pursuing her Then a fellow hexagon, Puncto, seeks her out for help with a mathematical problem he can’t get anyone else to take seriously. He’s an engineer for the Flatland space program, and his data haven’t made any sense. By his calculations, the angles on some very big triangular paths that Flatland’s rockets will follow to other planets add up to more than 180 degrees. Everyone has been telling him that he must just be making a mistake, but he’s convinced there’s a deeper issue. Space itself must be warped, he says. And if space is warped, the rocket they’re about to send out could hit an asteroid in the Sierpinski belt! Hex and Puncto end up on an otherworldly adventure through multiple dimensions and worlds. Hex stumbles on a key mathematical insight — the key to Puncto’s dilemma — when they visit one-dimensional Lineland. The world appears to be a straight line, but when they travel high above it they discover that it’s a circle. Hex realizes that similarly, Flatland itself might not be flat, even though it seems so — it could be curved into the third dimension. Perhaps Flatland is on the surface of a sphere: Sphereland! If so, Hex realizes, the edges of a triangle in Flatland would actually curve outward in three dimensions, making the angles a bit more than 180 degrees, just as Puncto had found. But if Hex is right, the rocket’s path is off, and unless she and Puncto convince the Flatlanders of their discovery, it could crash. Thus begins a madcap race back to Flatland, complete with other mathematical revelations along the way. Calling the films educational somehow seems an insult. They manage to accomplish that so-rare feat of giving viewers a taste of the delight of mathematical discovery while carrying them along through a quirky, multi-dimensional story. Note: To comment, Science News subscribing members must now establish a separate login relationship with Disqus. Click the Disqus icon below, enter your e-mail and click “forgot password” to reset your password. You may also log into Disqus using Facebook, Twitter or Google.
{"url":"https://www.sciencenews.org/article/flatland-and-its-sequel-bring-math-higher-dimensions-silver-screen","timestamp":"2014-04-18T00:14:32Z","content_type":null,"content_length":"77413","record_id":"<urn:uuid:c5fb897b-286d-49f6-b0a9-a277b7eb12e2>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
What do you think? Super Member Re: What do you think? Hidden message abuse! I have discovered a truly marvellous signature, which this margin is too narrow to contain. -Fermat Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes Young man, in mathematics you don't understand things. You just get used to them. - Neumann Re: What do you think? Hi Shivamcoder3013; In this thread because other people may want to work on the problems without actually seeing anyone else's work we use hidden replies. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Real Member Re: What do you think? Hi Bobby, "The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do." - Ted Nelson Re: What do you think? Hi phrontister; In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Real Member Re: What do you think? Hi Bobby, "The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do." - Ted Nelson Re: What do you think? Hi phrontister; In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: What do you think? Hi bobbym Got any expectation problems? The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: What do you think? bobbym wrote: New problem! The probability of 1 kid is 1/6. Two kids is 1/3. Three kids is 1/3 and four kids is 1 / 6. Everyone gets married and has kids. What is the chance that a couple has exactly 5 grandkids? A says 1 / 8 B says 4 / 129 C says (1 / 9)^4 D says ( 1 / 3 )^5 What if they have only one child and that child has 5 of his own childeren? We aren't given the probability of 5 children... Last edited by anonimnystefy (2012-11-05 09:43:52) The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: What do you think? Yikes! What in the name of a holy yoctometer are you talking about? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: What do you think? Seems you missed post #1907. I dug that problem from this thread... The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: What do you think? Hmmm, and now you are getting your just reward! In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: What do you think? I don't understand... The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: What do you think? Do you not know my age? Are ye unaware of the aging process? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: What do you think? bobbym wrote: Do you not know my age? Are ye unaware of the aging process? Well, you told me you were 9,99 and 92. I'm not sure which (if not all) version to believe. And yes, I am aware of it... What does that have to do with what I asked you? The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: What do you think? Well, us old people we forget things. Everyday I lose a couple of neurons and along with them a fact or two. I used to know what the heck that problem was talking about but I forgot. That is the price you pay for not answering problems promptly. But if I had to bet my oatmeal on someone, I would bet on B! That there is one smart fella! In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: What do you think? bobbym wrote: Well, us old people we forget things. Everyday I lose a couple of neurons and along with them a fact or two. I used to know what the heck that problem was talking about but I forgot. That is the price you pay for not answering problems promptly. But if I had to bet my oatmeal on someone, I would bet on B! That there is one smart fella! If you read the problem you will see that we are not given the probability that a couple has 5 children... The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: What do you think? I do not think you need that information. Four children could marry and produce 5 grandchildren could they not? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: What do you think? But for the expected value, I need to include all possibilities of getting 5 grandchildren including having 5 children, each having 1 child and having only 1 child who has 5 children of his own. Thus, I need the information about the probability of having 5 children... The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: What do you think? Who said anything about expected value? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: What do you think? Sorry, probability. But we still need the probability of having 5 children... Last edited by anonimnystefy (2012-11-05 10:12:26) The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: What do you think? It is not necessary to know that. You can derive it from the other information. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: What do you think? But then the problem gets much tougher... The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: What do you think? Not necessarily, if you remember the methods you have been taught. If you remember Richard. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: What do you think? Feynmen? You only mentioned that story where he used DUIS. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: What do you think? Nope! That story is not just for DUIS. It says a lot more than that. Feynmann is telling us something important there... In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=238927","timestamp":"2014-04-20T16:47:19Z","content_type":null,"content_length":"42483","record_id":"<urn:uuid:3f311721-0f5f-4aaf-88a7-394d2aedc9ce>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
Financial Mathematics Seminars Financial Mathematics Seminars - October 25, 2000 4:30 - 5:30 p.m. Marcel Rindisbacher, Joseph L. Rotman School of Management, University of Toronto This talk and associated article extend the standard continuous time financial market model pioneered by Samuelson (1969) and Merton (1971) to allow for insider information. We prove that if the investment horizon of an insider ends after his initial information advantage has disappeared, an insider has arbitrage opportunities if and only if the anticipative information is so informative that it contains zero-probability events given initial public information. When it ends before or anticipative information does not contain such events we derive expressions for optimal consumption and portfolio policies, which allow to analyze, how the anticipative information affects optimal strategies of ins iders. Optimal insider policies are shown not to be fully revealing. Individually, anticipative information is of no value and therefore does not affect the optimal behavior of insiders if and only if it is independent from public information. We show that arbitrage opportunities allow to replicate arbitrary consumption streams such that Merton's consumption-investment problem with general convex von Neumann-Morgenstern preferences has no solution whenever investment horizons are longer than resolution times of signals. If the true insider signal is perturbed by independent noise this problem can be avoided. But since in this case non-insiders will never learn the anticipative information we argue that this is not appropriate to capture important features of insider information. We also show that the valuation of contingent claims measurable with respect to the public information by arbitrage is invariant to insider information if it does not allow for arbitrage opportunities. In contrast contingent claims have no value for insiders with anticipative information generated by signals with continuous distributions. 6:00 - 7:00 p.m. S. David Promislow, Department of Mathematics & Statistics, York University A life annuity is a contract which promises a stream of payments at fixed times and of fixed amounts, with the provision that the purchaser must be alive at the time of each payment in order to collect. The pricing of such contracts involves an assessment of both future interest rates and future mortality experience. This talk will discuss a joint project with M. Milevsky, which considers the problem of valuing options on these mortality-contingent claims. A typical product of this type would give the option holder the right to purchase a life annuity at some future date, at a price which is guaranteed now. Although these do not appear to be sold directly at the present time, many U.S. insurance companies offer this type of option as an additional benefit to holders of their tax sheltered savings plans. The valuation of annuity options requires a somewhat different approach towards mortality measurement than the traditional actuarial technique. In order to model the uncertainty in future mortality, one must view the force of mortality ( hazard rate) as a stochastic process, rather than a fixed function of time. We show that under certain natural assumptions, both the mortality and interest rate risk can be hedged, and the option to annuitize can be priced by constructing a replicating portfolio involving insurance, annuities, and default-free bonds. Both discrete time and continuous time models will be discussed.
{"url":"http://www.fields.utoronto.ca/programs/cim/financial_math/finance_seminar/00-01/abstract10-25.html","timestamp":"2014-04-19T22:25:47Z","content_type":null,"content_length":"12469","record_id":"<urn:uuid:eea53cd2-17bc-415c-8c6e-6332932835a7>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
Citizendium - building a quality free general knowledge encyclopedia. Click here to join and contribute—free Many thanks December donors; special to Darren Duncan. January donations open; need minimum total $100. Let's exceed that. Donate here. By donating you gift yourself and CZ. Green's function From Citizendium, the Citizens' Compendium In physics and mathematics, Green's functions are auxiliary functions in the solution of linear partial differential equations. Green's function is named for the self-taught English mathematician George Green (1793 – 1841), who investigated electricity and magnetism in a thoroughly mathematical fashion. In 1828 Green published a privately printed booklet, introducing what is now called the Green function. This was ignored until William Thomson (Lord Kelvin) discovered it, recognized its great value and had it published nine years after Green's death. Bernhard Riemann gave it the name "Green function".^[1] Let L[x] be a given linear differential operator in n variables x = (x[1], x[2], ..., x[n]), then the Green function of L[x] is the function G(x,y) defined by $L_\mathbf{x} G(\mathbf{x},\mathbf{y}) = \delta(\mathbf{x}- \mathbf{y}),$ where δ(x-y) is the Dirac delta function. Once G(x,y) is known, any differential equation involving L[x] is formally solved. Suppose we want to solve, $L_\mathbf{x} \,\phi(\mathbf{x}) = \rho(\mathbf{x})$ for a known right hand side ρ(x). The formal solution is $\phi(\mathbf{x}) = \int\; G(\mathbf{x},\mathbf{y})\; \rho(\mathbf{y})\; \mathrm{d}\mathbf{y}.$ The proof is by verification, $L_\mathbf{x} \,\phi(\mathbf{x}) = \int\;L_\mathbf{x} \; G(\mathbf{x},\mathbf{y})\; \rho(\mathbf{y})\; \mathrm{d}\mathbf{y} = \int\;\delta(\mathbf{x}- \mathbf{y})\;\rho(\mathbf{y}) \mathrm{d}\ mathbf{y} = \rho(\mathbf{x})$ where in the last step the defining property of the Dirac delta function is used. The integral operator that has the Green function as kernel may be seen as the inverse of a linear operator, $L_\mathbf{x}\;\phi(\mathbf{x}) = \rho(\mathbf{x}) \quad\Longrightarrow \quad \phi( \mathbf{x}) =L_\mathbf{x}^{-1}\;\rho(\mathbf{x}) = \int G(\mathbf{x},\mathbf{y}) \rho(\mathbf{y})\;\mathrm{d}\ mathbf{y} .$ It is illuminating to make the analogy with matrix equations. Let $\mathbb{L}$ and $\mathbb{G}$ be n×n matrices connected by $\mathbb{L} \mathbb{G} = \mathbb{E}\quad \Longleftrightarrow \quad \left(\mathbb{L} \mathbb{G}\right)_{ij} = \delta_{ij}, \quad\hbox{i.e.,}\quad \mathbb{G} = \mathbb{L}^{-1},$ then the solution of a matrix-vector equation is $\mathbb{L}\boldsymbol{\phi} = \boldsymbol{\rho}\quad \Longrightarrow \quad \phi_i = \sum_{j} \mathbb{G}_{ij} \rho_j.$ Make the correspondence i ↔ x, j ↔ y, and compare the sum over j with the integral over y, and the correspondence is evident. We consider a case of three variables, n = 3 with x = (x, y, z). The Green function of $abla^2 \equiv \left( \frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}\right)$ $G(\mathbf{x},\mathbf{y}) = -\frac{1}{4\pi} \frac{1}{|\mathbf{x}-\mathbf{y}|}.$ As an important example of this Green function we mention that the formal solution of the Poisson equation of electrostatics, reading $abla^2 \Phi(\mathbf{x}) = -\frac{1}{\epsilon_0} \rho(\mathbf{x}),$ where ε[0] is the electric constant and ρ is a charge distribution, is given by $\Phi(\mathbf{x}) = \frac{1}{4\pi \epsilon_0} \iiint\; \frac{\rho(\mathbf{y})}{|\mathbf{x}-\mathbf{y}|} \;\mathrm{d}\mathbf{y}.$ $abla^2 \Phi(\mathbf{x}) = \frac{1}{4\pi \epsilon_0} \iiint\; \rho(\mathbf{y}) abla^2\frac{1}{|\mathbf{x}-\mathbf{y}|} \;\mathrm{d}\mathbf{y} = -\frac{4\pi}{4\pi\epsilon_0} \iiint\; \rho(\mathbf {y}) \delta(\mathbf{x}-\mathbf{y}) \;\mathrm{d}\mathbf{y} = -\frac{\rho(\mathbf{x})}{\epsilon_0}.$ The integral form of the electrostatic field may be seen as a consequence of Coulomb's law. The field at the point x due to the charge dQ = ρ(y)dy is equal to $d\Phi(\mathbf{x}) = \frac{dQ}{4\pi\epsilon_0 |\mathbf{x}-\mathbf{y}|}.$ The field is additive in the charges, so integration gives the total field at x. Proof of Green function of ∇^2 Without loss of generality we take x as the origin (0, 0, 0) and replace y by r = (x, y, z) in the above formulation. The length of r is indicated by r. The proof uses Green's theorem: $\iiint\limits_{V_a} \Big( \phi abla^2\frac{1}{r} - \frac{1}{r} abla^2\phi\Big)\, d V = \iint\limits_{S_a} \big(\phi \boldsymbol{abla}\frac{1}{r}\big) \cdot d\mathbf{S} - \iint\limits_{S_a} \big (\frac{1}{r} \boldsymbol{abla}\phi\big) \cdot d\mathbf{S},$ where V[a] is a sphere with radius a and S[a] is the surface of this sphere. The smooth test function φ and its gradient vanish for large r, $\phi(x,y,z) =0, \quad \boldsymbol{abla}\phi(x,y,z) = \mathbf{0}\quad \hbox{for}\quad r \ge R \quad\hbox{with}\quad r \equiv \sqrt{x^2+y^2+z^2}.$ Further we notice that $abla^2 \frac{1}{r} = 0\quad \hbox{for}\quad r \in U_\epsilon,$ because r ≠ 0 in that region (see the figure, where the region is indicated in yellow). This result is most easily proved if we recall that in spherical polar coordinates $abla^2\frac{1}{r} = \frac{1}{r} \frac{\partial^2 r}{\partial r^2} \frac{1}{r} = \frac{1}{r} \frac{\partial^2 }{\partial r^2} 1 = 0.$ First apply Green's theorem to the large sphere of radius R $\iiint\limits_{V_R} \phi abla^2\frac{1}{r}\, d V = \iiint\limits_{V_R} \frac{1}{r} abla^2\phi\, d V + \iint\limits_{S_R} \big(\phi \boldsymbol{abla}\frac{1}{r}\big) \cdot d\mathbf{S} - \iint\ limits_{S_R} \big(\frac{1}{r} \boldsymbol{abla}\phi\big) \cdot d\mathbf{S} = \iiint\limits_{V_R} \frac{1}{r} abla^2\phi\, d V$ because by assumption φ and its gradient vanish on S[R]. We consider the integral on the right hand side and we will show that $\iiint\limits_{V_R} \frac{1}{r} abla^2\phi\, d V = -4\pi \phi(\mathbf{0}),$ from which the result to be proved follows directly. The main trick is to write $\iiint\limits_{V_R} \frac{1}{r} abla^2\phi\, d V = \lim_{\epsilon \rightarrow 0} \iiint\limits_{U_\epsilon} \frac{1}{r} abla^2\phi\, d V$ and to consider first the integral over U[ε] (the yellow domain in the figure) for non-zero, but small, ε. After the integral has been evaluated, the limit for zero ε is taken. Since U[ε] has two surfaces, Green's theorem cannot be applied directly, and therefore we write (see the figure), $\iiint\limits_{U_\epsilon} = \iiint\limits_{V_R} - \iiint\limits_{V_\epsilon}$ and apply Green's theorem to the two terms. Recalling that we saw already the first term, we get \begin{align} \iiint\limits_{U_\epsilon} \frac{1}{r} abla^2\phi\, d V &= \left[ \iiint\limits_{V_R} \phi abla^2\frac{1}{r}\, d V - \iiint\limits_{V_\epsilon} \phi abla^2\frac{1}{r}\, d V \right] +\iint\limits_{S_\epsilon} \big(\phi \boldsymbol{abla}\frac{1}{r}\big) \cdot d\mathbf{S} - \iint\limits_{S_\epsilon} \big(\frac{1}{r} \boldsymbol{abla}\phi\big) \cdot d\mathbf{S}\\ &= \left[\ iiint\limits_{U_\epsilon} \phi abla^2\frac{1}{r}\, d V \right] +\iint\limits_{S_\epsilon} \big(\phi \boldsymbol{abla}\frac{1}{r}\big) \cdot d\mathbf{S} - \iint\limits_{S_\epsilon} \big(\frac{1} {r} \boldsymbol{abla}\phi\big) \cdot d\mathbf{S}\\ \end{align} The integral between square brackets is zero because ∇^2(1/r) is zero on U[ε]. The last integral can be shown to vanish for small ε. Because φ and its gradient are smooth and finite, and r is constant (equal to ε) on the surface, we may write for small ε $\iint\limits_{S_\epsilon} \big(\frac{1}{r} \boldsymbol{abla}\phi\big) \cdot d\mathbf{S} \approx \frac{\langle\mathbf{n}\cdot \boldsymbol{abla}\phi(\epsilon)\rangle}{\epsilon} \iint\limits_{S_\ epsilon}dS = 4\pi \epsilon \langle\mathbf{n}\cdot \boldsymbol{abla}\phi(\epsilon)\rangle \rightarrow 0,$ where we assumed that $\boldsymbol{abla}\phi(\mathbf{r})\cdot d\mathbf{S} = \boldsymbol{abla}\phi(\mathbf{r})\cdot\mathbf{n}\;dS \approx \langle \boldsymbol{abla}\phi(\epsilon)\cdot\mathbf{n}\rangle \;dS$ and that the value of the gradient averaged over the surface may be taken out of the integral. The remaining surface integral is equal to 4πε^2. In order to evaluate the final integral we use $\boldsymbol{abla} \frac{1}{r} = - \frac{\mathbf{r}}{r^3} \equiv -\frac{\mathbf{e}_r}{r^2} \qquad \hbox{and}\qquad d\mathbf{S} = \mathbf{e}_r \;r^2\sin\theta\,d\phi d\theta$ so that $\iint\limits_{S_\epsilon} \big(\phi \boldsymbol{abla}\frac{1}{r}\big) \cdot d\mathbf{S} = -\iint\limits_{S_\epsilon} \phi(\mathbf{r}) \sin\theta\,d\phi d\theta = -\phi(\epsilon) \; 4\pi.$ Here we assumed that the test function φ(r) is constant over the surface (isotropic) for small ε. The limit ε → 0 gives the desired result. $\iiint\limits_{V_R} \phi abla^2\frac{1}{r}\, d V = \iiint\limits_{V_R} \frac{1}{r} abla^2\phi\, d V = -4\pi \phi(\mathbf{0}) = -4\pi \iiint\limits_{V_R} \delta(\mathbf{r}) \phi(\mathbf{r}) \, d ┃ $abla^2\frac{1}{r} = - 4\pi \delta(\mathbf{r})$ ┃ 1. ↑ M. Kline, Mathematical Thought from Ancient to Modern Times, Oxford University Press, New York (1972) p. 683 • P. Roman, Advanced Quantum Theory, Addison-Wesley, Reading, Mass. (1965) Appendix 4. • I. M. Gel'fand and G. E. Shilov, Generalized Functions, Vol. 1, Academic Press, New York (1964)
{"url":"http://en.citizendium.org/wiki/Green's_function","timestamp":"2014-04-18T13:10:00Z","content_type":null,"content_length":"41203","record_id":"<urn:uuid:fbf525a8-819e-43a6-9833-bd7f6b79021d>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
CS714: Machine Learning Fall 2013 This introductory course on machine learning will give an overview of many concepts, techniques, and algorithms in machine learning that are now widely applied in scientific data analysis, data mining, trainable recognition systems, adaptive resource allocators, and adaptive controllers. The emphasis will be on understanding the fundamental principles that permit effective learning in these systems, realizing their inherent limitations, and exploring the latest advanced techniques employed in machine learning. Topics include: • Classification and linear regression • Support vector machines • Ensemble methods, boosting algorithms, random forest • Learning theory: bias-variance, uniform convergence, VC dimension • Mixtures models, EM algorithm and hidden Markov models • Structured prediction • Deep learning Time: Monday/Wednesday 4:40 pm -6:00 pm; Location: Russ 155 Shaojun Wang 387, Joshi Center (937) 775-5140 Office hours: Monday/Wednesday 3:30PM-4:30PM K. Murphy Machine Learning: A Probabilistic Perspective MIT Press, 2012. T. Hastie, R. Tibshirani and J. Friedman The Elements of Statistical Learning: Data Mining, Inference and Prediction Springer, 2nd Edition, 2009. V. Vapnik The Nature of Statistical Learning Theory Springer, 2nd Edition, 2000. Course Grades and Workload Four Homeworks 70% Project or Final Exam 30% Probability and Statistics Linear Algebra Programming language: matlab, C++, Java
{"url":"http://cecs.wright.edu/~swang/cs714/","timestamp":"2014-04-18T03:40:52Z","content_type":null,"content_length":"2563","record_id":"<urn:uuid:a46b57b2-eb49-4e72-a7ca-fe70b3284c3d>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
R: st: -cmp for mixed and nonrecursive process? Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] R: st: -cmp for mixed and nonrecursive process? From Jordana Rodrigues Cunha <jordana.rodrigues@unibo.it> To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu> Subject R: st: -cmp for mixed and nonrecursive process? Date Tue, 7 Dec 2010 10:02:08 +0000 Dear Robert thank you very much for the response, I've taken a bit of time to understand better my problem and after answer the questions you've made, I rethink the model differently: First of all: I ran an OLS regression for the 1st equation, to check the significance level of X and W in Y: 1)reg Y X W var1 var2 var3, where: Y is a continuous variable no time-varying; X is binary dummy that varies from 0 to 1 W is an ordinal variable that varies from 1 to 5 var* are controls In managing the possibility of bias of the OLS regression; I decided to do a Hausmann test using a 2 stage regression, using Z1 and Z2 as IV to check for the endogeneity in this way: 2)probit X Z1 var4 var5 var6 predict res_X 3) oprobit W Z2 var4 var5 var6 predict res_W* 4) reg Y X resX W res_W1 res_W2 res_w3 res_W4 res_W5 var1 var2 var3 The coefficient of the residuals are not significant correlated to Y, meaning that neither X or W are correlated to Y error term and so I conclude that they are not endogenous in the fisrst equation. Are my results valid? I tried to use cmp (Y = X W var1 var2 var3) (X = Z1 var4 var5 var6) (W = Z2 var4 var5 var6), ind ($cmp_cont $cmp_probit $cmp_oprobit) and I can get convergence even after 100 interactions even after setting the scaled gradient tolerance to a value larger than its default of 10(-5) to 10(-3). obs: your precedent questions: 1 -Are these variables time-varying or not? No, they aren't. 2- Do you have strong instruments for any of them? If so, how many? Yes, I have strong instruments for both of the variables which I would like to test the endogeneity; 3- How many control variables are you using? in the first equation I am using 6 control variables (plus YEAR controls, I have 37 different years, 10 industry sectors controls and controls for 65 countries ) for the 2nd and the 3rd variables I am using 3 (more YEAR controls) 4- Can you bifurcate the ordinal variable? I've tried dot do so, and I lost significance, I've built it in a really ordered scale, where at each level foward I increase complexity respect to the precedent level, so I prefere do not do it if it would be possible Jordana Rodrigues Cunha PhD. Candidate University of Bologna Department of Management Via Capo di Lucca, 34, 1st floor 40126 ? Bologna, ITALY Fixed line: 0039 (051) 20 98 073 Fax: 0039 (051) 20 98 074 Inizio: owner-statalist@hsphsun2.harvard.edu [owner-statalist@hsphsun2.harvard.edu] per conto di Robert A Yaffee [bob.yaffee@nyu.edu] Inviato: giovedì 25 novembre 2010 16.48 Fine: statalist@hsphsun2.harvard.edu Oggetto: Re: st: -cmp for mixed and nonrecursive process? I have a few preliminary questions? Are these variables time-varying or not? Do you have strong instruments for any of them? If so, how many? Can you impose constraints? Have you tested for identification of your model with an order or rank test? If the parameters are time-varying, you will also have to be concerned with the stability of the feedback loops? Assuming that you can impose enough constraints or have enough instruments, and that the resulting model is identified, you might want to consider using a structured equation model approach with a polyserial-polychoric covariance matrix as inputs. Check out Stas Kolenkov's work on confa for the application of such input. How many control variables are you using? Can you bifurcate the ordinal variable? Robert A. Yaffee, Ph.D. Research Professor Silver School of Social Work New York University Biosketch: http://homepages.nyu.edu/~ray1/Biosketch2009.pdf CV: http://homepages.nyu.edu/~ray1/vita.pdf ----- Original Message ----- From: Jordana Rodrigues Cunha <jordana.rodrigues@unibo.it> Date: Wednesday, November 24, 2010 10:16 am Subject: st: -cmp for mixed and nonrecursive process? To: "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu> > Dear statalisters, I really need your help, seems that this is an > impossible mission. I have consulted all the precedent faq's to arrive > until here, but without success. > I am estimating the effects of X, Z and W over Y, where they are (in > sequence): a dummy, an ordinal, a dummy and a continuous variable. The > three independent variables are linked by a nonrecursive looping, > meaning that they are linked by reciprocal feedbacks: X determines Z > and W, as well as Z determines X and W and W determines Z and Y. > X = a1+ bZ + cW + λ1Controls + e1; (binary dummy that varies from 0 to > 1) > Z = a2 + dX + eW + λ2Controls + e2;(ordinal variable that varies form > 1 to 5) > W = a3 + fX + gZ + λ3Controls + e3 ; (binary dummy that varies from 0 > to 1) > the full model would be: > Y = a4 + hX + iZ + jW + λ4Controls + e4 (continuous variable) > I have made simple probits and ordered probits to check the > relationship among the independent variables and I executed -cmp > models to check the correlations among their error terms (in pairs of > variables), confirming that they were really non-independent of each > other. The problem is that as -cmp doesn't allow for reciprocal > interaction among the variables and I included the second variable in > the first equation and omitted the first variable in the second > equation and so on. I have run, a naive OLS where the coefficients of > X and Z were significant but not the coefficient of W. I think that > the best would run a SEM to use the 3 equations of X, Z and W with the > fourth Y in the full model structured to allow for endogeneity but I > have two main problems: > 1- I cannot execute a Hausmann test ( to check for endogeneity in the > full model) because when I ask for the residuals prediction after > running oprobit (for estimate the variable Z) this option is not > allowed and so I cannot regress the full model with the residual of Y > and check the significance of its coefficient. > 2- I have four different functional distributions and so if I would > like to do 2sls or 3sls in different stages I wouldn't know how to > indicate the nature of the distribution for each variable and 3sls > assumes that all the variables are continuous, right? > 3 - I am using the same controls in all the equations, this could be a > problem? > Please, somebody could give me an advice? I hope I had been clear in > explaining, thank you all in advance, > jordana > * > * For searches and help try: > * http://www.stata.com/help.cgi?search > * http://www.stata.com/support/statalist/faq > * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2010-12/msg00245.html","timestamp":"2014-04-18T15:47:10Z","content_type":null,"content_length":"14874","record_id":"<urn:uuid:172237d2-cf38-4cd3-b39e-ca0e67894ad5>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics Coupled random fixed point theorems for nonlinear contractions in partially ordered metric spaces. (English) Zbl 1176.54030 Summary: Let $\left(X,\le \right)$ be a partially ordered set and suppose there is a metric such that is a complete separable metric space and $\left({\Omega },{\Sigma }\right)$ be a measurable space. In this article, a pair of random mappings $F:{\Omega }×\left(X×X\right)\to X$ $g:{\Omega }×X\to X$ , where has a mixed -monotone property on , and satisfy a certain nonlinear contractive condition, are introduced and investigated. Two coupled random coincidence and coupled random fixed point theorems are proved. These results are random versions and extensions of recent results of the authors [ V. Lakshmikantham and Lj. Ćirić, Nonlinear Anal., Theory Methods Appl. 70, No. 12 (A), 4341–4349 (2009; Zbl 1176.54032 )] and include several recent developments. 54H25 Fixed-point and coincidence theorems in topological spaces 54F05 Linearly, generalized, and partial ordered topological spaces 47H40 Random operators (nonlinear) 34B15 Nonlinear boundary value problems for ODE
{"url":"http://zbmath.org/?q=an:1176.54030","timestamp":"2014-04-19T22:14:24Z","content_type":null,"content_length":"23027","record_id":"<urn:uuid:f9b9a7ce-c430-470b-b8c6-5ddbdefb2fc2>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
max shearing stress problem Hey guys, I got a problem that goes like: A shaft with a circular cross section is subjected to a torque of 120ft-lb. Its shaft's diameter is 0.750 in and its length is 15 in., determine the maximum shearing stress. I did the following: I tried to use shearing stress = Tr/J in which i plugged in as followed (120*.750/2)/(pi/32)*.750^4 and i get 1448, but the answer should be17.39ksi. I assume i am not getting the right answer because I am not factoring in the length but I am not sure. Any help is appreciated, thanks
{"url":"http://www.physicsforums.com/showthread.php?t=88498","timestamp":"2014-04-19T19:52:06Z","content_type":null,"content_length":"22092","record_id":"<urn:uuid:52163508-17e2-4cd2-97ae-fead9a0d7fba>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
Below are the first 10 and last 10 pages of uncorrected machine-read text (when available) of this chapter, followed by the top 30 algorithmically extracted key phrases from the chapter as a whole. Intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text on the opening pages of each chapter. Because it is UNCORRECTED material, please consider the following text as a useful but insufficient proxy for the authoritative book pages. Do not use for reproduction, copying, pasting, or reading; exclusively for search engines. OCR for page 63 5 Small-Area Estimation With the notable exception of the estimation of nonprofit R&D fund- ing amounts, the National Center for Science and Engineering Statistics (NCSES) primarily uses standard survey estimates—that is, direct survey- weighted totals and ratios—in its tabulations for National Patterns. Design- based survey regression estimators, although they can be more accurate in many survey applications, are often not used for lack of exploratory studies to develop good survey predictors (whose totals are known or measured accurately by other instruments). In relation to consideration of more sophisticated estimation techniques, the charge to the steering com- mittee included the issue of the responsiveness of the information content to user’s needs. The workshop covered a set of techniques that could provide, for instance, estimates for state-level R&D funds for specific types of industries. This is a type of tabulation that NCSES believes its users would be very interested in obtaining. In addition, the subject of the distinction between measurement error and definitional vagueness, which is related to data quality, was raised during the discussion period; a summary of that discus- sion is included as a short section at the end of this chapter. OVERVIEW Julie Gershunskaya of the Bureau of Labor Statistics presented a survey of current methods for small-area estimation that have been found useful in various federal statistical applications. Such techniques have the potential to produce R&D statistics on more detailed domains for inclusion in future 63 OCR for page 63 64 NATIONAL PATTERNS OF R&D RESOURCES National Patterns reports. Currently, the National Patterns reports tabulate statistics on R&D funding primarily at the national level, but there are also state-level tabulations for some major categories of funders and performers for the current year. In addition, tabulations of industrial R&D funding are also available for about 80 separate North American Industrial Classifica- tion System (NAICS) codes, down to three digits of detail.1 These efforts to provide information for subnational domains are commended. Kei ­ oizumi K noted earlier in the workshop that many users would benefit from the publication of statistics on R&D funding for more detailed domains: pos- sibilities include providing R&D funding for substate geographic levels or for domains defined by states crossed by type of industry. There may also be interest in providing future tabulations for particular categories of col- leges and universities. A small area is defined as a domain of interest for which the sample size is insufficient to make direct sample-based estimates of adequate precision. These “small areas” in the context of R&D can be geographic entities, industrial types, socio­ emographic groups, or intersections of d geography, industry, or demography. Small-area estimation methods are ­ techniques that can be used, when the sample size is inadequate, to pro- duce reliable estimates by using various additional sources of information from other domains or time periods. However, such methods do rely on various assumptions about how that information links to the information from the domain of interest. Gershunskaya said at the outset that the best strategy to avoid reli- ance on small-area estimation is to provide for sufficiently reliable direct estimates for the domains of interest at the sample design stage. How- ever, it is typical for surveys carried out by federal statistical agencies to have insufficient sample sizes to support estimates for small domains ­ requested by the user communities. Hence, the need for small-area estimates is widespread. Gershunskaya differentiated between direct and indirect estimates. Direct estimates use the values on the variable of interest from only the sample units for the domain and time period of interest. They are usually unbiased or nearly so, but due to limited sample size, they can be unreliable. Indirect estimates “borrow strength” outside the domain or time period (or both) of interest and so are based on assumptions, either implicitly or explicitly. As a result of their use of external information, indirect estimates can have smaller variances than direct estimates, but they can be biased if the assumptions on which they are based are not valid. The objective therefore is to try to find an estimator with substantially reduced variance but with only slightly increased bias. 1 For 2007 data, see http://www.nsf.gov/statistics/nsf11301/pdf/ tab58.pdf [January 2013]. OCR for page 63 SMALL-AREA ESTIMATION 65 DIRECT ESTIMATORS Gershunskaya first reviewed the basic Horvitz-Thompson estimator and then discussed several modifications to it. Horvitz-Thompson Introducing some notation, let the quantity of interest be denoted by Yd for domain d of the population. In considering the application of these methods to R&D statistics, domains could be defined by states or by industries with a certain set of NAICS codes. Each sampled unit j has an associated sample weight, denoted wj, which is equal to the inverse of a unit’s probability of being selected.2 (The rough interpretation is that the weight corresponds to the number of population units represented by each sampled unit.) The Horvitz-Thompson estimator of Yd is ∑w y j j , j ∈sd where yj is the measurement of interest for sample unit j, and sd denotes the set of sampled units in domain d. The Horvitz-Thompson estimator may be unreliable, especially for small domains. To address this, there are various alternative direct estimators that may out-perform the Horvitz-Thompson estimator, especially when auxiliary data are available. Ratio Estimators To discuss these estimators, additional notation is needed. Let yj denote the measurement of interest for sample unit j, let xj denote auxiliary data for sample unit j (assumed univariate to start), and let Xd be the known population total for domain d, from administrative or census data. (Note that the dependence of both yj and xj on domain d is not explicitly indi- cated in the notation to keep things more readable.) In the case of R&D statistics, yj could be the R&D expenditure for company j, xj could be the total payroll for company j, and Xd could be the true population total payroll in a particular state. Then the ratio estimator, using sample data, is given by 2 In practice, survey weights are almost never design weights in the sense of being inverse selection probabilities; nonresponse adjustment or imputation (or both) change their properties (see, e.g., Särndal and Lundström, 2005). OCR for page 63 66 NATIONAL PATTERNS OF R&D RESOURCES ˆ Y HT ˆ ˆ ˆ Yd(R) = Xd Bd , where Bd = dHT , Xˆ d ˆ ˆ where YdHT and XdHT are the Horvitz-Thompson estimators of the respec- tive population totals. If there is a substantial correlation between R&D expenditure and payroll size, the ratio estimator may provide a marked improvement over the Horvitz-Thompson estimator. A particular case of the ratio estimator is given for the situation where xj equals 1 if j is in the dth domain and equals zero otherwise, which is referred to as the post-stratified estimator. In this case, letting Nd be the number of population units in the domain (assumed to be known), and letting ˆ( NdHT ) = ∑w j j ∈sd be the sample-based estimate of Nd, the post-stratified estimator can be written as ˆ Y (HT ) ˆ Yd(PS) = Nd d (HT ) . ˆ Nd The post-stratified estimator has improved performance in comparison with the Horvitz-Thompson estimator. However, when the domain sample size is small, the post-stratified estimator can still perform poorly. Generalized Regression Estimator The ratio estimator can be expressed as a special case of the General- ized Regression (GREG) estimator: ( ) T ˆ ˆ ˆ( Yd(GREG) = Yd(HT ) + Xd − XdHT ) ˆ Bd , where: Xd is a vector of known population totals for domain d, ˆ( XdHT ) is a vector of Horvitz-Thompson estimates of Xd, ˆ Yd(HT ) is a Horvitz-Thompson estimate of Yd, and ˆ Bd is a vector of coefficients (derived from the sample using a particular formula). OCR for page 63 SMALL-AREA ESTIMATION 67 This estimator also belongs to a variety called calibration estimators, as the second term here “corrects” (or “calibrates”) the Horvitz-Thompson estimator for Y using known population totals for X. Note that the estimator for Bd is based on sample data. When the sample size is small, this estimate may be unstable. To address this, one can ˆ pool the data over domains to produce a single B . The resulting modified direct estimator, known as the survey regression estimator, is expressed as follows: ( ) T ˆ ˆ ˆ( Yd(SR) = Yd(HT ) + Xd − XdHT ) ˆ B. Gershunskaya illustrated how this estimator might be applied to NCSES R&D data. Let Xim be the known population payroll in industry-type i and ˆ (HT ) state m, Xim be the Horvitz-Thompson estimate of payroll in industry- type i and state m, Xiˆ (HT ) be the Horvitz-Thompson estimate of payroll, national total for industry-type i, and Yi ˆ (HT ) be the Horvitz-Thompson estimate of R&D funds, national total for industry-type i. Then, one can compute ˆ Y (HT ) ˆ Bi = i (HT ) ˆ Xi using national data for industry i in the survey regression estimator to estimate ( ) T ˆ (SR ˆ (HT ˆ (HT Yim ) = Yim ) + Xim − Xim ) ˆ Bi . ˆ Gershunskaya pointed out that, although B in the survey regression estimator is based on a larger sample, the effective sample size still equals the domain sample size. To see why that is so, one can rewrite the survey regression estimator as ˆ ˆ j ∈sd ( Yd(SR) = Xd B + ∑ w j y j − x j B , ˆ ) which shows that the survey regression estimator is a sum of the fitted v ­ alues from a regression model based on predictors from the domain of interest, and it has a bias correction from weighting the residuals again from a regression using data only from that domain. Therefore, the effi- OCR for page 63 68 NATIONAL PATTERNS OF R&D RESOURCES ciency of the survey regression estimator depends on the variability of the residuals and on the domain sample size. INDIRECT ESTIMATORS Gershunskaya then moved to a description of indirect estimators. As she noted earlier, direct sample-based estimators are unbiased (or nearly so) but they may have unacceptably large variances. To overcome this problem, certain assumptions about similarity or relationships between areas or time periods (or both) are made, and these assumptions allow one to use more sample data, thus “borrowing strength.” Her first example of an indirect estimator was the synthetic estimator, which is a sample-based estimator for which the parameters estimated from larger (or combined) domains or from other time periods are applied to a small area. She then discussed the structure-preserving estimator, known as SPREE, and composite estimators. Synthetic Estimator To describe synthetic estimation, Gerhsusnkaya began with the usual direct estimate of the sample domain mean from a simple random sample, namely: 1 yd = nd ∑y j . j ∈sd Unfortunately, this estimator can be unreliable if the sample size in the domain is small, so one would want to use the data from the other domains to improve its reliability. One obvious candidate, assuming that means are constant over domains, would be the global average over all domains. The resulting estimator, 1 yd = ∑ yj , n j∈s is an example of the synthetic estimator. It is much more stable, but it is very likely to be substantially biased because the assumption of a common mean across domains will rarely hold. If there are auxiliary variables, a more realistic assumption than the assumption of a common mean would be to assume, for example, a common regression slope across domains. Consider again the survey regression estimator: OCR for page 63 SMALL-AREA ESTIMATION 69 Yd(SR) = Xd B + ∑ w j y j − x j B . ˆ ˆ j ∈sd ˆ ( ) This estimator can be depicted as survey regression equals model plus bias correction. The “model” part of the survey regression estimator turns out to be a synthetic estimator, ˆ ˆ Yd(Syn) = Xd B . To better understand synthetic estimation, consider an R&D example. A synthetic estimator of R&D expenditure in industry type i and state m is ˆ (Syn ˆ Yim ) = Xim Bi , ˆ Y (HT ) ˆ ˆ where Bi = i (HT ) and it is assumed that the common ratio Bi of R&D to ˆ Xi total payroll holds across all states in industry type i. NCSES has already used a similar approach to produce a Survey of Industrial R&D state estimator, which is described in Slanta and Mulrow (2004). For her example, Gershunskaya said, R&D for state m is estimated as ˆ ˆ Ym = Ym ,s + Ym ,c , where Ym ,s = ∑ ym , j j ∈s ˆ is the observed sample total for R&D in state m, and Ym ,c is a prediction of the nonsampled part of the population for R&D in state m, which is computed as I Ym ,c = ∑ RimYi ,c ˆ ˆ i =1 where Rim is the ratio of payroll in state m to national payroll total for industry i, and ( Yi ,c = ∑ w j − 1 yi , j ˆ j ∈s ) OCR for page 63 70 NATIONAL PATTERNS OF R&D RESOURCES is a prediction for the nonsampled part of R&D in industry type i. This approach relies on the assumption that in each industry type i, R&D is distributed among states proportionately to each state’s total payroll. Gershunskaya compared the state estimator from Slanta and Mulrow (2004) with the synthetic estimator based on a common industry slope. For simplicity, Gershunskaya considered the estimator for the whole popula- tion, rather than for only the nonsampled part. The Slanta-Mulrow (SM) estimator can then be expressed as I X ˆ YmSM) = ∑ im Yi (HT ) , ˆ( i = 1 Xi and the common industry slope estimator can be expressed as I ˆ Y (HT ) YmCIS) = ∑ Xim i (HT ) . ˆ( i =1 ˆ Xi Both estimators are synthetic estimators and are based on similar assump- tions. Notice that in the denominators, the Slanta-Mulrow estimator uses the population total Xi, and the common industry slope estimator uses the ˆ Horvitz-Thompson estimator Xi(HT ) of the population total Xi. It might be worth evaluating these two competing estimators using BRDIS data. If indeed R&D is correlated with payroll, the common industry slope estima- tor may prove to be preferable. SPREE Another synthetic estimator is SPREE, the structure preserving esti- mator. It is based on a two-dimensional table of estimates, with elements Cim, with one dimension indexed by i and running from 1 to I (e.g., type of industry) and the other dimension indexed by m and running from 1 to M (e.g., state). The Cim here represents the total of R&D funds for all industries of a certain type in a given state. SPREE assumes that initial estimates of individual cell totals, Cim, are available from a previous census or from administrative data, though as such they are possibly substantially biased. This approach also assumes that the sample from a current survey is large enough so that one can obtain direct sample-based estimates for the ˆ ˆ marginal totals, denoted Yi and Ym . The goal is to estimate the amount of R&D funding for each of the individual cells by updating them to be con- sistent with the marginal totals. Iterative proportional fitting (also known as raking) is a procedure that adjusts the cell totals Cim so that the modified OCR for page 63 SMALL-AREA ESTIMATION 71 table conforms to the new marginal estimates. The revised cell totals are the new small-area estimates. The implicit assumption is that the relative structure of the table is constant since the last census, that is, C ij Yij Ckj Ykj = Cil Yil Ckl Ykl for any combination of indices i, j, k, and l. In summary, Gershunskaya said, direct estimators are unbiased and should be used when the sample size is sufficient to produce reliable esti- mates. However, with small samples they have large variances. Synthetic estimators, in contrast, have smaller variances but they are usually based on strong assumptions, and therefore may be badly biased if their assump- tions do not hold. Composite Estimators Gershunskaya then turned to another type of indirect estimator, com- posite estimators. They are convex combinations of direct and synthetic estimators, which provide a compromise between bias and variance. They can be expressed as follows: Yd(C ) = ν dYd(Direct ) + (1 − ν d ) Yd(Model ) . ˆ ˆ ˆ The central question in using them is how one should choose the weights nd. One possible approach is to define weights on the basis of sample cover- ˆ age in the given area, e.g., selecting nd proportional to Nd / Nd . However, this method fails to account for variation of the variable of interest in the area. A second possibility is to use weights that minimize the mean squared error of the resulting estimator. This second method depends on potentially unreliable estimates of the mean squared error of composite parts. Methods Based on Explicit Models In contrast to these approaches that are based on implicit models, the final general category of estimators described by Gershunskaya covers methods based on explicit models. Explicitly stated modeling assump- tions allow for the application of standard statistical methods for model selection, model evaluation, the estimation of model parameters, and the OCR for page 63 72 NATIONAL PATTERNS OF R&D RESOURCES production of measures of uncertainty (e.g., confidence intervals, mean squared error) under the assumed model. Methods based on explicit models constitute the core of modern small-area methods. The most popular small-area methods are based on either the linear mixed model (for continuous variables) or the generalized linear mixed model (for binary or count data). Two types of models are commonly used, area-level and unit-level models. An area-level model (with assump- tions pertaining to the aggregate area level) is applied when only area-level a ­ uxiliary data are available (rather than auxiliary data for individual units). In this case, direct sample-based estimates play the role of individual data points in the model, with their sampling variances assumed to be known. Generally, area-level models are easier to apply than the unit-level models. One benefit from the application of these models is that they usually take into account the sample design.3 Unit-level models (with the assumptions based on relationships between individual respondents) require different and more detailed information (which is why they are seldom used by statistical agencies) and generally rely on assumptions of independence of units, assumptions that are often violated in clustered survey designs. But if the assumptions for unit-level models are tenable and the unit-level data are available, one would want to use them in place of area-level aggregated models for reasons of efficiency. However, some complications can arise when trying to account for the sample design. Fay-Herriot Small-Area Model Fay and Herriot (1979) introduced an area-level model in the context of the estimation of per capita income for small places. The authors used the following set of auxiliary variables: county level per capita income, the value of owner-occupied housing, and the average adjusted gross income per exemption. Fay-Herriot models are often represented using two-level model assumptions, the sampling model and the linking model. The sam- pling model states that the direct sample estimator estimates the true popu- lation parameter without bias and with a certain (sampling) error. The linking model makes certain assumptions (e.g., linear regression relation- ship) about the true underlying values. In the Fay-Herriot model, the sample-based estimate is ˆ Yd(Direct ) = θ d + ε d , 3 However, some areas may not have any sample. If areas are selected into the sample with unequal probabilities related to the true area means, bias may occur as a result. OCR for page 63 SMALL-AREA ESTIMATION 73 that is, the sum of an expected value plus an error term with zero mean and with its variance equal to the sampling variance of the direct sample estimate. The linking model for the mean can be written as θ d = Xd β + ν d . T This equation indicates that the mean of the sample estimate is expressed as ( ) a linear combination of the auxiliary variables Xd β plus a model-error, T nd, having mean zero and a constant variance, with the model error inde- pendent of the sampling error. The entire model can then be expressed as ˆ Yd(direct ) = Xd β + ν d + ε d , T which is a linear mixed model since it has both fixed and random effects. Under this model, the best unbiased linear estimator (in a certain well- defined sense) for qd has a composite form, as follows: θ d = γ dYd(Direct ) + (1 − γ d ) Xd β , ˆ ˆ T ˆ where A γd = A + Vd(Direct ) , ( Direct ) A is the variance of the random term in the linking model, and Vd is the sampling variance, which is assumed known. The above composite form shows that the direct estimates are shrunk toward the synthetic part, where the smaller A is (i.e., the better the linking model explains the underlying relationship), the more weight goes to the synthetic (i.e., model-based) part. Similarly, areas with estimates with larger sampling variances also have more weight allotted to the synthetic part. R&D Example Gershunskaya then provided an example to show how one might produce small-area estimates of R&D funds for small domains defined by ˆ (Direct states and industry types. Let Yim ) be a direct sample-based estimator for R&D in industry i and state m from BRDIS. The direct sample estima- tor provides unbiased measurement of the unobserved truth qim, with some random error: OCR for page 63 74 NATIONAL PATTERNS OF R&D RESOURCES ˆ (Direct Yim ) = θ im + ε im (sampling model). The assumption is that ignoring an error term, the state-level R&D funds in industry type i are proportional to the state’s total payroll, which can be expressed as θ im = Xim Bi + ν im (linking model). The resulting small-area estimator can be written as ˆ Ai ˆ ( ˆ (Direct ˆ ) θ im = Xim Bi + γ im Yim ) − Xim Bi , where γ im = ˆ ˆ ( Direct . Ai + Vim ) Estimation of Bi and Ai are straightforward from the data. (A cautionary note: this application differs from the formal Fay-Herriot model since the variances of eim must be estimated and can be inaccurate if they are based on small sample sizes.) Unit-Level Small-Area Modeling An example of a unit-level model is a small-area model of areas planted with corn and soybeans for 12 Iowa counties (Battese, Harter, and Fuller, 1988). The survey data consisted of Ydj , the number of hectares of corn (or soybeans) per segment j in county d. The auxiliary variables, collected by satellite, were x1,dj , the number of pixels planted with corn per seg- ment j in county d, and x2,dj , the number of pixels planted with soybean per segment j in county d. The model considered in the paper is called the nested-error regression: Ydj = β0 + β1x1,dj + β 2 x2,dj + ν d + ε dj , where the error terms are independent. The resulting small-area estimator is ˆ ( θ d = γ d yd + (1 − γ d ) β0 + β1x1d + β 2 x2 d , ˆ ˆ ˆ ) where σν2 γd = σ + σ e / nd . 2 ν 2 OCR for page 63 SMALL-AREA ESTIMATION 75 Note that the larger the sample size of an area, the more relative weight is placed on the sample part of the weighted average. The regression c ­ oefficients and the error variances are easily estimated from the data. Both the Fay-Herriot and the Battese, Fuller, and Harter models are examples of linear mixed models, Gershunkaya noted. Smoothing Over Time None of the models presented so far examined the potential from use of the dependent variable collected from previous time periods. This is extremely relevant for R&D statistics, Gershunskaya said, since many of the surveys and censuses used as inputs for National Patterns have a rela- tively long, stable, historical series, often going back to the 1950s. As an example of a small-area spatial-temporal model, Gershunskaya described the results in Rao and Yu (1994). For areas d = 1, …, D and time periods t = 1, …, T, assume that ˆ( YdtDirect ) = Xdt β + ν d + udt + ε dt , T where the udt are random error terms that follow a first-order autoregres- sive process. In this case, a good small-area estimator (for the current period) is a weighted sum of the synthetic estimator for the current period and model residuals from the previous time periods, namely, (Yˆ ) T −1 θ dT = γ dT YdT ) + (1 − γ dT ) XdT β + ˆ ˆ (Direct T ˆ ∑γ dt ( Direct ) dt T ˆ − Xdt β . t =1 Modifications for Discrete Dependent Variables Gershunskaya then briefly discussed models for discrete dependent variables. The most common case is when yd is a binary variable. Assume that the quantity of interest is the small-area proportion Pd = Nd 1 ∑ ydj . − jε d Then one can formulate an area-level Fay-Herriot-type model using direct sample-based estimates of proportions. However, the area-level approach has shortcomings, one of which is that some areas may have no sample units reporting R&D and thus will be dropped from the model. A unit-level generalized linear mixed model may be more efficient in this case. Assume OCR for page 63 76 NATIONAL PATTERNS OF R&D RESOURCES that ydj is 1 with probability pdj and 0 with probability 1 – pdj; then the standard model in this situation has the following form: pdj = xdj β + ν d . T log 1 − pdj Implementing Small-Area Modeling for National Patterns Gershunskaya then indicated how NCSES could develop explicit small- area models using the National Patterns datasets. She first considered a unit-level model scenario. Assume that wages, employment, and possibly other covariates are obtained from administrative data for all businesses in the target population. Using sample data, one could establish a relationship between R&D funding and auxiliary variables by fitting the parameters of some explicit model. One could then apply the results of this model fitting to the prediction of R&D in the nonsampled part of the above models. However, because there is no explicit question on state-by-industry R&D in the BRDIS questionnaire, a proxy for it would have to be derived. (Although possible, it would currently be a laborious effort.) In such model- ing, it would be important to account for the sample design in the variance estimation, which is a serious complication. The second scenario proposed by Gershunskaya was for an area-level model. Here a current design-based ratio or regression estimator (or other area-level predictor(s), e.g., “true” population values available from an administrative file) could be used in the synthetic part of the composite estimator. It would also be useful to consider alternative direct estimators of R&D that could be used in the area-level model, Gershunskaya said, and she outlined a few possibilities of improved direct estimators (based on the theory developed by Sverchkov and Pfeffermann, 2004). Let dm,j be 1 if company j reports R&D in state m, and 0 otherwise. If one does not have auxiliary information, an alternative direct sample estimator is ∑ (w j ) − 1 ym , j Ym = ∑ ym, j + ( Nm − nm ) ˆ j ∈s j ∈s ∑δ (w j ∈s m, j j −1). As an analogue of the ratio estimator, using company’s payroll xj or some other auxiliary variable (possibly payroll per employee), a modified form of the previous alternative direct sample estimator can be defined as OCR for page 63 SMALL-AREA ESTIMATION 77 ∑ (w j ) − 1 ym , j Ym = ∑ ym , j + Xm ,c ˆ j ∈s , j ∈s ∑δ (w m, j j ) − 1 xj j ∈s where Xm,c is the total payroll in the nonsampled portion of state m. Finally, as an analogue of the modified direct estimator, I Ym = ∑ ym , j + ∑ Xim ,c Bi + ( Nm − nm ) Mm,c , ˆ ˆ ˆ j ∈s i =1 where ∑y i, j 1 if company j reports R & D in industry i ˆ Bi = j ∈s , where δ i , j = ∑ δ i, j xj 0 otherwise j ∈s I ˆ ∑δ (w m, j j ) − 1 ym , j − ∑ xim , j Bi ˆ Mm ,c = j ∈s i =1 . ∑δ (w m, j j −1 ) j ∈s Final Considerations and Discussion Gershunskaya concluded her review of small-area estimation methods with a set of important considerations: • It is important to plan for estimation for domains of interest at the design stage to ensure that one has direct estimates of some reliability to start off. • Finding a set of good auxiliary variables is crucial for success in small-area modeling. • Small-area estimation methods are based on assumptions, and therefore evaluation of the resulting estimates is vital. • Using a statistical model supports a systematic approach for a given problem: (a) the need for explicitly stated assumptions, (b) the need for model selection and checking, and (c) the production of measures of uncertainty. • It is important to account for the sample design (unequal prob- abilities of selection and clustering effects) in the model formulation and fitting. OCR for page 63 78 NATIONAL PATTERNS OF R&D RESOURCES Joel Horowitz pointed out that these models have a long history, and there are methodologies that have been developed that avoid the assumption of linearity or proportionality and that can accommodate estimation errors in the predictors. Gershunskaya agreed that there were non­ arametric p models that had such properties. She said that her presentation was already detailed and therefore some complicated issues could not be included. Eric Slud added that the sample survey context made some of this particular application more difficult than in the literature that Horowitz was refer- ring to. Slud added that the key issue in applying these techniques is finding predictive covariates. Often, one is restricted to synthetic variables, often measured in more aggregated domains. Another topic that emerged during the floor discussion was whether there were likely to be productive predictors available in this context. Slud said that clearly there were opportunities to use synthetic variables by using information at higher levels of aggregation. However, the availability of useful predictors at the respondent level was less clear and would be known only by subject-matter researchers conducting some exploratory data analysis. John Jankowski said that one of NCSES’s highest areas of concern in terms of data presentation is the state and subnational distribution of R&D activity. He added that the business sector is the one for which this is most relevant. He said that a small firm in Massaschusetts sampled in BRDIS could have a weight of 250 so the resulting direct small-area estimates would likely be unreliable, but he noted that the Slanta and Mulrow (2004) paper was successful in reducing the magnitude of that problem. However, it could not address the small-area distribution by industrial category because in the Survey of Industrial R&D for the distribution of R&D by industry sector the funds were just assigned to the major industry category. Now, however, BRDIS has the entire distribution of R&D by industry sector and so there is a great deal more potential for the use of small-area estimation. Jankowski added that BRDIS also provides the geographic distribution of such funds. Even though geography and industrial sector are not simultaneously available, he said that it might now be possible to produce such estimates through some hard work, though it is not certain. Jankowski added as things stand now, if there is a large R&D-performing company that is 51 percent in one category and 49 percent in another, 100 percent would be assigned to the first category, and users would notice that. New technology gives us a chance to better distribute those funds to industrial categories. Christopher Hill was concerned that if the National Science Foundation provided small-area estimates, there would be situations in which experts in R&D funding would know that the estimates are incorrect because they have local information. Slud responded that this is the case for every set of OCR for page 63 SMALL-AREA ESTIMATION 79 small-area estimates. When such situations occur, they should be seen as opportunities to improve either the quality of the input data, the form of the model, or the variables included in the model. Many participants noted that model validation is an important part of the application of these tech- niques. Jankowski added that it would be unclear the form in which such estimates would be made publicly available. Slud pointed out that prior to application of this methodology, it is important to explore how sensitive the results are to the model assump- tions, which depends on the relative size of the sampling errors to the others that one might be able to quantify. Karen Kafadar pointed out that one advantage with this methodology is that since you get standard errors for your estimates, you can compare your estimates with ground truth and know whether they are or are not consistent. MEASUREMENT ERROR OR DEFINITIONAL VAGUENESS As noted above, an issue that arose during the workshop concerned the need to better understand sources of error underlying NCSES survey and census responses. Survey data are subject to sampling error and non­ sampling error, with nonsampling error often decomposed into nonresponse error and measurement error. As is almost always the case for survey estimates, NCSES does not have a complete understanding of the magni- tude of nonsampling error. Several participants suggested that it would be beneficial for NCSES to investigate this topic, possibly through greater use of ­ einterviews or comparison of survey and census responses with admin- r istrative records (see section on STAR METRICS in Chapter 4). In particular, several participants pointed out that it would be impor- tant to distinguish between true measurement error, that is, when the total R&D funding level is misreported, and differences in interpretation, for example, in distinguishing between what is applied research and what is development. It was suggested this issue could be addressed through the use of focus groups and other forms of cognitive research. Or a subsampling study could be carried out in which answers subject to possible definitional vagueness could be followed up and resolved. However, several participants acknowledged that such a study would be expensive and labor intensive.
{"url":"http://www.nap.edu/openbook.php?record_id=18317&page=63","timestamp":"2014-04-18T03:56:15Z","content_type":null,"content_length":"81520","record_id":"<urn:uuid:06298c26-6103-4280-a957-4ea90ff5abae>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Causes of loss of interest in String program String experts have decided after several decades experience that one should NOT think in terms of strings and branes in a geometry with compactified extra dimensions. But what you get from AdS/CFT are low-dimensional field theories in flat space being to an AdS space times a compact space, containing strings and branes. The radial AdS dimension encodes the RG flow, and the compact space (and the objects with extension in it) is "made from" the space of ground states of the field theory. From this perspective, string theory is the universal theory of emergent RG geometry in quantum field theory. At the moment, it only works properly for an emergent AdS space, but if the dS/CFT correspondence can be understood, then this will be true for spaces of positive curvature as well. (In dS/CFT the boundary is purely spacelike and lies in the infinite past and future, rather than being timelike as in AdS/CFT, so it's as if the timelike direction in the Lorentzian gravitational space is emerging from Euclidean field theory on a sphere in the infinite past.) So not only are people still doing flat-space string phenomenology, complete with branes and extra dimensions, but branes and extra dimensions have proved to be implicit in standard quantum field theory, where they emerge from the existence of a continuous degeneracy of ground states. That multidimensional moduli space of ground states is where the extra dimensions come from, in this case! Branes are domain walls separating regions in different ground states, strings are lines of flux connecting these domain walls. Furthermore, in gauge theories with a small number of colors, it looks like the extra dimensions will be a noncommutative geometry, it's only in the "large N" limit of many colors that you get ordinary space. (Consider that the noncommutative standard model of Connes et al is a theory of gravity on an "almost commutative" space - product of a Riemannian space and a finite noncommutative geometry - with the gauge bosons coming from gravity on the noncommutative part of the product geometry. This seems to be consistent with the picture coming from string theory.)
{"url":"http://www.physicsforums.com/showpost.php?p=3263502&postcount=94","timestamp":"2014-04-21T12:13:54Z","content_type":null,"content_length":"9707","record_id":"<urn:uuid:73f4f209-50ad-40df-88bb-b4736973a26f>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00222-ip-10-147-4-33.ec2.internal.warc.gz"}
SuperMemo Algorithm Home News Shopping FAQ Library Download Help Support Contents : Technical articles : Algorithm Algorithm used in SuperMemo 8 for Windows Dr P.A.Wozniak, Sep 10, 1995 Below you will find a general outline of the sixth major formulation of the repetition spacing algorithm used in SuperMemo. It is referred to as Algorithm SM-8 since it was first implemented in SuperMemo 8. Although the increase in complexity of Algorithm SM-8 as compared with its predecessor, Algorithm SM-6, is incomparably greater than the expected benefit for the user, there is a substantial theoretical and practical evidence that the increase in the speed of learning resulting from the upgrade may fall into the range from 30 to 50%. Note that a newer version of the algorithm exists: Algorithm SM-11. ┃ Historic note: earlier releases of the algorithm ┃ ┃ ┃ ┃ Although the presented algorithm may seem complex, you should find it easier and more natural once you understand the evolution of individual concepts such as E-Factor, matrix of optimum ┃ ┃ intervals, optimum factors, and forgetting curves. ┃ ┃ ┃ ┃ • 1985 - Paper-and-pencil version of SuperMemo is formulated (Algorithm SM-0). Repetitions of whole pages of material proceed along a fixed table of intervals. See also: Using SuperMemo without ┃ ┃ a computer ┃ ┃ • 1987 - First computer implementation makes it possible to divide material into individual items. Items are classified into difficulty categories by means of E-Factor. Each difficulty category ┃ ┃ has its own optimum spacing of repetitions (Algorithm SM-2) ┃ ┃ • 1989 - SuperMemo 4 was able to modify the function of optimum intervals depending on the student's performance (Algorithm SM-4) ┃ ┃ • 1989 - SuperMemo 5 replaced the matrix of optimum intervals with the matrix of optimal factors (an optimum factor is the ratio between successive intervals). This approach accelerated the ┃ ┃ modification of the function of optimum intervals (Algorithm SM-5) ┃ ┃ • 1991 - SuperMemo 6 derived optimal factors from forgetting curves plotted for each entry of the matrix of optimum factors. This could dramatically speed up the convergence of the function of ┃ ┃ optimum intervals to its ultimate value (Algorithm SM-6) ┃ ┃ • 1995 - SuperMemo 8 Pre-Release 1 capitalized on data collected by users of SuperMemo 6 and SuperMemo 7 and added a number of improvements that strengthened the theoretical validity of the ┃ ┃ function of optimum intervals and made it possibility to accelerate its modification, esp. in early stages of learning (Algorithm SM-8). New concepts include: ┃ ┃ □ replacing E-factors with absolute difficulty factors: A-Factors ┃ ┃ □ fast approximation of A-Factors from the FirstGrade-vs.-A-Factor correlation graph and ForgettingIndex-Grade graph ┃ ┃ □ real-time adjustment of the matrix of optimal factors based on introducing D-Factors and power approximation of the decline of optimum factors ┃ SuperMemo computes optimum inter-repetition intervals from the grades scored by individual items in learning. This record is used to estimate the current strength of a given memory and the difficulty of the underlying item. This difficulty expresses complexity of memories and the effort needed to produce unambiguous and stable memory traces. SuperMemo takes the requested recall rate as the optimization criterion (e.g. 95%), and computes the intervals that satisfy this criterion. The function of optimum intervals is represented in a matrix form (OF matrix) and is subject to modification based on the results of the learning process. This is a more detailed description of the Algorithm SM-8: 1. Inter-repetition intervals are computed using the following formula: ☆ OF - matrix of optimal factors, which is modified in the course of repetitions ☆ OF[1,L+1] - value of the OF matrix entry taken from the first row and the L+1 column ☆ OF[n,AF] - value of the OF matrix entry that corresponds with the n-th repetition, and with item difficulty AF ☆ L - number of times a given item has been forgotten (from "memory Lapses") ☆ AF - number that reflects absolute difficulty of a given item (from "Absolute difficulty Factor") ☆ I(n) - n-th inter-repetition interval for a givent item 2. The matrix of optimal factors OF used in Point 1 has been derived from the mathematical model of forgetting and from similar matrices built on data collected in years of repetitions in collections created by a number of users. Its initial setting corresponds with values found for a less-than-average student. During repetitions, upon collecting more and more data about the student s memory, the matrix is gradually modified to make it approach closely the actual student s memory properties. After years of repetitions, new data can be fed back to generate more accurate initial matrix OF. In SuperMemo 2000, this matrix can be viewed in 3D with Tools : Statistics : Analysis : 3-D Graphs : O-Factor Matrix 3. The absolute item difficulty factor (A-Factor), denoted AF in Point 1, expresses the difficulty of an item (the higher it is, the easier the item). It is worth noting that AF=OF[2,AF]. In other words, AF denotes the optimum interval increase factor after the second repetition. This is also equivalent with the highest interval increase factor for a given item. Unlike E-Factors in Algorithm SM-6 employed in SuperMemo 6 and SuperMemo 7, A-Factors express absolute item difficulty and do not depend on the difficulty of other items in the same collection of study material 4. Optimum values of the entries of the OF matrix are derived through a sequence of approximation procedures from the RF matrix which is defined in the same way as the OF matrix (see Point 1), with the exception that its values are taken from the real learning process of the actual student. Initially, matrices OF and RF are identical; however, entries of the RF matrix are modified with each repetition, and a new value of the OF matrix is computed from the RF matrix by using approximation procedures. This effectively produces the OF matrix as a smoothed up form of the RF matrix. In simple terms, the RF matrix at any given moment corresponds to its best-fit value derived from the learning process; however, each entry is considered a best-fit entry on it s own, i.e. in abstraction from the values of other RF entries. At the same time, the OF matrix is considered a best-fit as a whole. In other words, the RF matrix is computed entry by entry during repetitions, while the OF matrix is a smoothed copy of the RF matrix 5. Individual entries of the RF matrix are computed from forgetting curves approximated for each entry individually. Each forgetting curve corresponds with a different value of the repetition number and a different value of A-Factor (or memory lapses in the case of the first repetition). The value of the RF matrix entry corresponds to the moment in time where the forgetting curve passes the knowledge retention point derived from the requested forgetting index. For example, for the first repetition of a new item, if the forgetting index equals 10%, and after four days the knowledge retention indicated by the forgetting curve drops below 90% value, the value of RF[1,1] is taken as four. This means that all items entering the learning process will be repeated after four days (assuming that the matrices OF and RF do not differ at the first row of the first column). This satisfies the main premise of SuperMemo, that the repetition should take place at the moment when the forgetting probability equals 100% minus the forgetting index stated as percentage. In SuperMemo 2000, forgetting curves can be viewed with Tools : Statistics : Analysis : Curves: 6. The OF matrix is derived from the RF matrix by: (1) fixed-point power approximation of the R-Factor decline along the RF matrix columns (the fixed point corresponds to second repetition at which the approximation curve passes through the A-Factor value), (2) for all columns, computing D-Factor which expresses the decay constant of the power approximation, (3) linear regression of D-Factor change across the RF matrix columns and (4) deriving the entire OF matrix from the slope and intercept of the straight line that makes up the best fit in the D-Factor graph. The exact formulas used in this final step go beyond the scope of this illustration. Note that the first row of the OF matrix is computed in a different way. It corresponds to the best-fit exponential curve obtained from the first row of the RF matrix. All the above steps are passed after each repetition. In other words, the theoretically optimum value of the OF matrix is updated as soon as new forgetting curve data is collected, i.e. at the moment, during the repetition, when the student, by providing a grade, states the correct recall or wrong recall (i.e. forgetting) (in Algorithm SM-6, a separate procedure Approximate had to be used to find the best-fit OF matrix, and the OF matrix used at repetitions might differ substantially from its best-fit value) 7. The initial value of A-Factor is derived from the first grade obtained by the item, and the correlation graph of the first grade and A-Factor (G-AF graph). This graph is updated after each repetition in which a new A-Factor value is estimated and correlated with the item s first grade. Subsequent approximations of the real A-Factor value are done after each repetition by using grades, OF matrix, and a correlation graph that shows the correspondence of the grade with the expected forgetting index (FI-G graph). The grade used to compute the initial A-Factor is normalized, i.e. adjusted for the difference between the actually used interval and the optimum interval for the forgetting index equal 10% 8. The FI-G graph is updated after each repetition by using the expected forgetting index and grade values. The expected forgetting index can easily be derived from the interval used between repetitions and the optimum interval computed from the OF matrix. The higher the value of the expected forgetting index, the lower the grade. From the grade and the FI-G graph (see FI-G graph in Analysis), we can compute the estimated forgetting index which corresponds to the post-repetition estimation of the forgetting probability of the just-repeated item at the hypothetical pre-repetition stage. Because of the stochastic nature of forgetting and recall, the same item might or might not be recalled depending on the current overall cognitive status of the brain; even if the strength and retrievability of memories of all contributing synapses is/was identical! This way we can speak about the pre-repetition recall probability of an item that has just been recalled (or not). This probability is expressed by the estimated forgetting index 9. From (1) the estimated forgetting index, (2) length of the interval and (3) the OF matrix, we can easily compute the most accurate value of A-Factor. Note that A-Factor serves as an index to the OF matrix, while the estimated forgetting index allows one to find the column of the OF matrix for which the optimum interval corresponds with the actually used interval corrected for the deviation of the estimated forgetting index from the requested forgetting index To sum it up. Repetitions result in computing a set of parameters characterizing the memory of the student: RF matrix, G-AF graph and FI-G graph. They are also used to compute A-Factors of individual items that characterize the difficulty of the learned material. The RF matrix is smoothed up to produce the OF matrix, which in turn is used in computing the optimum inter-repetition interval for items of different difficulty (A-Factor) and different number of repetitions (or memory lapses in the case of the first repetition). Initially, all student s memory parameters are taken as for a less-than-average student, while all A-Factors are assumed to be equal Optimization solutions used in Algorithm SM-8 have been perfected over 10 years of using the SuperMemo method with computer-based algorithms (first implementation: December 1987). This makes sure that the convergence of the starting memory parameters with the actual parameters of the student proceeds in a very short time. Similarly, the introduction of A-Factors and the use of the G-AF graph greatly enhanced the speed of estimating individual item difficulty. The adopted solutions are the result of constant research into new algorithmic variants. The postulated employment of neural networks in repetition spacing is not likely to compete with the presented algebraic solution. Although it has been claimed that Algorithm SM-6 is not likely to ever be substantially improved (because of the substantial interference of daily casual involuntary repetitions with the highly tuned repetition spacing), the initial results obtained with Algorithm SM-8 are very encouraging and indicate that there is a detectable gain at the moment of introducing new material to memory, i.e. at the moment of the highest workload. After that, the performance of Algorithms SM-6 and SM-8 is comparable. The gain comes from faster convergence of memory parameters used by the program with actual memory parameters of the student. The increase in the speed of the convergence was achieved by employing actual approximation data obtained from students who used SuperMemo 6 and/or SuperMemo 7 Algorithm SM-8 is constantly being perfected in successive releases of SuperMemo, esp. to account for newly collected repetition data, convergence data, input parameters, etc. If you would like your own software to use the Algorithm SM-8, read about SM8OPT.DLL If you would like to use SuperMemo, but you would like to use a different repetition spacing algorithm, you might want to find out about repetition scheduling plug-in options Frequently Asked Questions (Zoran Maximovic, Serbia, Sep 25, 2000) In approximation graphs in Tools : Statistics : Analysis, some of the curves "jump out" of the graph area. What is wrong? This was a harmless bug in the algorithm in SuperMemo 98/99. The assumption is that intervals cannot grow beyond the value of A-Factor. For that reason, the maximum R-Factor should equal the relevant A-Factor. However, in plotting the forgetting curves, higher values of U-Factors are used as repetitions may be delayed (e.g. with Mercy, user procrastination, etc.). The algorithm puts a cap on the maximum R-Factor value (along the theoretical assumption that R-Factors cannot be greater than corresponding A-Factors). However, the implementation used the maximum U-Factor value as the cap (the one used in plotting the forgetting curve). Consequently, R-Factors could grow larger than A-Factors and the curve would "jump out" of the graph, which displays the correct cap. This bug should have little effect on the learning process. The higher cap does not invalidate the correctness of R-Factors. It just does not prevent very long intervals in case of very good repetition results. This bug has been fixed in SuperMemo 2000
{"url":"http://www.supermemo.com/english/algsm8.htm","timestamp":"2014-04-19T09:57:29Z","content_type":null,"content_length":"20338","record_id":"<urn:uuid:6cccb82a-e4ad-43cc-9eda-230f1c7b944d>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
Bywood, PA SAT Math Tutor Find a Bywood, PA SAT Math Tutor ...I started my career in a self-contained pre-school program for children with autism where I gained extensive experience with autism and ABA trials. From there, I taught 1st/2nd grade self-contained classes and 3rd/4th grade self-contained classes of children with multiple disabilities including ... 20 Subjects: including SAT math, reading, dyslexia, algebra 1 I have been teaching Algebra and middle school math for 4 years in Camden, NJ. My experience includes classroom teaching, after-school homework help, and one to one tutoring. I frequently work with students far below grade level and close education gaps. 8 Subjects: including SAT math, geometry, algebra 1, algebra 2 ...I have tutored several students in this subject over several years, and I am a friendly, easy-going, real person who can relate to anyone. I'm a Yale biology graduate who has taken math through calculus of several variables. I have tutored dozens of students in high school math through precalculus level as well as SAT math. 66 Subjects: including SAT math, English, Spanish, reading ...I find knowing why the math is important goes a long way towards helping students retain information. After all, math IS fun!In the past 5 years, I have taught differential equations at a local university. I hold degrees in economics and business and an MBA. 13 Subjects: including SAT math, calculus, geometry, statistics ...The results of my tutoring are: renewed confidence by my students in their own abilities, improved grades, reduced stress. Students find me friendly and supportive, and I have innovative ways, based on each student's learning style, to help them understand each topic in math. I can provide references from satisfied clients. 22 Subjects: including SAT math, calculus, writing, geometry Related Bywood, PA Tutors Bywood, PA Accounting Tutors Bywood, PA ACT Tutors Bywood, PA Algebra Tutors Bywood, PA Algebra 2 Tutors Bywood, PA Calculus Tutors Bywood, PA Geometry Tutors Bywood, PA Math Tutors Bywood, PA Prealgebra Tutors Bywood, PA Precalculus Tutors Bywood, PA SAT Tutors Bywood, PA SAT Math Tutors Bywood, PA Science Tutors Bywood, PA Statistics Tutors Bywood, PA Trigonometry Tutors Nearby Cities With SAT math Tutor Carroll Park, PA SAT math Tutors East Lansdowne, PA SAT math Tutors Fernwood, PA SAT math Tutors Kirklyn, PA SAT math Tutors Llanerch, PA SAT math Tutors Merion Park, PA SAT math Tutors Millbourne, PA SAT math Tutors Nether Providence, PA SAT math Tutors Oakview, PA SAT math Tutors Overbrook Hills, PA SAT math Tutors Penn Wynne, PA SAT math Tutors Primos Secane, PA SAT math Tutors Primos, PA SAT math Tutors Upper Darby SAT math Tutors Westbrook Park, PA SAT math Tutors
{"url":"http://www.purplemath.com/Bywood_PA_SAT_Math_tutors.php","timestamp":"2014-04-19T02:29:58Z","content_type":null,"content_length":"24122","record_id":"<urn:uuid:176c7bf3-9b19-44d7-bec4-10a3d8751a09>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: somebody please help me with this question. Mr. Thompson has been asked to add storage space to a garage. He estimates 6 square feet per individual will be required to meet the needs of 12 people. If the storage space is 15% of the total amount of space required, how much total space does Mr. Thompson have? Mr. Thompson had______ square feet. • one year ago • one year ago Best Response You've already chosen the best response. A good start is to see how much total storage space is required. So, that part is given by 6 square feet for each person and there are 12 people. Best Response You've already chosen the best response. Best Response You've already chosen the best response. Good. Now that you have that number. The initial number that you are starting with is only 15% of that. Best Response You've already chosen the best response. Best Response You've already chosen the best response. Best Response You've already chosen the best response. Best Response You've already chosen the best response. Yes, and you are answering your own questions beautifully, which is the goal. Except that you have to multiply out 72 by 0.15 to get 10.8. So, yes to your first of the 3 answers above. Best Response You've already chosen the best response. An alloy weighing 30 lbs. is 11% tin. The alloy was made by mixing a 15% tin alloy and a 9% tin alloy. How many pounds of each alloy were used to make the 11% alloy? ___ lbs. of the 15% alloy and ___ lbs. of the 9% alloy. Best Response You've already chosen the best response. The 15% is basically saying that he needs 72, but he only has 15% of that to start with, so by eyeballing the 72 x 0.15, we know that it will be a fairly small number. For checking. Best Response You've already chosen the best response. can you help me with this one i dont get it Best Response You've already chosen the best response. This one is a little harder, but I can definitely help you. Best Response You've already chosen the best response. A lot of problems like this are made easier by first correctly conceptualizing the problem, and then the math is actually the easier part. We start by asking how much tin is actually in the 30 lbs. In 30 lbs., we have 0.11 of it as tin, so we have 30 x 0.11 lbs. or 3.3 lbs. Believe it or not, that's half the problem done. Best Response You've already chosen the best response. Best Response You've already chosen the best response. Best Response You've already chosen the best response. i got 3.3 first answer and second one i got 7.2 Best Response You've already chosen the best response. Now, let's call the 15% alloy as x lbs. and the 9% alloy as y lbs. The total weight we know from the problem is 30 lbs. So, we know that x + y = 30 because that is the total weight. We also know that (x)(0.15) + (y)(0.09) = 3.3 because we know we have 3.3 lbs. of tin total. So, we have 2 equations in 2 unknowns. Best Response You've already chosen the best response. can you put the in one equation little confused Best Response You've already chosen the best response. ok what is next i got this one Best Response You've already chosen the best response. now we have to solve for x right??? Best Response You've already chosen the best response. So, are you ok with the 2 equations in the 2 unknowns, or are you stuck at this point? Best Response You've already chosen the best response. i need to know this ______ lbs. of the 15% alloy and ___ lbs. of the 9% alloy. Best Response You've already chosen the best response. there is a third part to this right??? solve for x Best Response You've already chosen the best response. 3.3 is the first ansnwer right. Best Response You've already chosen the best response. Yes, that's from the problem statement. You have x + y = 30 and you have (0.15)x + (0.09)y = 3.3 These are your 2 equations for going forward with the problem. What I am asking is if you understand the derivation of the equations and the steps up to this point. Plus, a second question is are you able to solve simultaneous equations. I asked 2 questions. What are your 2 answers? Best Response You've already chosen the best response. Best Response You've already chosen the best response. I'm looking for 2 yes/no answers at this point. Then we can proceed. Best Response You've already chosen the best response. i dont know what you looking for sorry Best Response You've already chosen the best response. Please read my posts. Best Response You've already chosen the best response. yes i do understand Best Response You've already chosen the best response. So, then all that is left is to solve the simultaneous equations for x and y, and from your last answer, you indicate that you are able to do that. Best Response You've already chosen the best response. Best Response You've already chosen the best response. Best Response You've already chosen the best response. x=3.21 y=3.15 Best Response You've already chosen the best response. Your 2 simultaneous equations are x + y = 30 and (0.15)x + (0.09)y = 3.3. Solving the first equation for x, we have x = 30 - y. We substitute that into the second equation and solve for y. Y=20. We go back to the first equation with that y and find that x=10. So, 10 lbs. of the 15% alloy and 20 lbs. of the 9% alloy. And that's your answer. Best Response You've already chosen the best response. will you tell me how you got that Best Response You've already chosen the best response. (0.15)x + (0.09)y = 3.3 -> (0.15)(30-y) + (0.09)y = 3.3. And solve for y. This is a little bit more detail flowing from my previous post. Best Response You've already chosen the best response. (0.15)(30-y) + (0.09)y = 3.3 -> 4.5 - (0.15)y + (0.09)y = 3.3. Combine y's. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/507313d1e4b04aa3791e2a00","timestamp":"2014-04-20T08:40:13Z","content_type":null,"content_length":"115265","record_id":"<urn:uuid:02ca3289-97e4-4192-8e4a-1a7dd1ad6a9c>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
max shearing stress problem Hey guys, I got a problem that goes like: A shaft with a circular cross section is subjected to a torque of 120ft-lb. Its shaft's diameter is 0.750 in and its length is 15 in., determine the maximum shearing stress. I did the following: I tried to use shearing stress = Tr/J in which i plugged in as followed (120*.750/2)/(pi/32)*.750^4 and i get 1448, but the answer should be17.39ksi. I assume i am not getting the right answer because I am not factoring in the length but I am not sure. Any help is appreciated, thanks
{"url":"http://www.physicsforums.com/showthread.php?t=88498","timestamp":"2014-04-19T19:52:06Z","content_type":null,"content_length":"22092","record_id":"<urn:uuid:52163508-17e2-4cd2-97ae-fead9a0d7fba>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
High School Geomtry Problems September 21st 2013, 10:12 AM #1 Junior Member Nov 2011 High School Geomtry Problems Hey, I have these 3 problems that I need to make sure I can do right for an exam Monday. Here's are the three: (screenshots) Screenshot by Lightshot Screenshot by Lightshot Screenshot by Lightshot I've tried to do them but had no success. Here's what I did for each problem: Problem 1: I didn't try for x, but I tried y. So we know all the interior angles add up to 180. So, (y+8) + 74 + x (where x is the top angle) = 180. Solve for y: y + 84 + x = 180. y+84=180 - x. y =96-x. And there I'm not sure where to go since it's not one of the answers. If I solve for y without the x, I just get 96 and that isn't an answer either. Problem 2: Ok so I know if the two lines are parallel, the same side interior and exterior angles are supplementary to each other. So can't I just take one of the expressions, say angle 3, 8x + x (where the other x is the supplementary angle, angle 4) and solve for x? I tried that: 8x+x=180 9x=180 x=20, but 20 isn't in the answers. Problem 3: For this problem, my teacher explained it the last 5 minutes of class so I didn't really get to take notes on it. I know that within the 2 parallel lines 2 triangles form and the 56 degree angle is part of a 360 circle. I just need to know how to work it out. Re: High School Geomtry Problems Do not redefine x. The value of x is defined by the picture: x - 3 is the leftmost angle. There is no reason to believe that x is also the top angle. I am afraid, this is an ill-posed problem because the only restriction is (x - 3) + 41 + (y + 8) = 180 (one equation with two unknowns). Here y + 8 can vary between 0 and 180 - 74 = 106 degrees (exclusive). Each value of y defines x uniquely, but without additional information, such as that the triangle is isosceles, I don't see why y has to be unique. In other words, the vertex of the 41 degrees' angle can slide left or right. What do the arrows represent? Is this the path of light reflecting from the horizontal line by chance? Then x - 3 = y + 8, but this does not fit any answer either... Here you are redefining x again. The picture defines x as 1/22th of angle 8 (which, by assumption, is the same as 1/8th of angle 3). It is incorrect to say that x equals angle 8. Instead, angle 3 + angle 8 = 8x + 22x = 30x = 180 degrees. Continue the bottom inclined line to the intersection with line l. Then 56 is an exterior angle to the resulting triangle and, as such, equals the sum of the two interior angles adjacent to l. Re: High School Geomtry Problems For problem 1 I beleive the the arrows represent congruency? I sort of understand what you did for problem 2, buy why add the two expressions? Aren't the 2 supplementary angles the ones I highlighted below? Screenshot by Lightshot For problem 3 I have worked this thus far: Screenshot by Lightshot It's funny, I've done all these things in class but I just get confused here cause I have to look carefully for all the angles and extending lines etc.. Re: High School Geomtry Problems Congruency of what? Yes, 3 and 4 are supplementary, but so are 3 and 8 as interior angles of a transversal. For problem 3, x and one of the 28's are alternate interior angles, so they are equal. Re: High School Geomtry Problems For problem 1 I went over my notes and those little signs mean parallel and they keep going according to my teacher. Here, I've drawn it out on paper what it's supposed to look like to solve: Screenshot by Lightshot I see what you mean about problem 2, I had that written down in my notebook. It just didn't seem right because it doesn't look like a traditional pair of angles like so: http://www.mathsisfun.com Okay, think I have problem 3 solved, this look right? Screenshot by Lightshot Thanks for the help so far. Re: High School Geomtry Problems This changes things. Then x - 3 = 74 as corresponding angles, and from (x - 3) + 41 + (y + 8) = 180 you can find y. I see what you mean about problem 2, I had that written down in my notebook. It just didn't seem right because it doesn't look like a traditional pair of angles like so: http://www.mathsisfun.com There are many equalities related to parallel lines. You can explore them on the site I linked to. If you used equality of alternate interior angles to conclude that the lower left angle is 28 degrees, then you can use the same principle to immediately conclude that upper left 28 equals x. Then you won't need to subtract stuff from 180. Re: High School Geomtry Problems Okay, so for problem 1: Where did you get that 41 from? And problem 2: I see, they are indeed supplementary but they don't share a vertex therefore they aren't a linear pair. That's why they don't look like that picture I linked. Got it. Re: High School Geomtry Problems Re: High School Geomtry Problems Excellent, I found x and y. For y, I solved the equation you gave and for x, since I knew x-3=74 due to corresponding angles, I just solved that equation for x. Thanks for the help. September 21st 2013, 11:15 AM #2 MHF Contributor Oct 2009 September 21st 2013, 02:03 PM #3 Junior Member Nov 2011 September 21st 2013, 02:28 PM #4 MHF Contributor Oct 2009 September 21st 2013, 02:57 PM #5 Junior Member Nov 2011 September 21st 2013, 03:14 PM #6 MHF Contributor Oct 2009 September 21st 2013, 05:32 PM #7 Junior Member Nov 2011 September 22nd 2013, 12:02 AM #8 MHF Contributor Oct 2009 September 22nd 2013, 07:52 AM #9 Junior Member Nov 2011
{"url":"http://mathhelpforum.com/geometry/222145-high-school-geomtry-problems.html","timestamp":"2014-04-18T12:06:26Z","content_type":null,"content_length":"61740","record_id":"<urn:uuid:9dde66b2-70fe-49f8-ad4f-53735cff6a9e>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
Concrete Anchor Foundation Bolt Design Calculations With Example According to ACI 318 Appendix D-Part1-Steel Strength in Tension Example of Concrete Anchor Bolt Design Calculation – Part-1: Determining Steel Strength of Anchor Bolt in Tension Anchor bolts are used extensively as foundation bolts for rotating equipments like machines and structural members like towers. The American Concrete Institute (ACI) 318 Appendix D has extensive guidelines for designing concrete anchor bolts. This series of eight articles will cover all the design guideline of the ACI code with the help of the following concrete anchor foundation bolt design calculation example: Problem statement of the design example See the above two figures (Fig.1 and Fig.2) and design the cast in place anchor bolts according to the arrangement shown. Consider the factored tensile load as 20000 lb, factored shear load as 2300 lb and compressive strength of the concrete as 3500 psi. Also assume that the column is mounted at the corner of a large concrete slab. Design solution The aim of this whole exercise is to calculate the design tensile strength and design shear strength of the group of anchor for a selected anchor bolt diameter and check if the design strengths are higher than the applied loads. If they are then we will declare that the selected bolt size is safe or else we will go for next higher size of the anchor bolts. We will start with the anchor diameter of 0.75 inch and do the design calculations through the following eight parts: Part-1: Determining Steel Strength of Anchor in Tension (presently we are here) Part-2: Determining Concrete Breakout Strengths of Anchor in Tension Part-3: Determining Concrete Pullout Strength of Anchor in Tension Part-4: Determining Side-face Blowout Strength of Anchor in Tension Part-5: Determining Steel Strength of Anchor in Shear Part-6: Determining Concrete Breakout Strength of Anchor in Shear Part-7: Determining Concrete Pryout Strength of Anchor in Shear Part-8: Interaction of Tensile and Shear Forces The calculation of steel strength of anchor in tension according to the ACI code goes like below: Steel strength in tension, φN[sa] = φnA[se,N]f[uta]……………………..D-3 Φ – Strength reduction factor and its value for ductile anchor bolt in tension is 0.75 N[sa] – Nominal material (steel) strength of the group of anchor in lb n – Total number of anchors A[se,N] – Single anchor bolt’s effective cross section area (to be obtained from manufacturer’s catalog) in square inch f[uta ]- Specified tensile strength for a single anchor (to be obtained from manufacturer’s catalog) in psi The 0.75 inch anchor typically has the following cross section and tensile strength values: A[se,N] = 0.334 square inch f[uta]=75000 psi So, by putting these values, we can get the nominal material strength for the group of anchors in tension from the equation D-3 as Φ N[sa] = 0.75*4*0.334*75000 = 75150 lb In the next part (part-2) we will calculate concrete breakout strength. Let me know if you have any suggestions. thank you thank you thank you thank you thank you thank you thank you I’m interested in the end result of anchor bolt strength as to how proper installation of the bolts plays in the ultimate results of it’s strength. If you install the bolts in wet concrete by pushing it into the concrete, you get air gaps and bubbles, off set bolts, height variations and this plays into it’s strength calculations. If you do the calculations based on a perfect installation, it becomes invalid when considering the real time install. What can one do to make correct calculations based on installation variables?
{"url":"http://blog.mechguru.com/machine-design/example-of-concrete-anchor-bolt-design-calculation-part-1-determining-steel-strength-of-anchor-bolt-in-tension/","timestamp":"2014-04-19T02:17:02Z","content_type":null,"content_length":"44838","record_id":"<urn:uuid:dc0a1fcd-7308-4435-a82b-111f95f7ba6a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
Fractals With Stars From Math Images Fractals with stars The final star creation. Basic Description Start by trying to make a star tool. Plot point A and plot point B far away from it. Mark the first point as center and rotate B around it 4 times at 72º until five points are created around a center point. Then construct a segment between points A and B and create a midpoint of that line. Rotate the midpoint around point A at 36 degrees to create point C and then rotate C at 72º to create point D and then rotate point D by 72º 4 times to create the outline points of a star. Then hide C and connect all the points to form an outline of a star. Then fill in the star and highlighted everything and create a tool for it. Then select the tool and utilize it, marking point D as the center and an adjacent point as the end point. The adjacent point should be as equidistant from the center point as point B. The second star should be 2.2676 times smaller than the original star. Keep doing this in succession on each leg of the star until you have 5 stars on each leg (26 total). Then locate the conversion point, which is point D or the point equidistant from point A as point D, and plot that point as the origin and map a polar grid around it and then try to find a parametric function that intersects the centers of the stars. The final product of my design and the polar equation that runs along the center points of it. As you can see, the conversion point of the star fractals is the origin of the grid and the starting point of the function. A More Mathematical Explanation Formula for the spiral that matches the system of points is: r=(.192/θ), θ=(.1005*(.53/r)^5 The lengths of the distances between the center points of the stars is shown above. The lengths vary depending on how big the stars are but they all are 1.50588 times longer than the next smallest one. In addition, Each star is 2.2676 larger than the next smallest. Everything stays consistent. Why It's Interesting I find this interesting because of all the assignments and projects, specifically on GSP, that we have done in class, we have never done anything involving stars, which for some reason are my favorite shape. It is also the first time that i have ever played with a polar grid and plotted polar equations. Teaching Materials There are currently no teaching materials for this page. Add teaching materials. If you are able, please consider adding to or editing this page! Have questions about the image or the explanations on this page? Leave a message on the discussion page by clicking the 'discussion' tab at the top of this image page.
{"url":"http://mathforum.org/mathimages/index.php/Fractals_With_Stars","timestamp":"2014-04-16T16:59:19Z","content_type":null,"content_length":"23612","record_id":"<urn:uuid:10f28c55-3e51-427e-925b-aeba387171a0>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
Equivalence class proof 1. The problem statement, all variables and given/known data Prove that if (a1, b1) ~ (a2, b2) and (c1, d1) ~ (c2, d2), then (a1, b1) + (c1, d1) ~ (a2, b2) + (c2, d2) and (a1, b1) [tex]\bullet[/tex] (c1, d1) ~ (a2, b2)[tex]\bullet[/tex] (c2, d2). Let [a, b] denote the equivalence class with respect to ~ of (a, b) in Z x (Z-{0}), and define Q to be the set of equivalence classes of Z x (Z-{0}). For all [a, b], [c, d] in Q define [a, b] + [c, d] = [(a, b) + (c, d)] and [a, b][tex]\bullet[/tex] [c, d] = [(a, b) (c, d)]; these definitions make sense, i.e., they do not depend on the choice of 2. Relevant equations (a, b) + (c, d) = (ad + bc, bd) and (a, b) [tex]\bullet[/tex] (c, d) = (ac, bd) (a, b) ~ (c, d) if and only if ad = bc 3. The attempt at a solution I tried using those definitions. I know you have to assume that (a1, b1) ~ (a2, b2) and (c1, d1) ~ (c2, d2). But I get stuck afterwards. Where do I go from there? Do I need something more?
{"url":"http://www.physicsforums.com/showthread.php?t=311812","timestamp":"2014-04-19T22:44:12Z","content_type":null,"content_length":"23667","record_id":"<urn:uuid:1b7f64b5-b922-47d7-8c06-52c14ca630af>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
The Opposite Sides of A Parallelogram Are Congruent The Opposite Sides of a Parallelogram are Congruent A parallelogram is a quadrilateral whose opposite sides are parallel. In this post, we show that asides from being parallel, they are also congruent. In the figure below, $ABCD$ is a parallelogram; $ \overline{AB}$ is parallel to $\overline{CD}$ and $\overline{AD}$ is parallel to $\overline{BC}$. To prove that the opposite sides of $ABCD$ are congruent, we have to show that $\overline{AD} \cong BC$ and $\overline{AB} \cong CD$. Theorem: The opposite sides of a parallelogram are congruent. Given: Parallelogram $ABCD$. Proof: Draw $\overline{BD}$ Notice that $\overline{BD}$ serves as a transversal to the parallel line segments. Clearly, $\angle 1 \cong \angle 3$ because they are alternate interior angles (A). Also, $BD \cong BD$ since any segment is congruent to itself (S). Lastly, $\angle 2 \cong \angle 4$ because they are alternate interior angles (A) Since the side is included by the two angles, by ASA Congruence, triangle $ABC \cong CDB$ Therefore, $AB \cong CD$ and $AD \cong BC$ since corresponding sides of congruent triangles are congruent. $\blacksquare$ So, the opposite sides of a parallelogram are congruent.
{"url":"http://proofsfromthebook.com/2013/03/03/the-opposite-sides-of-a-parallelogram/","timestamp":"2014-04-20T14:11:46Z","content_type":null,"content_length":"79655","record_id":"<urn:uuid:47f44ece-7b28-4a1c-b550-2cfb90f7b2d0>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
Rate of change April 16th 2007, 01:34 PM Rate of change I'm pretty familiar with the calculator settings but I don't know how to answer these questions. It would be appreciated if you would show me the steps in how you answer these questions. 1. The per-share dividends for Wachovia Corporation increased from $1.38 in 1995 to $2.06 in 1999. Assume that the rate of change was constant. (a) Using data points of the form (x, dividend), where x is the number of years since 1995, determine whether a linear function describing this information would be a direct variation. Why or Why not? (b) If the function describing this information is in the slope-intercept form, what is m and what is its interpretation? 2. Gateway manufactures and sells computers. This company's scale rose from $5.0 billion in 1996 to 10.4 billion in 2000. Assuming a constant rate of increase, find the average rate of change in sales during this period. I entered 0 (representing 1996) & 4 (representing 2000) in my L1 column corresponding to 5.0 and 10.4 under L2 in my calculator . I got 1.35 as the average rate of change in sales. Is that April 18th 2007, 07:23 AM I'm pretty familiar with the calculator settings but I don't know how to answer these questions. It would be appreciated if you would show me the steps in how you answer these questions. 1. The per-share dividends for Wachovia Corporation increased from $1.38 in 1995 to $2.06 in 1999. Assume that the rate of change was constant. (a) Using data points of the form (x, dividend), where x is the number of years since 1995, determine whether a linear function describing this information would be a direct variation. Why or Why not? we have (x1, dividend1) = (0,1.38) and (x2, dividend2) = (4, 2.06) the slope between these two points, m = (dividend2 - dividend1)/(x2 - x1) => m = (2.06 - 1.38)/(4 - 0) = 0.68/4 = 0.17 using the point-slope form, we can find a linear function for the information: y - dividend1 = m(x - x1) => y - 1.38 = 0.17(x - 0) => y = 0.17x + 1.38 ...........this is the linear function this is not a direct variation, since two quantities vary directly if one quantity is a constant times the other. that is, if x and y are our quantities then for direct variation we must have: y = kx, where k is a constant. you see here that is not the case, we have y = kx + c, where k and c are constants (k happens to be the slope, which we call m) (b) If the function describing this information is in the slope-intercept form, what is m and what is its interpretation? i suppose m is what you are calling the slope. as you can see, we found it above (these questions were asked in the wrong order!). i'm not sure exactly what you are looking for in this answer but i can tell you, since m is positive, we have increasing dividends with each passing year, the increase is constant April 18th 2007, 07:27 AM 2. Gateway manufactures and sells computers. This company's scale rose from $5.0 billion in 1996 to 10.4 billion in 2000. Assuming a constant rate of increase, find the average rate of change in sales during this period. I entered 0 (representing 1996) & 4 (representing 2000) in my L1 column corresponding to 5.0 and 10.4 under L2 in my calculator . I got 1.35 as the average rate of change in sales. Is that you are correct:D
{"url":"http://mathhelpforum.com/statistics/13803-rate-change-print.html","timestamp":"2014-04-19T05:39:49Z","content_type":null,"content_length":"8044","record_id":"<urn:uuid:7e5e1af9-0594-4f74-9f93-3b5145380903>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
etd AT Indian Institute of Science: The Effect Of Interference Of Strip Foundations And Anchors On Their Ultimate Bearing Capacity And Elastic Settlement & Collections Thesis Guide Submitted Date Sign on to: Receive email Login / Register authorized users Edit Profile About DSpace etd AT Indian Institute of Science > Division of Earth and Environmental Sciences > Civil Engineering (civil) > Please use this identifier to cite or link to this item: http://hdl.handle.net/2005/985 Title: The Effect Of Interference Of Strip Foundations And Anchors On Their Ultimate Bearing Capacity And Elastic Settlement Authors: Bhoi, Manas Kumar Advisors: Kumar, Jyant Foundations (Civil Engineering) Loads (Civil Engineering) Strip Footings Strip Anchors Keywords: Strip Footing - Bearing Capacity Strip Footing - Electrical Settlement Finite Element Analysis Multiple Footings Multiple Anchors Strip Anchor Submitted Jul-2009 Series/ G23443 Report no.: Due to close proximity of different civil engineering structures, the ultimate bearing capacity and failure pattern of adjoining footings/anchors are often influenced by their mutual interference. The present thesis is an attempt to examine the interference effects on the ultimate failure loads and the elastic settlements for a group of closely spaced strip footings and anchors. In this thesis, a new experimental setup has been proposed to examine the response of interfering strip footings and strip anchors subjected to vertical loads but without having any eccentricity. Through out the investigation, it has been assumed that the magnitudes of loads on all the footings/anchors at any stage of settlement remain exactly the same. Unlike the existing experimental works of the previous researchers reported in literature, in the proposed experimental setup, there is no need to use more than one footing/anchor. As a result a much smaller size of the tank, in which the soil sample needs to be prepared, is required. In the proposed setup, it has been attempted to satisfy the boundary conditions existing along the vertical planes of symmetry midway between any two adjoining footings/anchors. To satisfy the governing boundary conditions, along the planes of symmetry, the interface friction angle is kept as small as possible, with the employment of a very smooth high strength glass sheet, and the associated horizontal displacements are made equal to zero. For two interfering footings/anchors case, only single plane of symmetry on one side of the footing needs to be modeled. On the other hand, for an infinite number of multiple footings/ anchors, two vertical planes of symmetry on both the sides of the footing need to be simulated in the experiments. The proposed experimental setup is noted to yield reasonably acceptable results both for the cases of interfering footings and interfering anchors. The magnitudes of ultimate failure loads for the interfering footings/anchors are expressed in terms of the variation of the efficiency factor ( ξγ) with respect to changes in the clear spacing(s) between the footings/anchors; wherein, an efficiency factor is defined as the ratio of the magnitude of the failure load for an intervening strip footing/anchor of a given width to that of an isolated strip footing/anchor having exactly the same width. From the experiments, the values of the efficiency factors are obtained for a group of two and an infinite number of multiple strip footings/anchors. The effect of two different widths of the footing/anchor on the magnitudes of the failure load is also studied. It is noted that for a group of two and infinite number of multiple footings, the magnitude of the ultimate failure load for an interfering footing becomes always greater than that for a single isolated footing. For the case of two footings, the value of ξγ becomes maximum corresponding to a certain critical s/B Abstract: between two footings. At a given spacing, the value of ξγ is found to increase further with an increase in the value of φ. It is observed that, for a group of an infinite number of equally spaced multiple strip footings, the magnitude of ξγ increases continuously with a decrease in s/B; when the clear spacing between the footings approaches zero, the magnitude of ξγ tends to become infinity. The value of ξγ associated with a given s/B for the multiple footings case is found to become always greater than that for a two footing case. The effect of s/B on ξγ is found similar to that reported in theories in a qualitative sense. The value of ξγ at a given s/B associated for B = 4 cm both for two and multiple footings is found to become smaller as compared to that with B = 7 cm. In contrast to a group of interfering footings under compression, the magnitude of ξγ in the case of both two and multiple interfering anchors decreases continuously with a reduction in the value of s/B. For given values of s/B and embedment ratio ( λ = d/B ), the values of ξγ for the case of multiple anchors are found to be always lower than those for the case of two anchors; d = depth of the anchor. In comparison with the available theoretical values from the literature, the values of ξγ are found to be a little lower especially for smaller values of s/B. The comparison of the present experimental data with that reported from literature reveals that the interference of strip anchors will have relatively more reduction in the uplift resistance on account of interference as compared to a group of square and circular anchors; the present experimental data provides relatively lower values of ξγ as compared to the available experimental data (for square and circular footings). The value of s/B beyond which the response of anchors becomes that of an isolated anchor increases continuously with an increase in the value of λ. The magnitude of ξγ for given values of s/B and λ associated for B = 4 cm is found to become slightly greater as compared to that with B = 7 cm. Both for the cases of interfering footings and anchors, the ratio of the average ultimate pressure with the employment of the rough central plane (glass sheet glued with a sand paper) to that with the smooth central plane, is found to increase with (i) a decrease in the value of s/B, and (ii) an increase in the value of φ. The finite element analysis, based on a linear elastic soil-constitutive model, has also been performed for interfering footings and anchors to find the effect of interference on elastic settlements. The computations have revealed that for both the footings and anchors, a decrease in the spacing between the footings leads to a continuous increase in the magnitudes of the settlements. The increase in the settlement due to the interference becomes quite substantial for an infinite number of footings/anchors case as compared to two footings/anchors case. The effect of the Poisson’s ratio on the results is found to be practically insignificant. URI: http://hdl.handle.net/2005/985 Appears in Civil Engineering (civil) Items in etd@IISc are protected by copyright, with all rights reserved, unless otherwise indicated.
{"url":"http://etd.ncsi.iisc.ernet.in/handle/2005/985","timestamp":"2014-04-16T05:32:00Z","content_type":null,"content_length":"26215","record_id":"<urn:uuid:5f9c5d25-3210-46ff-a3a4-0929af3f49cd>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
Possible Answer Paired samples t-tests typically consist of a sample of matched pairs of similar units, or one group of units that has been tested twice (a "repeated measures" t-test). - read more Repeated Measures ANOVA Introduction. Repeated measures ANOVA is the equivalent of the one-way ANOVA, but for related, not independent groups, and is the extension of the dependent t-test. - read Share your answer: repeated measures t test? Question Analizer repeated measures t test resources
{"url":"http://www.askives.com/repeated-measures-t-test.html","timestamp":"2014-04-18T00:28:10Z","content_type":null,"content_length":"35275","record_id":"<urn:uuid:fcbc7167-692a-48a9-948c-15e7f85ba970>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
An improved inductive definition of two restricted classes of triangulations of the plane , 1995 "... ... The results of the enumeration were used to systematically search for certain smallest non-Hamiltonian polyhedral graphs. In particular, the smallest non-Hamiltonian planar graphs satisfying certain toughness-like properties are presented here, as are the smallest non-Hamiltonian, 3-connected, D ..." Cited by 8 (1 self) Add to MetaCart ... The results of the enumeration were used to systematically search for certain smallest non-Hamiltonian polyhedral graphs. In particular, the smallest non-Hamiltonian planar graphs satisfying certain toughness-like properties are presented here, as are the smallest non-Hamiltonian, 3-connected, Delaunay tessellations and triangulations. Improved upper and lower bounds on the size of the smallest non-Hamiltonian, inscribable polyhedra are also given. "... One of the most noted construction methods of 3-vertex-connected graphs is due to Tutte and based on the following fact: Any 3-vertex-connected graph G = (V, E) on more than 4 vertices contains a contractible edge, i. e., an edge whose contraction generates a 3-connected graph. This implies the exis ..." Cited by 1 (1 self) Add to MetaCart One of the most noted construction methods of 3-vertex-connected graphs is due to Tutte and based on the following fact: Any 3-vertex-connected graph G = (V, E) on more than 4 vertices contains a contractible edge, i. e., an edge whose contraction generates a 3-connected graph. This implies the existence of a sequence of edge contractions from G to the complete graph K4, such that every intermediate graph is 3-vertex-connected. A theorem of Barnette and Grünbaum gives a similar sequence using removals on edges instead of contractions. We show how to compute both sequences in optimal time, improving the previously best known running times of O(|V | 2) to O(|E|). This result has a number of consequences; an important one is a new linear-time test of 3-connectivity that is certifying; finding such an algorithm has been a major open problem in the design of certifying algorithms in the last years. The test is conceptually different from well-known linear-time 3-connectivity tests and uses a certificate that is easy to verify in time O(|E|). We show how to extend the results to an optimal certifying test of 3-edge-connectivity. 1 "... Recursive generation of simple planar quadrangulations with vertices of degree 3 and 4 ..." , 707 "... A balanced graph is a bipartite graph with no induced circuit of length 2 (mod 4). These graphs arise in linear programming. We focus on graph-algebraic properties of balanced graphs to prove a complete classification of balanced Cayley graphs on abelian groups. Moreover, in Section 5 of this paper, ..." Add to MetaCart A balanced graph is a bipartite graph with no induced circuit of length 2 (mod 4). These graphs arise in linear programming. We focus on graph-algebraic properties of balanced graphs to prove a complete classification of balanced Cayley graphs on abelian groups. Moreover, in Section 5 of this paper, we prove that there is no cubic balanced planar graph. Finally, some remarkable conjectures for balanced regular graphs are also presented. Key words: Cayley graph, balanced graph 1 , 2010 "... Recursive generation of simple planar 5-regular graphs and pentangulations ..."
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=5169668","timestamp":"2014-04-21T13:54:54Z","content_type":null,"content_length":"20699","record_id":"<urn:uuid:afc42670-f168-44c6-94c9-db8d36bb18f8>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
Monthly Centering and Climate Sensitivity In our recent discussion of Dessler v Spencer, UC raised monthly centering as an issue in respect to the regressions of TOA flux against temperature. Monthly centering is standard practice in this branch of climate science (e.g. Forster and Gregory 2006, Dessler 2010), where it is done without any commentary or justification. But such centering is not something that is lightly done in time series statistics. (Statisticians try to delay or avoid this sort of operation as much as possible.) When you think about it, it’s not at all obvious that the data should be centered on each month. I agree with the direction that UC is pointing to – a proper statistical analysis should show the data and results without monthly centering to either verify that the operation of monthly centering doesn’t affect results or that its impact on results has a physical explanation (as opposed to being an artifact of the monthly centering operation.) In order to carry out the exercise, I’ve used AMSU data because it is expressed in absolute temperatures. I’ve experimented with AMSU data at several levels, but will first show the results from channel 4 (600 mb) because they seem quite striking to me and because troposphere temperatures seem like a sensible index of temperature for comparing to TOA flux (since much TOA flux originates from the atmosphere rather than the surface.) In the graphic below, the left panel plots the CERES TOA Net flux (EBAF monthly version) against monthly AMSU channel 4 temperatures. (Monthly averages are my calculation.) The right panel shows the same data plotted as monthly anomalies. (HadCRU, used in some of the regression studies, uses monthly anomalies.) The red dotted line shows the slope of the regression of flux~temperature, while the black dotted line shows a line with a slope of 3.3 (chosen to show a relationship of 3.3 wm-2/K). Take a look – more comments below. Figure 1. CERES TOA Net Upward Flux (EBAF) vs AMSU Channel 4 (600 mb) Temperature. Left – Absolute; right – monthly anomaly. The differences between regressions before and after monthly centering are dramatic, to say the least. Considering absolute values (left) first. Unlike the Mannian r2 of 0.018 of Dessler 2010, the relationship between TOA flux and 600 mb temperature is very strong (r2 of 0.79). TOA flux is net downward when 600 mb temperature is at a minimum (Jan – northern winter/southern summer) and is net upward in northern summer (July) when global 600 mb temperature is at its maximum. The slope of the regression line is 7.7 wm-2/K (slopes greater than 3.3 wm-2/K are said to indicate negative feedback.) There is an interesting figure-eight shape as a secondary but significant feature. This residual has 4 zeros during the year – which suggests to me that it is related to the tropics (where incoming solar radiation has a 6-month cycle maxing at the equinoxes, with the spring equinox stronger than the fall equinox.) In “ordinary” statistics, statisticians try to fit things with as few parameters as possible. In this case, a linear regression gives an excellent fit and, with a little experimenting, a linear regression plus a cyclical term with a 6-month period would give an even better fit. There doesn’t seem to be any statistical “need” to take monthly centering in order to get a useful statistical Now let’s look at the regression after monthly centering – shown on the same scale. Visually it appears that the operation of monthly centering has damaged the statistical relationship. The r^2 has been decreased to 0.41 – still much higher than the r^2 of Dessler 2010. (The relationship between TOA flux and 600 mb temperatures appears to be stronger than the corresponding relationship with surface temperatures, especially HadCRU.) Interestingly, the slope of the regression line is now 2.6 wm-2/K i.e. showing positive feedback. I’ve done experiments comparing AMSU 600 mb to AMSU SST and both to HadCRU. The results are interesting and will be covered on another occasion. In the meantime, the marked difference between regression results before and after taking monthly centering surely warrants reflection. I try to avoid speculations on physics since I’ve not parsed the relevant original materials, but, suspending this policy momentarily, I find it hard to visualize a physical theory in which the governing relationship is between monthly anomalies as opposed to absolute temperature. Yes, the relationship between absolute quantities still leaves residuals with a seasonal cycle, but it would be much preferable in statistical terms (and presumably physical terms) to explain the seasonal cycle in residuals in some sort of physical way, rather than monthly centering of both quantities (24 If there is a “good” reason for monthly centering, the reasons should be stated explicitly and justified in the academic articles (Forster and Gregory 2006, Dessler 2010) rather than being merely assumed – as appears to have happened here. Perhaps there is a “good” reason and we’ll all learn something. In the meantime, I think that we can reasonably add monthly centering to the list of questions surrounding the validity of statistical analyses purporting to show positive feedbacks from the relationship of TOA flux to temperatures. (Other issues include the replacement of CERES clear sky with ERA clear sky and the effect of leads/lags on Dessler-style regressions.) I suspect that it may be more important than the other two issues. We’ll see. PS – there are many interesting aspects to the annual story in the figure shown above. The maximum annual inbound flux is in the northern winter (Jan) and the minimum is in the northern summer – the difference is over 20 wm-2, large enough to be interesting. The annual cycle of outbound flux and GLB temperature reaches a maximum in the opposite season to the one that would expect from the annual cycle of inbound flux. I presume that this is because of the greater proportion of land in the NH as Troy observed. In effect, energy accumulates in the SH summer and dissipates in the NH summer. An interesting asymmetry. Note: AMSU daily information is at http://discover.itsc.uah.edu/amsutemps/. I’ve uploaded scripts that scrape the daily information from this site for all levels and collate into monthly averages. See script below. My collation of monthly data is also uploaded. The CERES data used here is the EBAF version from downloaded from http://ceres-tool.larc.nasa.gov/ord-tool/jsp/EBAFSelection.jsp as ncdf file getting TOA parameters. Collated into time series and placed at CA: load(“temp”); tsp(ebaf) The graphic is produced by: A=ts.union( ceres=-ebaf[,"net_all"],amsu=amsum[,"600"], ceresn=anom(-ebaf[,"net_all"]),amsun=anom(amsum[,"600"])) month= factor( round(1/24+time(A)%%1,2)) A=data.frame(A) #reverse sign by convention nx= data.frame(sapply(A[,1:2], function(x) tapply(x,month,mean,na.rm=T) ) ) # if (tag) png(file=”d:/climate/images/2011/spencer/ceres_v_amsu600.png”,h=480,w=600) layout(array(1:2,dim=c(1,2) ) ) plot(ceres~amsu,A ,xlab=”AMSU 600 mb deg C”,ylab=”Flux Out wm-2″,ylim=c(-10,10),xlim=c(251.5,255),xaxs=”i”,yaxs=”i”) lines(A$amsu,A$ceres ) title(“Before Monthly Normal”) for(i in 1:11) arrows( x0=nx$amsu[i],y0=nx$ceres[i],x1=nx$amsu[i+1],y1=nx$ceres[i+1], lwd=2,length=.1,col=2) i=12; arrows( x0=nx$amsu[i],y0=nx$ceres[i],x1=nx$amsu[1],y1=nx$ceres[1], lwd=2,length=.1,col=2) text( nx$amsu[1],nx$ceres[1],font=2,col=2, “Jan”,pos=2) text( nx$amsu[7],nx$ceres[7],font=2,col=2, “Jul”,pos=4) fm=lm(ceres~amsu,A); summary(fm) abline( 3.3* b[1]/b[2], 3.3,lty=3) text(251.5,9,paste(“Slope:”, round(fm$coef[2],2)),pos=4,col=2,font=2) plot(ceresn~amsun,A ,xlab=”AMSU 600 mb deg C”,ylab=”Flux Out wm-2″,ylim=c(-10,10),xlim=c(-1.75,1.75),xaxs=”i”,yaxs=”i”) lines(A$amsun,A$ceresn ) title(“After Monthly Normal”) fmn=lm(ceresn~amsun,A); summary(fmn) round(fmn$coef,3) # 2.607 text(-1.5,9,paste(“Slope:”, round(fmn$coef[2],2)),pos=4,col=2,font=2) 227 Comments 1. One advantage of working with seasonally detrended data must be the reduced risk of spurious correlations as many variables have annual cycles, and so will likely appear to correlate even if there is no causal relationship. 2. Does it “matter” when one “gets” a result that happens to side with a particular POV? □ Of course it matters when a result agrees with ones point of view. The result is “successful.” Further investigation goes in that direction. Then someone comes along and says, for example, “Why did you do that? That’s dumb! You cannot do that. You must do it ‘this’ way.” Then… 3. Why is the left hand graph reminding me of that ‘Analemma’ thing on my globe? (except upside down and tipped a bit)… □ Because the sun drives climate? /yeah, cheap shot, I confess. 4. RuhRoh, exactly the first thing that came to my mind. Could be a pure conincidence, but then again . . . Thanks, Steve, for the excellent work and explanation. This is significant. 5. That’s just fun. Thanks. 6. I have a sense of Deja’ Vu about these numbers. Is this topic related in some way to a previous topic, Some Simple Questions? Steve – yes. The annual cycle of outbound flux and GLB temperature reaches a maximum in the opposite season to the one that would expect from the annual cycle of inbound flux. I presume that this is because of the greater proportion of land in the NH as Troy observed I’d love to take credit for this idea (assuming I’m the Troy referenced here), but I don’t recall saying it (probably one of the more physics-oriented commentors). Someone else should come forward with a bow for this one. I’ve done experiments comparing AMSU 600 mb to AMSU SST and both to HadCRU. The results are interesting and will be covered on another occasion. I’m guessing that the AMSU 600 mb showed better correlation than either SST or HadCRUT? We found that there was a lag between TLT and SST temperatures, and that since the bulk of the Planck response (80-85%) is coming from the atmosphere rather than the surface, it only makes sense that you would get a better correlation when regressing the TOA fluxes against atmospheric rather than surface temperatures. I raised this issue also in a comment at SoD, because I’m curious why both the Spencer and Forster camps seem to agree that feedbacks occur instanteously with surface temperatures, when the bulk of the OLR change is expected to come from atmo temperature changes occuring months later. Steve – someone wrote about this at one of the technical blogs. I thought it was you, but I guess not. I’ll re-check sometime. □ The first person to say that the phase lag between insolation and global temperature should be about 180 degrees in the “simple questions” thread was, it would appear, me: Others seconded this, and David Smith seems to have illustrated that indeed, the global temperature cycle for the atmosphere follows what essentially amounts to insolation weighted more heavily toward the Northern Hemisphere, rather than the whole Earth: I assume that is the technical blog Steve is talking about. □ Re on the influence of land on the 600mb (AQUA ch 5) annual cycle: Nature, unfortunately, doesn’t always follow the Julian calendar. The daily data, in my opinion, can be handled in ways which hint at interesting intramonthly and intraseasonal features which are poorly-visible in monthly averaging. An example is here: The plot shows anomaly oscillations, possibly MJO-related, which get watered-down in any monthly averaging. There are also variations in amplitude. The reduced amplitude in the fall of 2007, for instance, preceeded a tropospheric temperature drop associated with a La Nina. Was the reduced amplitude an indication of the global atmosphere “depressing a clutch” to disengage from one state to another? A similar change in amplitude was associated with the 2010 La Nina. It’s a silly analogy, I realize, and climate science likely has the answer, but it’s nevertheless an intriguing data-wiggle to a non-meteorologist, one that gets lost in monthly averaging. The annual cycle of outbound flux and GLB temperature reaches a maximum in the opposite season to the one that would expect from the annual cycle of inbound flux. I presume that this is because of the greater proportion of land in the NH as Troy observed Erl Happ thinks along similar lines. That cloud loss in the southern hemisphere in mid winter is not a product of reduced evaporation from a cooling ocean but increased downdraft and the expansion of high pressure cells to take in more of the continents including Australia. This shows up on the maps. It indicates that the hemispheres are interactive. The loss of cloud in the southern hemisphere in July is forced from the northern hemisphere. Equally the gain in cloud in January has a lot to do with the cooling of the atmosphere in January that is associated with the strong decline in surface temperature over land in the northern hemisphere. The cloud maps I have referenced should be compared with maps of top of atmosphere radiation to see where the energy is coming from. There is a disproportionate amount of energy emitted from the winter hemisphere in the regions occupied by subtropical high pressure cells. It is plainly not coming from the surface but the atmosphere. It’s the Foehn effect on an inter-hemispheric scale. … I find it hard to visualize a physical theory in which the governing relationship is between monthly anomalies as opposed to absolute temperature. Read here – but often wondered about before that. I suspect that it may be more important than the other two issues. The great strangeness of climate is that such basic questions have been drowned out by ‘the science is settled’. But that’s not science, this is. 9. Your improved regression reflects, of course, the common seasonal influence. But that’s not of interest here. There’s no point in being purist about the time component of averaging. Your absolute temps, varying from 251K to 255K, and far more significantly averaged in space, from pole to equator. This averaging also masks most of the local seasonal variation. The reason for going to anomalies is that they are looking to a regression to relate changes. ΔR vs ΔT. And there’s actually not much point in trying to localise those effects in time and space. The reason is that we don’t expect to find them linked on that local scale. People here have been looking at lags of months to years. That means looking for some effect that survives a huge amount of mixing, in both space and time. That comes back to conserved quantities, particularly heat, for which T is a proxy. On your last point about the 20 W/m2 difference – isn’t this comparable to the orbital eccentricity differential? Your improved regression reflects, of course, the common seasonal influence. But that’s not of interest here. The seasons carry out a natural experiment in which the incoming flux varies by more than 20 wm-2. why wouldn’t that be of interest? ☆ “why wouldn’t that be of interest?” It could be if you were looking at the response of temperature to flux. But the papers recently discussed have been looking at the response of flux to temperature. And while the seasonal oscillation of temperature might seem to be worth investigating for its effect on flux, it is too confounded with the other annual effects (like TSI) for attribution. “why wouldn’t that be of interest?” It could be if you were looking at the response of temperature to flux. What a strange comment. Isn’t the response of temperature to flux a pretty important issue in respect to doubled CO2? But the papers recently discussed have been looking at the response of flux to temperature. They’ve been discussing feedback which involves both elements. Not that the scope of the papers precludes the discussion of related topics. And while the seasonal oscillation of temperature might seem to be worth investigating for its effect on flux, it is too confounded with the other annual effects (like TSI) for Nick, you’re just arm-waving again. Unless you’ve done statistical analysis on this precise point, you don’t know. yes, there are annual effects, but that doesn’t in itself preclude It seems to me that the approach of the Dessler-type articles – with their Mannian r2 of 0.018 – indicate that effects are being confounded in the academic litchurchur (or else they’d get a more impressive statistical relationship.) There’s no point in being purist about the time component of averaging. Of course there is. There may be valid reasons for monthly centering, but merely complaining about purism is not one of them. The reason for going to anomalies is that they are looking to a regression to relate changes. ΔR vs ΔT. But this does not necessarily entail monthly centering. One could define an anomaly relative to an annual mean. People here have been looking at lags of months to years. Some commenters have. I have no particular views on the matter as it’s a new topic for me. The regressions in the litchurchur appear to be premised on the idea of rather rapid adjustment. I see no logical inconsistency in examining the implications of both alternatives. On your last point about the 20 W/m2 difference – isn’t this comparable to the orbital eccentricity differential? Yes. It is the orbital eccentricity difference. It’s a large number compared to doubled CO2. If feedbacks operate rapidly, then why wouldn’t this potentially yield interesting information. Monthly centering erases much of this information. My point here is that monthly centering needs to be parsed and not merely assumed as Forster and Gregory and their successors have done. Please note that I am open to reasons on this, but they need to be more than simple armwaving of the type that you’ve offered so far. ☆ “One could define an anomaly relative to an annual mean. “ But if you did, using the analyses of SB/LC/D or even the systems analysis methods discussed at CA, you would then be attributing the orbital cycle of 20 W/m2 to temperature changes. Steve; defining an anomaly with respect to an annual mean is purely a definition and implies NOTHING about attribution of the orbital cycle. why do you say such things? ○ “defining an anomaly with respect to an annual mean is purely a definition” It may seem so, in the sense that since the annual mean is a single global number, it’s really just using a different temperature zero point. But generally anomaly calculation is more significant. It’s like postulating a linear model and calculating residuals. You make some prediction of the temperature based, and investigate the deviation after you’ve allowed for So if you take an anomaly relative to an annual mean, you do not take into account annual cycles. They become part of the data you try to interpret in terms of the hypothesis you are investigating. In this case, it is that flux changes can be attributed to surface temperature changes. So when you see an annual flux cycle which is largely due to orbital eccentricity, your model will attribute that to temperature change, with bad effects on the relationship that you deduce. If you take the anomaly relative to a monthly mean, you take out (most of) all annually periodic cycles. That may include some genuine effects of temperature on flux. But the information that remains, though reduced, can be more revealing about the actual relation between temperature and flux. Masking effects have been removed. ○ I find it difficult to understand the defence of the general use of anomalies on the basis they “take account of” a particular phenomena in a model. This is an assumption that is empirically testable (unless we truly are just normalising a dimension). So, even if the data set being used is highly preprocessed, it should be done, shouldn’t it? (And if the preprocessing is such that it makes absolute measures dodgy, then the same applies to the anomaly). ○ This is post normal science. GEt with the program. ○ Actually, deseasonalizing data has a much longer history in economic statistics. ○ Where considerable effort goes into testing the assumption that one is dealing with a structural time series, and that the assumptions used in the detrending methodology are not being Yes. It is the orbital eccentricity difference. It’s a large number compared to doubled CO2. If feedbacks operate rapidly, then why wouldn’t this potentially yield interesting information. Monthly centering erases much of this information. Actually considering that CO2 concentration itself changes by 10ppm over a year, and does not change equally among the hemispheres, I would think that matching effects of TSI & CO2 versus flux would be an important factor in all modeling of CO2 effects on temperature when working under the presumption that CO2 drives climate change. The idea that this isn’t considered in models is fantasy, right? ☆ Yes. It is the orbital eccentricity difference. It’s a large number compared to doubled CO2. If feedbacks operate rapidly, then why wouldn’t this potentially yield interesting information. Monthly centering erases much of this information. Why is the number 20 watts/m2? Is this some averaging at work? I design and monitor the performance of spacecraft power systems and the difference from January 3rd to July 3rd (perihelion to amphelion) is about 7% or over 50 watts/m2. This has always bothered me in climate science. To prove my point, here is the power profile of the SMART-1 spacecraft, launched in 2004 to the Moon from a Geo transfer orbit. Unfortunately, the graph that I got from ESA obscured the dates a bit but the point is there. You can even see the variation in spacecraft power due to the Moon’s varying distance from the Sun as it goes around the Earth. What am I missing here? ○ “What am I missing here?” Geometry. The solar flux is being added to the outgoing IR etc, which is measured in W/m2 of Earth’s surface. So it’s divided by 4 – 341 W/m2. 7% of that is about 24 W/m2. But then part of that is directly reflected. ○ Geometry. The solar flux is being added to the outgoing IR etc, which is measured in W/m2 of Earth’s surface. So it’s divided by 4 – 341 W/m2. 7% of that is about 24 W/m2. But then part of that is directly reflected. Thanks but there is still a bit of confusion here. The SMART-1 spacecraft was not in Low Earth Orbit (LEO). I am familiar with the added outgoing IR as I have built spacecraft that take advantage of the extra flux when at altitudes of a few hundred km. SMART-1 began in a 400 x 43,000 km orbit at an inclination of 7 degrees from the equator. The added IR is not added to the solar flux at these altitudes and indeed by the second peak, the spacecraft would have been almost out to lunar distance as it went into lunar orbit shortly thereafter and you can see the flux variation in the SMART-1 EPS data.. In the spacecraft power systems design world we use an incoming average flux at 1358 w/m2 with a number in January of 1390 w/m2 and 1328 w/m2 on July 3rd. This is a substantial variation in solar flux. This average flux is stated in the engineering world as AM0 or Air Mass 0 with zero being no atmosphere at the distance from the Sun at Earth’s average. In the solar energy world, we use a standard of Air Mass 1, which at sea level is stated at 1,000 w/m2. The rest is reflected, refracted, and or absorbed. I build solar power systems for use on the Earth as well and I just installed a system at Yellowstone National Park. This system has a name plate power of 8960 watts. This means at AM-1 conditions, the panels should put out that much power +/- 5%. However, at the place where our system is installed, which is Bechler station at 8200 ft altitude, the output of our system, as I measured it just two weeks ago, when I measured the output of the system, it hit almost 11,000 watts output. We have seen this before at high altitude sites with our hardware. That is 22.7% increase in power. If you use the worse case of 5% over name plate power and add to that the flux at that altitude, you get about 16% greater sunlight at this altitude, which roughly corresponds to the reduced atmospheric pressure at that altitude. I have never YET seen these insolation variations discussed in the literature when looking at CGM models or even estimates regarding absorption/emission of visible radiation. It is obvious to me as an engineer with an engineering physics degree that we are not doing a very good job with our computer models or data taking/manipulation to deal with the real world as it presents itself out there. This goes directly (to keep Steve happy) to the climate sensitivity issue. If the climate was as sensitive to the 0.012% variation in the concentration of a trace gas as the modelers tell us it is, then this should show up in much more obvious ways than it does, especially on a monthly basis. We should be able to see some pretty big differences between high altitude locations in the southern hemisphere in the summer vs high altitude locations in the summer in the northern hemisphere. In the Aqua data that has been presented here by some you can easily see the variation in flux at altitudes above 68,000 ft as it directly matches the variation of the distance of the Earth from the sun in its periodicity. Why in the world would the large flux variations (well over 50-60 w/m2 not show up as a variation in the climate between similar northern vs southern hemisphere high altitude locations? Steve I hope that I am not off topic here but it seems that if the climate was as sensitive as we are led to believe, that this would be a slam dunk to show up in the data sets. ○ Dennis, Thanks for your views. Are there similar plots available for the Aqua and Terra satellites (operational characteristics for the MODIS program) or other polar-orbiting sats? ○ Are there similar plots available for the Aqua and Terra satellites (operational characteristics for the MODIS program) or other polar-orbiting sats? Available, or available publicly? There is data available but you would have to get it out of the engineering teams. I design power systems for space as well as solar power systems on the ground so I get information like this. A better source that would be less influenced by the Earth’s IR would be from a GEO comsat. That data is generally protected by NDA’s but it could probably be obtained for scientific purposes from the right operator. 10. I am an amateur but: Nick says :”The reason for going to anomalies is that they are looking to a regression to relate changes. ΔR vs ΔT.” Seems to me the left graph “before” gives the feedback signal, and the right graph “after” gives the monthly noise. Totally meaningless to regress it n’est ce pas? □ “Seems to me the left graph “before” gives the feedback signal,” It gives a strong “signal”, but it isn’t feedback. The 20 W/m2 flux variation is mostly due to the Earth’s orbit. And the variation in T is the remains of the seasonal variation after adding NH and SH – it represents only the difference, produced largely by the disparity in land mass. It’s hard to see any feedback there. After monthly anomalies, obviously the variation is much reduced, and noise is a problem. But that’s where you have to look. ☆ Nick, if anomalies after monthly centering are the “right” way to analyse data, why aren’t they used in GCM parameterizations? Just asking. ○ Steve, I’m not sure what parameterizations you have in mind. here is CAM3.0 parameterizing aerosols, and they use monthly-mean climatology. I’m not sure what parameterizations you have in mind. here is CAM3.0 parameterizing aerosols, and they use monthly-mean climatology. Are you being intentionally obtuse? GCM output, as I’m sure you are well aware, is denominated in deg Kelvin not monthly anomalies. If deg K is as uninteresting as you say here, then why don’t GCM operate in monthly anomalies? ○ He’s just making cherry pie. And moving on. ○ Using anomalies does help because it eliminates the systematic bias shown in the output from different GCMs. ○ “Are you being intentionally obtuse?” No, you’re being inexact. Solving for field variables is not parameterization. But yes, of course the solutions are in absolute temp, velocity etc. There is an issue about discretization and sub-grid averaging. A very big issue in fluid mechanics. It lies behind my earlier comments about purism in time averaging. In CFD, you pay a lot of attention to the relation between time interval and space interval. Not only is it a waste to be “purist” but it often causes instability. So GCM’s work in intervals of 30 min or so, and space grids of about 100 km. Below that scale they implicitly average. As always, the ability to meaningfully average is based on conservation laws. ○ OK, Nick, then maybe you (or your colleagues) should do the necessary experiments in a reproducible manner that demonstrate this and/or show me the studies that do so, because I am not aware of any that do show that the ability to meaningfully average, as it has been done in paleoclimatology, is based upon conservation laws. TIA PS: It has been many years since I took physical chemistry, but I’d be willing to do some review in order to get up to speed on this. ○ CDQ, Nick is dissembling again. There are a couple of really good threads on CA archives with Dr Browning on his and Dr Kreiss and Gavel’s work. What Browning and Kreiss show mathematically is that it is physically impossible to extend the time one of these models are run before the exponentially increase in errors swamp the matrix. The amount of time depends on the grid size and step size. Where as Nick is calling it “purist”, it is a physical limitation inherrent to the too large grid size and too large time step. In order that they get around this, modellers use hyperviscosity and adiabatic adjustment in order according to Dr. Gavin or his source to prevent negative mass and/or energy in a grid. This makes the models engineering applications more so than physical (physics) applications. However, note that there is just one run of the independent actual response instead of the 10′s of thousands that the engineering models require to validate the usefullness. Also note, extrapolation, which is what all the GCM’s do, is known to not neccessarily correct, and requires a separate validation. In other words as two modellers stated in the the peer reveiwed literature, it will take 130 or more years to show that a 100 year model was correct, Tebaldi and Knutti. PS I am bad at spelling, my apologies if I have erred. ○ Well, John, thanks for replying. I was being a bit facetious as well as serious. I should not have left off the ;) in my post. I am aware of Dr. Browning’s posts. I also have a few points of contention with respect to the premises and axioms underlying these papers which seem to be in fundamental disagreement with the real system. Earth has a weather system. Our climate is a statistical summary of previously realized weather. How the statistical model gets specified is important. We also live in and adapt to the atmospheric surface boundary layer and that’s where the weather action is. These papers being discussed mostly don’t seem to take that into account. Is it really being too purist to expect the ‘experts’ to have done the foundational work showing that their procedures are justified? Just who’s fooling whom here :), really? ○ I did not mind the missing smiley. I like to point out that there are real reasons to take a model with a big grain of salt. Even more so, those that use a methodological approach incompletely. And those that extrapolate even with proven methodology are suspect, much less those extrapolate that don’t. So if it is overkill. I am a bit sorry, but not that much ;) 11. Not sure that the two plots capture the same trend. The problem is with the left plot. Part of the observed steeper trend will come from the dissimilar land areas of the Northern and Southern Hemispheres. The only way to eliminate that effect is to construct trend lines for the same month of many years (or combine such trend lines as in the right hand plot by combining monthly anomalies). In fact, you can rediscover the trend from the right hand plot in the cloud of summer data points in the left hand plot. This is, in essence, the same point that Nick Stokes was The data is still very interesting because the annual cycle provides a natural experiment. In particular when we also include the annual CO2 fluctuation in the picture. For example, it would be interesting to make comparisons between Northern and Southern Hemisphere positions taken at a six month offset from each other that have nearly identical conditions except that the CO2 content is different due to the annual cycle. If perfectly executed this would yield a net outgoing flux as a function of atmospheric temperature and CO2 fraction (limited to the specific type of location 12. Here’s a plot in the style of the figure in the head post relating AMSU 600 mb to AMSU SST temperatures – left absolute, right anomaly. In the anomaly version, the annual cycle does not exist. I realize that Nick Stokes thinks that the annual cycle is uninteresting, but I find it very interesting and potentially relevant to the regressions proposed in the academic literature. For example, it seems interesting to me that 1) GLB surface temperatures rise about 0.5 deg C in the southern summer (Jan) while GLB 600 mb temperatures don’t rise very much; 2) during the northern spring (Mar-Jun), GLB surface temperatures decline (~.4 deg C) while 600 mb temperatures rise quite sharply (~1.6 deg C) and then 3) 600 mb decline by about 1.6 deg C with a small (~0.1 deg C) decrease in GLB surface (AMSU). The diagrams presented here do not themselves constitute exhaustive analyses. They suggest various analyses e.g. NH, SH, tropics, for example. □ “I realize that Nick Stokes thinks that the annual cycle is uninteresting” Well, I said “not of interest here“. And the reason is that, while the annual variation of surface temperature may cause variation in the flux, and vice versa, the confounding effects means that you can’t make the necessary attribution. That’s not a consequence of sophisticated statistical analysis – it’s a practical issue. How could you do it? Flux, for example, has a more or less known cycle of 20 W/m2 from orbital TSI. There’s another big effect of NH/SH annual insolation variation, each hemisphere having a different albedo. There are the marked annual patterns of monsoons (clouds etc). The different altitude effects are of independent interest, but likely have more to do with the fact that the main heating of the atmosphere is at the surface, in latitudes where the sun is seasonally reasonably high in the sky, and mixing is more effective at higher altitudes. The former would cause a lag, and the latter an attenuation of the cycle, and these seem to be reflected in your plots. The different altitude effects are of independent interest, but likely have more to do with the fact that the main heating of the atmosphere is at the surface, in latitudes where the sun is seasonally reasonably high in the sky, and mixing is more effective at higher altitudes. The former would cause a lag, and the latter an attenuation of the cycle, and these seem to be reflected in your plots. Those two sentences are awkwardly confusing to me. Can you please re-state? ☆ Nick, Does this 20Wm-2 delta refer just to the instantaneous TSI at perihelion and aphelion? It seems to me that there is also an issue of accounting for those deltas over the amount of time that they are in effect (the orbit time spent near perihelion (in days or months) is less than than the same number of days/months near aphelion). Since perihelion and aphelion happen to appear in Jan and Jul, respectively, I’d expect those time-integrations to make noticeable impacts on temperatures seen in NH and SH at those points separate from just a difference in TSI, itself. Are these being accounted for? ○ OUH, I haven’t looked much at the details of the orbit effects. They are removed by removing annual cycles generally, not by direct quantification of the effect. ○ In other words, we average them out so we don’t have to worry about how to explain them. ○ Input signal with large variation (due to elliptic orbit) is modulated by the clouds and enters the thermal energy storages of Earth. In addition, the angle of arrival changes all the time, and you need Nautical Almanac to track the GP, geographical position of the Sun. Motion of GP causes daily cycles, changes in hours of daylight, and seasons. Too difficult to handle all this, it is much easier to downsample the observed data to monthly averages (matrix D) and then left-multiply it by M (as in the above code) and make gridded average of the result (matrix G). All you need to do is to make sure that the linear transformation G*M*D will not affect your results (physical or statistical). For purely statistical arguments it is quite easy to show the effects, as the transformation is linear (something I’ve been working on). Steve’s result is more on the physical side. Now that we have the argument ‘Natural factors cannot explain G*M*D*observations’, it is interesting to see what we can say about the actual observations. G*M*D is not invertible, but some statistical statements *) can possibly be backtracked into the observation domain. *) such as one in IPCC AR4WG1, “The Durbin Watson D-statistic (not shown) for the residuals, after allowing for first-order serial correlation, never indicates significant positive serial correlation.” It seems to me that there is also an issue of accounting for those deltas over the amount of time that they are in effect (the orbit time spent near perihelion (in days or months) is less than than the same number of days/months near aphelion). … transform true anomaly to mean anomaly to have a parameter that does vary linearly in time… By re-reading the lecture notes and with little programming I got one example out: Computed insolation per month / m2 for Trondheim, Norway, shifted one month (ad hoc, some delay is ok I guess) and then LS-fitted to observed temperature averages (Jones data). I did the same for circular orbit, and it seems that summer is too warm in that case: The eccentricity effect is very weak, axial tilt of course dominates. And these are not additive but multiplicative factors, so it is not easy to extract the global eccentricity effect out. ○ updated this a bit in here, http://uc00.wordpress.com/2014/03/19/insolation-vs-temperature/ , no need for ad hoc delay in this model (sorry for commenting old post, coding is slow process as I do this only on long-haul flights) □ Steve, could you clarify how the anomalies are calculated here. Are you referring to deseasonlised data rather than what is usually called an anomaly? ie deviation from the mean over some arbitrary period. Steve: I’ll post up code later today. In this case, they are deviations from the monthly mean over the period. 13. Physics and time lag: Today physicists use hPa (and not mb), W (and not w), °C (and not deg C). 14. I can’t discover the graph, saw it 3 months ago, but plots by month of global radiation at different altitudes, surface to tropopause, show different patterns at low altitude than at high. The NH summer rise becomes less prominent at higher altitudes, so comparison of 600 mb pressure altitude with surface needs some more qualification as to physics before math perhaps. At even higher altitude, IIRC, outgoing by month is about horizontal. (Disregard if the graph used anomaly data). 15. Your first graphic is very interesting but you are still seeing some regression attenuation. Though the improved correlation shows you have better S/N , this raises the question: what signal? The trivial model for lambda has _random_ ran and non-rad terms. It does not have a cyclic term. (This is an over simplification that I think it is essential to address before drawing any conclusions from it). It would be instructive as you say to add two cyclic terms to your regression model here that represent the NH and SH annual variation with amplitudes that would be determined by the regression. My guess is that when you have removed the significant error in X that this contributes your linear estimator will be a bit higher (let me guess it will be nearer to 9.2 ;) ). This would then be a third close result by different methods arriving at very similar figures. It seems that there is a strong component that can be modelled as linear. It now becomes necessary to ask what this linear relation represents in climate. It seems that there is a very strong short term negative feedback. The idea of removing the seasonal component is an attempt to eliminate these significant cyclic terms in order to get closer to a situation where the trivial model can be applied to infer lambda. (It is , of course, perfect right and proper to ask if this is being done correctly or distorting the result). The lower corr. of the deseasonalised data is not a surprising result and does not in itself suggest this process is bad. What it does underline and is STILL not being looked at effect of reg. attenuation due to noise in x . This is much more pronounced in such a case because of the reduced S/N. This is the ONLY reason why you are getting a “slope” of 2.61 . May I take this opportunity to suggest referring to this number as the “regression estimator” or similar. A fuzzy mess like that does not have a “slope”, calling it that makes the instant, subconscious and false inference that this value represents the linear relationship. BTW . I have not had time to decorticate LC11 but I suspect their careful selection of periods with good S/N is effectively doing something similar. I think this raises the same questions about what the resulting linear regression represents. I think their method has merit but am unsure about the physical interpretation. This is starting to get somewhere. :) 16. As I’ve remarked elsewhere, looking at the residual trend that is produced by R.stl there are oscillations with something like 3 and 5 year periods. (Poss. artefacts but I don’t think so). The larger part of these swings are a close match to ENSO variations. This requires a term reflecting this if the simple model is to be used to infer lambda for the climate system. ( This would also address one of the key issues raised by Dr. D. ) This is why I don’t give too much weight my 9.2 from fitting the simple model to satellite data. I am probably having to exaggerate some parameters to make the random terms partially simulate the missing cyclic term. Equally the 4.88 year time constant, that Bart and I both got by independent means, may be more to do with period of the cyclic forcing than exponential response of a model lacking this term. This would be in agreement with the divergence of Bart’s bode plots for real data and the model that kick in quite strongly around 0.2 per year (cf 5 y) I have also noted that Spencers lag regression plot of real data crosses the two points approx +/- 18mth that is not modelled by either the super-computer models nor the simple model. This shows they both fail to capture a significant feature of the data. This would be an expected result of the three year oscillation shown in the trend. Accounting for these two significant, non random forcings in the equation should lead to a situation where the estimated lambda gets closer to the physical meaning being sought. It may well also remove a significant amount of the noise in x problem that is confounding the use of OLS regression. □ P. Solar: I run processes that cannot be simplfied in the anomoly monthly average manner. Using such averaged data sets to look at residence times or control parameters is a very crude approximation at best. Worse, it can lead to wrong beliefs about a system. I wonder if you can comment on the F(t)=(F1+F2+F3..)dT assumptions that to me contra indicate the use of anomlies and averages. I am used to systems that do have some stiff components such that small errors acn cause the estimated system response to go from 200s to 2000s. But it seems to me in general that approaches that do not take such into account have assumed they do not exist. I do not see that this has been shown, just that it has been stipulated. ☆ John, I just sent a lengthy reply , maybe it’s held up in moderation but it’s not showing. In short I think it’s valid for linear processes. 17. John, I did comment on this to Steve on a earlier thread. I think this kind of split requires the quantities concerned to be linear. Temp and heat content are linear quantities as is radiation flux. Radiation is integrated over time to give a temperature increase. A time lag is a linear translation. Means, linear translations and integration are linear transformations. Eg. the mean of a sum is equal to the sum of the means. It seems that this is what is behind this kind of approach. It sounds like this is not applicable in your field. I don’t recall this issue having been addressed in the litchurchur. As Steve points out this seems to be assumed to be valid rather than being a stated assumption with justifications. Maybe this is obvious to those in the field and does not need to be restated in every paper. Some other factors may be more complex but it would seem that this is at least a fair first order approximation. □ Hmm, I’m not so sure linear translation is linear in this sense but that is not pertinent to the question of seasonal decomposition and the point John raised. ☆ P Solar the reason I asked is that I have a stiff situation with a simple forcing of a fuel, water, air system. Temperature, I do not think can be related to the linear system of forcings as stated. This is because there is an assumption of the mean temperature and mean energy being linear. But the water, air, heat system is in phase space. In other words the claim is made that the average state of the system IS found based on Temperature, but state is defined by H, T, wv%,P not average of T and wv. I do not consider it a bad assumption at the earth boundary, though not strictly true due to evapotranspiration, nor bad assumption at the TOA. But I find it a bit questionable to be looking, say at a water feedback, and not express its state(s) as they should be stated, a function of the phase space. I believe what they use is a psudeo equilibrium assumption that is questionable in a control volume that has phase change water vapor to water condensate, and not have enthalpy. An assumption of a constant adiabatic response I beleve is also one of the assumptions. Since this is going to thermo and the host has asked not to let threads get side tracked on this issue, perhaps we should discontinue. But i think those two assumptions above are part of the not stated every paper background. ○ I think you are misunderstanding the use of linear here. I’m not saying total energy has a linear relation to temperature. A physical quantity is said to be linear if its changes are additive. For ex. supposing surface temp rises 0.5K due to SW and 0.1K due to LW , the two incremental changes can be added to find the change due to total irradiance.Unlike air drag where you cannot add the drag at 20 mph to the drag at 10mph to find the drag at 30mph. the F(t)=(F1+F2+F3..)dT you refer to is an assumption that the physical quantities are additive. Yours is a very pertinent comment to the question of using deseasonalised data. I apologise that my reply was not clearer on this use of the term linear. ○ A better drag example would have been to say you cannot add the drag due to the forward motion of a vehicle to the drag caused by a head-on wind to find the total drag. ○ I agree about drag. But that is also true of the phase envelope of water in an air-water-watervapor system. In fact what is worse, is that defining the system as heat engine at TOA with boundary conditions means that it is actually an entropy engine. This definetly means it is like your drag example. This system is defined wrt entropy not temperature. Temperature at T^4 is the defining boundary condition, but as you say, I think it does highlight problems with “deseasonalized” data. To me a problem with deseasonilizing is that it is detrending. And several threads/papers have pointed out the problems with trending after detrending. If we were not trying to determine relationships except at TOA, I would agree it would not matter. However, that is not the case with feedback which occurs throughout the control volume. □ A minor point: “Eg. the mean of a sum is equal to the sum of the means.” P Solar, are you saying that ( 2+2+2 + 3+3+3+3 ) /7 is equal to { (2+2+2)/3 + (3+3+3+3)/4 } /2 ? ☆ Eduardo Coasta If the variable X is red2, blue2, green2 and the variable Y is red3, blue3, green3, yellow3 the variable X+Y is the 12 combinations of one of the 2s multiplied with one of the 3s Thus E(X+Y)=E(X)+E(Y)=5 ○ Sorry: Eduardo COSTA and it should be “added”, not multiplied, which gives 12 sums of (2+3) each ☆ No, sorry, it would have been better to write it mathematically: E(x)+E(y)=E(x+y) ; where E() is the expectation value aka mean. in your example 2+3=5 18. From Nick Stokes: “There’s no point in being purist about the time component of averaging.” What a profoundly weird comment. Is it a more correct or less correct method? Does it aid understanding or inhibit understanding? Does it lead to spurious results or robust results? Maybe these are the key “points”. In regard to anomalies, I think the widespread use of these in climate science is a systemic problem. It is like plugging an anomaly into the ideal gas law and expecting useful information. □ I found it extremely revealing but I guess I’ve been studying climate science too long to find it weird. Nick’s point was that other, prior averaging of temperature meant that it made no sense to be ‘purist’ about taking monthly anomalies. In for a penny of impurity, in for a pound of it – climate science in a nutshell. David Wojick made the wider point well I thought on Climate Etc. in February. □ But I would agree. Nick is right that there’s no point in being “purist” … about anything. But the pea in the thimble here is that Steve’s concern is purely *pragmatic*, not *purist*. What’s the effect of taking one piece of signal and folding it up into some other signal of interest, basically pretending it doesn’t exist? This is not purism, it’s pragmatism. This is where Nick mis-diagnoses Steve’s line of inquiry. The fact is there are those that will argue that a model is inadequate just because it can be made more detailed. Steve’s inquiry is not of that nature. Purism is not the issue. ☆ But Purism is an interesting word, both the linguistic variety, which most sociolinguists decry, and the offshoot of Cubism. Thanks to Steve, bender and Google for jogging my memory on ☆ “What’s the effect of taking one piece of signal and folding it up into some other signal of interest,” If, for example, you want to unfold the annual variation of temperature and take it as representing the seasonal effect, you’ll be wrong. That has already been swamped in the spatial averaging, which adds together NH summer and SH winter. All that remains is the difference based largely on the NH having more land. I say there’s no point in being purist because averaging means you are only going to get information that survives mixing in time and space. That reflects conserved quantities like heat. If you want to refine the scale to try to incorporate more physics, you have to look at it systematically. You can’t recover by tinkering with time averaging what you have lost in space You can’t recover by tinkering with time averaging what you have lost in space averaging. I think this is another of the sort of off the top of you head remark on your part that Steve complains about. How about trying to demonstrate that mathematically? I get the feeling that if you’re correct, the whole of the paleo-climate project would collapse. ○ My example was straightforward. Most of the seasonal information was lost in combining NH and SH, spatially. A shadow remains, because NH has more land. But if you really want to find an effect based on seasonal temperature variations, you have to recover the space resolution. Otherwise it’s lost, and can’t be recovered by modifying how you look at the space averages in time. ○ Rather than refuting it, here you are *making* my argument, but now in the spatial domain. I like this. For the record, I did not advocate *any* approach, despite your invitations to do so. So I’m glad you prefaced your supposition of wrongness with “if”. 19. Given that long-wave radiation varies with the 4th power of temperature, one really needs to work with absolute temperature. Either taking a monthly mean or an anomaly as described is not valid in any sense because it violates radiative physics. IMHO. □ T^4 can be approximated as kT for small deviations about a large value. It’s a bit like sin(x)=x for small values around zero. 300K is large compared to the variations in question. I too thought this was bad until I actually tested the error as a percentage. Maybe you should have tried that too ;) ☆ Since T^4 is about 10^10 and kT is about 10^(-21) with different units (if k is the Boltzmann constant (or even the Stefan-Boltzmann constant)), you lost me this time. But I would agree that T^4 is more or less a straight line around 300K for the anomalies used. ○ My k was an arbitrary const not the S-B const. So we are in agreement. I was simply stating the linear approximation. ○ Then you might want to add a second constant: k1*T+k2 (k2 negative) otherwise if T^4=kT (follows k=T^3) the derivatives would be 4*T^3 and just T^3 ☆ The reason why that works is that T1^4-T2^4 factors to (T1^2+T2^2)(T1^2-T2^2), which further factors to (T1^2+T2^2)(T1+T2)(T1-T2). When T1 and T2 are close, the (T1^2+T2^2)(T1+T2) part is essentially constant, and it behaves like (T1-T2). Remember, in radiative heat transfer, both temperatures matter. ☆ Isn’t the problem the fact that T itself is an average of temperatures that do have significant variation? ☆ Where I live, a hot summer day can be 300 K, and a cold winter night 265 265^4 is only 61% of 300^4 I respectfully suggest that integration is required, to avoid losing information that may prove to be significant. 20. I don’t think I have posted here before because I don’t have the maths to properly understand much of the discussion, but this has now touched on something that has bothered me for a long time. The way I understand it, the whole point of using an anomaly (regardless of what it is based on, daily. monthly, yearly) is to try and remove the effects of yearly changes due to orbital issues so that other effects can be observed. Do I have that right? However, by removing the absolute values and the changes in those values we are losing sight of the enormous energy transfer during this yearly period. For example, as Steve has noted, the NH summer has a much higher atmospheric temp than the SH summer: OK, this is explained as the effect of NH land mass being bigger than SH, but considering the fact that the incoming energy is virtually constant, [Steve: actually the annual variation is over 20 wm-2 due to orbital eccentricity and the greatest incoming flux is in the SH summer.] this difference in atmospheric temperature represents a serious flow of energy – presumably into and out of the upper oceans. With such a large flow of energy in each direction, twice a year, the minor fluctuations in the “anomalies” are almost meaningless and – very probably – well below the the sensitivities of our instruments. Hence the pathetic r^2 values when we try and plot these. [Steve: I don’t think that it’s an instrumental problem. If there’s an issue, it’s a conceptual and methodological one. Maybe I am stating the obvious here and people already just “know” this stuff, but I really do think we are struggling to look past the log in our eye, to find the spec of dust in someone else’s. 21. An early (1978) paper by Ellis, Vonder Haar, Levitus and Oort comparing the annual cycles of net radiation flux and of ocean heat storage may be of interest: ‘The annual variation in the global heat balance of the earth’, JGR 83, pp 1958-1562. □ Interesting reference. The structure of the annual cycle described in it seems to stand up with the better recent data. It would be interesting to compare what early ideas of the parameters were to more recent ones. 22. I have not followed these discussions in detail and my stats are very weak. So, maybe these comments are way off base and topic. Denizens of CA are always on top of the situation and will let me know if that is the case. The Earth’s systems have never been and will never be in radiative-energy transport equilibrium. The TOA radiative-energy imbalance will always wiggle and it’s not clear that it wiggles about some kind of roughly-constant average energy level. There are no driving potentials to obtain that kind of wiggling. What happens to the energy after it enters the systems is always changing and thus affecting the radiative-energy transport states within the systems and thus the emitted radiative energy. The fraction that makes it to the surface is always changing, too. The lack of equilibrium, and the consequent constantly-changing states within the system, contribute to both temporal and spatial heterogeneities. The radiative-equilibrium concept, so far as I know, has never been quantified relative to the time scale over which the concept is assumed to be valid. The spatial averaging scale is very roughly taken to be some limited aspects of the contents of the entire Earth systems. The papers generally report yearly-average values of the temperature of the atmosphere near the surface ( 10 m I think ). What I have not yet seen discussed are the effects of all the heterogeneities, both temporally and spatially, when the above hypotheses are introduced. Somehow, it seems that it is again assumed that these real-world effects cannot be sufficiently large to in-validate the averaging. There are, however, physical situations for which temporal and spatial heterogeneities sufficiently dominate that averaging which does not account for these can never produce estimates of the states of the system with sufficient fidelity to be called predictions. I’ve often wondered if that is not the case for the Earth’s climate systems. Application of concepts such as equilibrium sensitivity to the Earth’s systems might work out if the systems were approaching an actual equilibrium state after perturbations of, say, CO2 content in the atmosphere. If the systems were in fact returning to an equilibrium state, and the present state was way way out on the long tail of that approach, the concept might be valid. None of this obtains for the Earth’s systems. That’s not a realistic concept. Neither is the concept of a transient sensitivity because of the constantly changing nature of all aspects of the systems of interest. There might be some time period over which the system responses are sufficiently more-or-less repetitive that would allow for a rough estimate. However, that approach does not address the effects of spatial heterogeneities. The far southern latitudes, the tropics, and the far northern latitudes are all different from each other. □ If anyone can point to any IPCC-cited literature that squarely addresses Dan Hughes’ concern, it would be appreciated. Failing that, where would you expect this topic to be covered in the Topic probably not appropriate for this thread. Still. 23. What happens if one substitutes weekly centering instead of monthly centering? □ You’ll get interesting figures. This http://climateaudit.org/2011/09/13/some-simple-questions/#comment-303279 seems to hold all the time ( I don’t really have Dirichlet-style perfectly rigorous proof ). 180 ‘month’ anomaly for UAH: ☆ UC – can you explain these figures some more? I think I know what you mean, but your comments are pretty oracular so far. ○ Hah, I thought I was the only one who didn’t understand them. ○ You could think of this presentation as somewhat more “oracular” then the UC notes: It is about “Climate Sensitivity” — really.See the work in the area of page 30. Just don’t skip any slides on the way there. Then let me know if the work here is more useful, or less useful. ○ For any (n by m) matrix [tex]x_{ij}[/tex] where i is month and j the years (and number of points = n*m), such that the mean over each month is zero (i.e. [tex]\sum^m_{j=1} x_{ij} = 0 [/tex] for all i), the cumulative sum of OLS trends is [tex]C=m*\sum^m_1 a_j[/tex] where [tex]a_j = (n\sum^n_{i=1} i*x_{ij} – \sum^n_{i=1} x_{ij} \sum^n_{i=1} i)/(n \sum^n_{i=1} i^2_ {ij} – (\sum^n_{i=1} i)^2 )[/tex]. The denominator is constant for all j, and so C depends on the double sum over i and j of the numerator. The first term is zero (swapping the summation order) because each the time-series for each month sums to zero, and the second is zero because the overall sum must be zero also. Thus C=0. ○ can you fix the latex? Obviously got the short code syntax wrong… ○ % 12-month anomaly: % for 180-’month’ anomaly use: % yrs=2; % mons=180; ○ Now you’re speaking my language. ○ % Sum of annual trends: x=[ones(mons,1) (1:mons)']; % blkdiag(x,x,x, …): X=[x repmat(xo,1,(yrs-1))]; X=[X; xo x repmat(xo,1,(yrs-2))]; for i=2:(yrs-1) X=[X;repmat(xo,1,i) x repmat(xo,1,(yrs-1)-i)]; %ans = % 8.6086e-017 % ave*pX*M seems to be vector of zeros. Sum of annual trends ave*pX*M*Raw is then % zero, % whatever is the input series Raw. ○ Note that Dessler adds a trend to the data: The impact of a spurious long-term trend in either DRall-sky or DRclear-sky is estimated by adding in a trend of pm 0.5 W/m2/decade into the CERES data. so my trend-analysis is not completely OT. One could ask whether he inserted a trend or staircase function (result of adding the trend before the anomaly operation)? Interestingly, it doesn’t matter for his result (pm 0.18 W/m2/K). M is symmetric and idempotent, so one gets the same slope in both cases. Furthermore, there is no need to deseasonalize DRcloud at all to get the result 0.54 for the cloud feedback. 24. Steve I would be very interested to see what Ross thinks about these regressions. It looks like an identification problem, where a one-variable model is inadequate. □ this is simply a first cut at the data to examine the issue of monthly centering. It looks like the figure-eight could be modeled relatively easily. ☆ Yes, it looks like a Lissajous figure. Try y<- 8*sin(t) + 2.7*sin(2*t) ○ Ah , at last someone who knows what a phase plot represents. What is the “slope”? ;) LOL Seriously, That look very close. Can you express that as an R formula? That is one area of R I’m having trouble mastering. ○ This is quite a good illustration of the regression attenuation problem. If we set the phase lag of 0.22 lm(y ~ x) returns a slope of 8.0 despite the presence of the second sine. This is because there is zero correlation between sin(t) and sin(2t) [over an integer number of cycles] With the phase shift , lm gives a regression estimator of 7.75 .(cf Steve’s 7.7 on the real data) The data is starting to decorrelate due to the lag and a simple regression starts to deviate, this increases with the lag. Note that the figure does not “tip” , it gets “fatter”. The “slope” of the figure remains the same but the regression estimator gets lower and lower. That is a pretty clear demonstration of why Andy Dessler is fooling himself (and anyone else who gives any credence to his results) about the value of climate feedback. This, gentlemen, is the sorrowful bottom line of where the “science” of positive climate feedback comes from. ○ “This, gentlemen, is the sorrowful bottom line of where the “science” of positive climate feedback comes from.” No, this is Steve’s graph. Climate scientists would use monthly anomalies to avoid this situation where orbital variations are regressed against seasonal NH/SH temp differences. The phase plot earlier was in R code. It should run as pasted. ○ Nick I realise this is usually done with deseasonalised data and have said in several places why I think that may be more useful in searching lambda. Here, I am using the formula you derived as an abstract demonstration of the problem of doing naive OLS regression on data where it is not appropriate. The real data used to estimate feedback has a whole world of other uncorrelated junk and different lags. The phase plot is not a pretty lissagou figure it’s regurgitated hairball full of puke. ;) That is why Dr D. is seeing such awful R2 values. Even with this nice clean example we see the effect of regression attenuation. It’s a nice clear demonstration because we can play with the lag and see correlation drop but the “slope” of the figure stays the same. I do see this issue being dealt with or even acknowledged ANYWHERE in the litchuchur. Are you able to see how this may be a problem? I do see this issue being dealt with or even acknowledged ANYWHERE in the litchuchur. The not I take it is implied? ○ I’m glad to see someone is paying attention ;) ○ I think the reason D sees low r2 values is just that there isn’t a strong relationship. That’s what he says, anyway. He’s arguing against LC and SBCM, who say that there is. As to the graph in the post, if you do a regression you actually get a slope which is not meaningless, but doesn’t capture the oscillatory structure. That should show up as non-random residuals. But there’s no indication that such structures are a problem in the deseasonalized analysis. ○ I did not say the regression result was “meaningless”. I said it was suffering from regression attenuation caused by the lag that produces an artificially low result. Saying “I think…” in the face of such a clear demonstration is rather weak. Are you seriously suggesting that the data D. is working with does not have any oscillations, lags, correlated or uncorrelated noise or errors-in-x-variable that could cause a similar artificial reduction in the regression estimator? Please try to answer yes or no rather than diverging elsewhere. ○ No, D’s data is similar to the kinds of data on which millions of regressions are performed, in many fields (eg econ). An error range is quoted which is meant to embrace these issues. But D’s claim has been repeatedly misrepresented here. He isn’t claiming to have established any particular trend. He isn’t even claiming to have shown that it is positive. He is just showing that it is unlikely that there is a large negative trend. ○ He claims that in the media and Trenberth claims it too. ○ When you were making similar claims on an earlier thread that D was not “claiming to have established any particular trend. He isn’t even claiming to have shown that it is positive.” I pointed out D10 concludes: “My analysis suggests that the short-term cloud feedback is likely positive …. “. So what you should have said is: “Unfortunately D has claimed more than he showed saying ‘that the short-term cloud feedback is likely positive’, he simply showed that his analysis was unable to detect any significant feedback”. ○ Well, he says “My analysis suggests that the short-term cloud feedback is likely positive” and goes on to say: “However, owing to the apparent time-scale dependence of the cloud feedback and the uncertainty in the observed short-term cloud feedback, we cannot use this analysis to reduce the present range of equilibrium climate sensitivity of 2.0 to 4.5 K” ○ Probably time to give this a rest, but good to see you no longer feel it is a misrepresentation of D to suggest he claimed a (likely) positive trend. (My understanding is that in IPCC speak this amounts to > 66% probability.) Getting an acknowledgement from you that he was over egging the pudding in this regard is probably a bridge too far. ○ A graph of the Lissajous phase plot is here. ○ my question was how you got the coeffs. Hand-rolled or regression? If you have a regression method , would you like share it ? ○ Hand-rolled. It’s a sequential process. The shape of the curve says you’re looking to plot the first two harmonics. Start with first harmonics without phase lag. That’s a line segment, and regression could be used, though I did it by eye. Then tweak phase shift to get an ellipse that looks about wide enough. Then introduce the second harmonic, tweak the coef until the crossover is about right. I’m sure it could be done much more scientifically. □ No opinion, I haven’t looked at any of the papers or data or debates. Offhand I would think a VAR model is the way to deal with bi-directional causality, but I don’t have any time myself to try it out. 25. “I find it hard to visualize a physical theory in which the governing relationship is between monthly anomalies as opposed to absolute temperature.” I think that this is a more profound statement than you realize. When I first started reading blogs about climate science I was struck by this and couldn’t understand why all the detrending and anomalies rather than using the physical quanties. After all, a physical theory should predict physical quantities. My suspicion is that it was originally done to hide bad behavior by the models and then became common practice. If you look at the temperature maps produced by different models, they can differ among themselves by 5C or more. An anomaly makes this problem go away. The same is true for such nonsense as the global average temperature, which has no physical meaning, and its anomaly. You can hide a lot of dirt under that rug. After a while this propagated to all kinds of analysis. My conspiracy theory of the day. 26. A slightly different type of Monthly Centering: □ Bidirectional causality at its best. 27. I find these results absolutely fascinating. Here we have evidence that feedbacks seem to exist on a very strong negative scale. Since UC’s comment, I’ve wondered what would result from this sort of plot and now I wonder just why such strong evidence of a negative feedback isn’t important. The anomaly approach made no sense once I thought about his comment. We have gigantic annual forcings and the response by outward radiation exceeds anything climate science would expect according to atmospheric theory. I wonder where the experts are in the comment thread. Certainly, Gavin could shoot this down in a second. Why isn’t it an important result? Certainly, cloud feedback should occur on a 3 plus month scale and were it positive, steve should have a different result. Since the slope is so strongly more positive than 3.3, the feedback has certainly occurred and it is negative. I have to be missing something please tell me. □ See my two main comments early in this thread. This is the response to a strong cyclic driving force. I don’t think it can be attributed to what is being called lambda in a model without any cyclic term. I don’t think one feedback factor is any more realistic than a one slab ocean model. Shallow waters could react much faster and provide a stronger but less sustained feedback. If we are seeking the long term feedback response I’m not sure month to month variations are where we need to look. ☆ PS, I’m not saying one slab of mixed layer is necessarily bad for decadal time scale. □ The thing I would warn about is that feedback must be determined by the regression of (N – TOA Flux) against T, where N is the forcing, not simply TOA flux against T. This is because the TOA flux observed is a combination of the forcing + feedback (which goes to Spencer’s point about unknown N corrupting our feedback estimates), so we must remove the forcing to isolate the feedback parameter (lambda). If we are using the absolute flux measurements, not monthly anomalies, we have a significant solar forcing (as you say, “gigantic annual forcings”) to take into account before regressing the TOA flux against T. This is why I don’t think that 7.7 W/m^2/K slope necessarily reflects the climate feedback parameters, unless that Y axis is actually removing the different solar forcing associated with each month. ☆ Here is plot of CERES.net vs hadCRUT global showing just the long term trend extracted. this confirms your point in a way but on the other end. Even when the monthly variations are removed, there is a very strong, long term component (circa 3yr) that is part of “N”. ○ As the caption indicates that last plot was UAH, here’s hadCRUT Very similar but a clearer oscillation. (Recall this is a lag plot but the lag response of a sine is also a sine) ○ P. Solar, I’m having a hard time understanding what those charts are showing. Merely regressing the flux against temperature won’t tell you much about feedback unless the forcing (N) is small relative to the feedback*T term, correct? ☆ Troy: Your comment appears to be correct, but regressing against N – TOA flux assumes that there is no lag between the forcing N and the resulting change in TOA flux. I suspect the forcing must be integrated over time before it becomes a temperature anomaly (not necessarily a surface temperature anomaly). Further time may pass before the temperature anomaly dissipates into space as a TOA flux anomaly. There is an annual forcing (forcing anomaly?) associated with the eccentricity of the earth’s orbit. That appears to produce immediate (maximal in January) large temperature anomalies in the stratosphere (low heat capacity?) and delayed (maximal in April) smaller anomalies at the surface. The troposphere, which doesn’t absorb nearly as much solar radiation as the surface or the stratosphere, lags behind the surface (maximal in July, when the earth is furtherest from the sun). Superimposed on this annual cycle due to eccentricity may be smaller effects due to seasonality. In temperature zones, the warmest temperatures over the land occur about 1 month after the longest day, while SSTs (and some coastal temperatures) lag about three months. The Northern Hemisphere has much more land area than the Southern, producing asymmetry. ○ Frank, While I agree it that it takes time after the forcing for the temperature to increase and yield a feedback, what I’m saying is that the forcing itself is contained within the TOA flux anomaly…that is, the TOA flux = forcing – feedback * T. This is what I mean by needing to remove the forcing from the TOA flux anomaly before regressing to get the feedback term. Merely regressing TOA flux against T leaves the forcing term in there (and your resulting estimate won’t be lambda), and that forcing term has a large seasonal component. 28. Steve says: “This residual has 4 zeros during the year – which suggests to me that it is related to the tropics (where incoming solar radiation has a 6-month cycle maxing at the equinoxes, with the spring equinox stronger than the fall equinox.) ” Canadian Spring equinox being late SH summer, benefiting from the orbital eccentricity. This cycle presumably looks like a halfwave rectified version of the cycle seen from higher latitudes that we are more familiar with. One side will have a somewhat larger magnitude due to the hemisphere differences already noted and as witnesses in your plot. Does it bear any resemblance to the seasonal component extracted from hadSST by R.stl ? Estimating the cyclic contribution to X data to be about half the linear component and 1/5 in Y, would suggest a rough and ready correction for reg attenuation of 7.7*sqrt((1+1/2)*(1+1/5))= 10.33 I’ll make a less clunky estimate once the code it up. I’d be very interested to see the slope estimate from a regression model including a cyclic term . This is approaching LC11 numbers , which is perhaps not entirely surprising. I think it is measuring a similar situation. 29. Could you throw a bone occasionally to those of us who don’t eat and drink climate acronyms and data sets? Would it kill you to give the URLs and column numbers of the data you’re using once in I’m sure the cognoscenti here all know the URL and column number for “AMSU 600 mb deg c” by heart, but after more than an hour the best I could come up with was http://vortex.nsstc.uah.edu/data/ msu/t2/tmtday_5.4, which doesn’t appear to give absolute temperature, and, in any event, the related readme is opaque as to what’s in the various data columns. Why present your readers with such I’m positive there are a lot of folks out there who could contribute mightily to analyzing these issues, but they don’t, because they find the effort of obtaining the data and decoding the jargon just too frustrating. My experience, after dealing with a wide range of technologies over several decades, is that the biggest impediments to understanding usually are not the technical concepts themselves but rather the jargon and poor exposition in which they’re cloaked. And, this site, I’m sorry to say, is consistent with that experience at least as far as the jargon and exposition go. Steve: In this case, I didn’t post up a turnkey script.However, I’ve provided dozens if not hundreds of turnkey scripts and have provided many materials for interested parties to examine, including a few posts ago on dessler v spencer. Yes, you’re entitled to ask, but I think that whining is unwarranted. I agree that clear description of source data sets is important and in my scripts I try to carefully document exact provenance – something that is seldom done in the peer reviewed litchurchur. □ Joe, I think the data referenced is at the UAH Discover website: Choose channel 5, which refers to the 600 mb pressure layer, and then you can go to Show Data As Text to get the actual daily temperature values, separated into columns by year. ☆ Oops, chop off that end part: ☆ troyca, Bless you. □ Steve, Yes, in a moment of weakness I was churlish, and I apologize for the tone. But I believe–no, I know–that your work’s influence, great as it is, would be many times greater if you would not write as if you expected everyone to have read and internalized all your previous posts for the past five years. Sure, a neophyte can’t expect to understand everything instantly without effort. But an occasional review of the bidding, e.g., reviewing exactly what is a “chronology” is or including a link to an explanation, would draw in many readers who, rightly or wrongly, are not otherwise going to do the research. And, speaking (as I believe you did) of scripts, your commendable practice of providing so many is largely compromised by your failing to repeat often enough where to find them. It was a long time before I was aware of http://www.climateaudit.info/scripts/. Clicking on the “Steve’s Public Data Archive” link was no help in finding it. Maybe a “Scripts” hyperlink on your page would be of value? People may find what you write on your blog plausible–I certainly do–but what they’re really persuaded by is what they can work out for themselves. And the number of people who will indeed work it out for themselves decreases exponentially with the number of hoops they must jump through before they can start their analyses. ☆ Many of us who are less adept at the mathematics sympathize with your point, Joe. However Steve is best at exploring the details. It takes special skills and adequate time to synthesize, condense, and translate to a different audience. The job is waiting for the right candidate to apply. □ It’s a tough standard to live up to 100% of the time. □ For acronyms, please see (on the sidebar) – and please add new ones you find! Thanks, Pete Tillman 30. These 2 figures illustrate what I have mentionned on the other thread. Most people think that using monthly anomalies, removes the yearly periodicity (e.g the signals with a period of 1 year due to the Earth’s orbit). It does much more. It removes all signals with periods 12, 6, 4, 3, 2.4 and 2 months. As signals with periods 3 and 6 (seasonal effects) are important for the system, a big part of the real correlation is due to signals having this period. Once one has removed all signals with the above mentionned periods (what happens in the right part of the figure 1) , the correlations are destroyed and the correlation coefficient dramatically This could be easily seen if a power spectrum is done. Then one would see that the power that is in the 12,6,4,3,2.4 and 2 periods in the left part of the Figure 1 is missing in the right part of the Figure 1. From the physical point of view it is obvious that the “system” depicted in the right part of Figure 1 has been stripped of its most significant (less than 1 year) periodic signals and one can only guess what significance was left in the leftovers. It is extremely stretched and actually has no justification to postulate that the 6 removed periods are irrelevant. Besides it is useless too. The real system being shown in the left part figure, it is this one which should be analysed and ONLY after the end of this analysis should attributions be attempted. When one removes wholesale 6 important periods and just handwaves them away as being not interesting for some particular question, then the value of the “analysis” of monthly anomalies keeps the same handwaving character. Unless, of course, one rigorously proves that all signals with 12,6,4,3,2.4 and 2 periods are external to the phenomenon under study and independent of it. This is clearly not done in the case analysed here. □ What you say is a rigorous and reasonable. Whether is it reasonable to that rigorous with a grossly non rigorous analysis like simpifying the whole climate system to lambda*T does not necessarily follow. It remains a good point to consider exactly what is being done. It maybe that the deseasonal approach, in subtracting something rather than averaging it out, is throwing the baby out with the bathwater. I’m not sure that is the case for lambda but I share your instinctive distrust of this kind of data mangling. □ Nick Stoles, is this a case of “purism”? ☆ Well, it’s wrong. Most people think that using monthly anomalies removes annual periodicity. Yes, that means the base annual sinusoid and all its harmonics. That’s obvious. Thay all have annual periodicity. And you lose seasonal effects. That’s why it’s often called 31. AMSU daily information is at http://discover.itsc.uah.edu/amsutemps/. CERES data (EBAF versoin) was downloaded from http://ceres-tool.larc.nasa.gov/ord-tool/jsp/EBAFSelection.jsp See post for details. □ http://www.climateaudit.info/scripts/satellite/amsu_retrieve.txt Many thanks , scripts save a lot wasted time digging. Steve: that’s why I try to place them online and, if I forget to do so, try to be quick responding. It seems to me that academic articles skimp on providing tools to get and retrieve the data as used is because it makes it harder for people to examine the statistics carried out in the litchurchur – which, as we see, is often surprisingly banal. ☆ Yes , this is a tactic akin to church using latin for centuries to ensure kept control of knowledge and the layman had to depend on them. They have the knowledge, we should believe. When you try to access that knowledge and question the tenets of faith we are given , we are denounced as heretics and pilloried . The parallels are amazing. I digress. The point of my post, that was a bit mangled by WP, was that I got a 404 on amsu.txt , was it meant to be amsu-retrieve.txt ? ☆ The non-availability of data is only a problem when academic science is being directly used to propose and produce public policy. To my knowledge, this is unique to climate science. This novel process of ‘from academic science directly to public policy‘ by-passes all the engineering studies and evaluations that are (supposed to be) done by, e.g., the USDA, the FDA, the EPA, and all the other regulatory bodies that evaluate the science and set up the field tests e.g., clinical trials for the FDA, that evaluate the academic reports, and that challenge and validate the claimed outcomes. The climate science process is a short-circuit and therefore entirely inappropriate. So, in virtually all cases except climate science, academic science stays in the academy. Data and methods are shared among academics typically on request, and usually there is no urgency because there is no public impact. I have to say, too, that in my experience the materials and methods sections of published papers are typically enough to reproduce results. In academic Chemistry I’ve never seen the methodological obscurantism that seems so systemic in AGW climate science. In any case, climate scientists pushing for direct policy outcomes have over-stepped their proper bounds (and violated their tenure responsibilities if they are employed at a public university). They have completely subverted the in-place systems required for translating physical results into public policy. In that, they’ve been abetted by politicians who have abandoned deliberative process, by the EPA regulators whose first commitment should be to their own methodological integrity (rather than to political dictates), and by science reporters who have committed to the policy while ignoring the complete circumvention of the test and evaluation process. ○ Pat I think an issue with “climate science” is that it is an observational science rather than an experimental science. A chemist can write down the procedures and technique used in the lab to measure the spectral lines of a new compound. You dont even need to publish the raw data because it’s verifiable – any one with a lab can also go and repeat the same measurement. But in climate science you cannot repeat the last ice age. You’re left reading the tea leaves. People going over and over re-analysing the same data set when in an experimental science they would be “repeating the measurement”. None of this re-analysis gets rid of systematic errors. You’re stuck trying to guess how some old thermometer worked, or how old tree rings grew. You cannot measure everything you’d like to. Its hardly a controlled experiment. And then they make public policy on it! ○ You’re right, Rob. But observational or not, the unique issue in climate science of deliberate obscurantism and willful subversion of science and process remains. But further, compare with the situation with another observational science, Astronomy. I’d suggest a serious ethical divergence between astronomers and AGW climate scientists. Especially when considering the real global threat represented by a potentially incoming large bolide. That reality is far more physically credible and potentially far more destructive than CO2-induced global climate disruption. Nevertheless, we don’t see astronomers jiggering data, subverting peer review, suppressing uncertainties, spreading alarmist propaganda, trying to impose policy, indulging in character assassination, and liberating billions per year for large telescopes and defensive satellite arrays. Astronomers have retained their integrity and remained ethical and modest. AGW climate scientists have recruited and built a Lysenkoistic cabal. The difference could not be more Astronomers have retained their integrity and remained ethical and modest. Wish I could agree there, Pat, but stories from certain key quarters suggest that they too have their own issues that parallel those of Climate Science in many ways. ○ It’s just that astronomers aren’t trying to take up back to the dark ages like some of the climate scientists. There is a known, huge price to pay for mitigation and an uncertain price for doing nothing. ○ I take your point Pat. Normally, nature itself keeps a scientist honest. An astronomer crying wolf and reporting an incoming asteroid will be kept honest by the telescopes of thousands of other professional and amateur astronomers. Nature will eventually keep the climate scientist honest, but we might be dead before then! In the meantime, considering the policy implications, these guys have to be kept honest by much greater public scrutiny. ☆ Steve: Dutch Uncle time Best not to go into assigning motives, I think: mind-reading, which you usually (and commendably) avoid. Though I don’t doubt that’s what happens sometimes…. RE: LITURCHUR This was amusing the first few times, but is growing tiresome (imo), and may make you look a bit silly to new visitors. We do get your point. Keep up the good work! Cheers — Pete Tillman ○ Since the threading is scrambling replies again: This is a cmt on SMc’s inline reply to PSolar, Sep 30, 2011 at 7:14 AM 32. can anyone express the formula Nick suggested as an R “formula” that can be used with lm() ? y<- 8*sin(t) + 2.7*sin(2*t) □ Yes, here is a lm() routine to fit the two-harmonic phase plot: x=cbind(cos(t),cos(2*t),sin(t),sin(2*t)) # two harmonics a=lm(y~x)$fitted; ###Regression fitted phase plot for(i in 1:11) arrows( x0=nx$amsu[i],y0=nx$ceres[i],x1=nx$amsu[i+1],y1=nx$ceres[i+1], lwd=2,length=.1,col=2) i=12; arrows( x0=nx$amsu[i],y0=nx$ceres[i],x1=nx$amsu[1],y1=nx$ceres[1], lwd=1,length=.1,col=2) Here is the picture – colors as in Fig 1, but phase plot added in blue □ The regression should really be normalised by standard deviation, or some other way of matching dimensions. Here is the code that does that: x=cbind(cos(t),cos(2*t),sin(t),sin(2*t)) # two harmonics af=cbind(1,x)%*%a #Regression fitted phase plot The fitted expression is: T = 252.98 +0.85*cos(u+2.66) + 0.22*cos(2u-0.42) Flux = -0.54 + 7.55*cos(u+2.59) + 2.64*cos(2u-2.12) u=2πt, t in years. ☆ Thanks , that intersting. 7.55/.85 = 8.882353 2.64/.22= 12 The first gives an idea of the attenuation in Steve’s original regression fit at the top of the post where he was getting 7.7 . A good example of how doing a regression on a linear model gives an artificially lower value even on fairly clean data when there is a lag that is not accounted for. The second figure is very much the kind of value LC11 came up with. This is the tropical 6mth seasonal cycle. Their study did centre on the tropics and in focussing on the periods of maximum change it may be the magnitude of this response that they were revealing. I still have had time to properly analysis their method. ○ I assume you left another not. 33. Btw an amusing test would be to make the analysis based on weekly anomalies (52 of them) instead of monthly. Then it would throw out 26 periods. No idea about the result but somehow I expect that it would be again something different. Yet taking an arbitrary averaging period should not change the results, should it? □ If all the data points were averaged together, there would be only one number for each series and the trend would be undefined. I’m guessing there is less information, is that is the correct term, as data points in fewer but longer intervals are averaged. Yet taking an arbitrary averaging period should not change the results, should it? Yes, but only so long as the measured system response is a result of the same physical phenomena and processes. Causality is the key. If causality is not considered, connections with reality are easily broken. If the same value of the system response is measured but the system arrived at that value due to different physical phenomena and processes, the values should not be averaged. And of course when the situation in the real world is that there are a multitude of phenomena and processes occurring ( the usual case ), the measured system response must be clearly dominated by the same phenomena and processes for each observation. Consider that convective heat transfer and fluid friction empirical data are characterized to be associated with laminar, transitional, and turbulent fluid-flow states under natural, mixed, and forced convection, plus the relative orientation of the fluid motion, a surface of interest, and gravity. All of these for simple, steady-state conditions and a homogeneous fluid. Other real-world situations introduce additional considerations. No one would consider averaging a measured system response from arbitrary combinations of these considerations. This issue is related also to the fact that partial derivatives, the usual ‘everything else being constant’ arm waving, in general, can not be actually measured in real world systems. Everything always varies. In computer model world, for example, changing a parameter and re-evaluating the model to observe its effects on the system response of interest, does not lead to evaluation of the effects of that parameter alone in the partial-derivative sense. Reduction of observations to solely a time series is just about the ultimate in suppressing considerations of causality. And maybe averaging observations over time periods is the ultimate suppression. I think the problem is introduced at the same time that ODEs are considered to be useful, and then equilibrium states are invoked so leading to algebraic equations. Justifications for suppression of the wiggles when data are considered as a time series should be presented on the basis of causality at the time the suppression is applied. Corrections of incorrectos will be appreciated. ☆ “Reduction of observations to solely a time series is just about the ultimate in suppressing considerations of causality.” Oh , I can do better than that ! How about you then dump the time-series dependency of both variables and plotting them a scatter plot? 34. If I understand this right, the yearly global temperature cycle causes an outgoing radiation cycle with strong negative feedback at high confidence levels. Nick Stokes says this negative feedback does not necessarily apply for other types of forcing, because it is “too confounded with other annual effects (like TSI) for attribution”. On the other side, I think this yearly experiment produces feedbacks all the time which would occur under any forcing scenario, such as increasing/shrinking sea ice, increasing/shrinking snow cover, increasing/shrinking cloud cover etc. and still there is this stongly negative result, so it may not be just an outlier. 35. “with strong negative feedback at high confidence levels” If you’re referring to Fig 1, that’s not a correct inference at all. What is plotted is nett upward flux, and it is dominated by TSI variation from orbital eccentricity. That happens to vary negatively with temperature (high influx, low temp). The reasons may be interesting, but in no way can be interpreted as temperature modifying the Earth’s orbit. □ This is a forced oscillation. The overall magnitude of the response (7.7 or whatever) is the primary system response, in no way is this the “feedback”. However, the fact that there is a strong signal in clearly identified cycles is interesting and may tell us something useful about the system. Since the cause of the two main cycles is geometric (orbital eccentricity and earth tilt producing seasonal variations dominated by the tropical 6 month cycle) we can be pretty sure they are purely sinusoidal. Thus there |mayO be an indication of a feedback in the residuals. Clearly subtracting out the monthly trends will remove, forever, both the forcing AND any feedback that is present. I guess this is the point Steve and UC are making. Much of this discussion seems to have been based on the misinterpretation that this “slope” which is not a slope represents the climate feedback. It does not. It is possible that the regression estimator of the residual plot at the top of the post may include an indication of a linear, in phase feedback term if only the regression were done correctly. (NOT a la Dessler) The “slope” of 2.61 does NOT give a value of climate feedback. It is a value that is reduced by regression attenuation. Correcting that requires a study and knowledge of at least the magnitude of the noise and other grek in there that is causing the attenuation. There is no magic correct answer. However, the incorrectness of taking that deformed result to be the true climate feedback is incontrovertible. With R2 values seen that error will be very significant. The correct slope could be 2 or 3 times that 2.61 . 36. “However, the incorrectness of taking that deformed result to be the true climate feedback is incontrovertible.” I don’t think anyone did that. This is all-sky flux (I believe). Dessler, SB etc are looking at ΔR_cloud. And Dessler did not use this data. □ Again it is the METHOD I am criticising not the data source. I gave a clear demonstration of this earlier and asked you to give a yes or no response and not to diverge elsewhere. You proceeded to diverge elsewhere. I don’t care if it’s cloud flux or cloud fluff, if the data is full of errors and noise in the x variable as it is in all this work, OLS REGRESSION IS INVALID. Period. 37. Steve, re-reading your original post, there seems to be confusion of too very different things. “The right panel shows the same data plotted as monthly anomalies. (HadCRU, used in some of the regression studies, uses monthly anomalies.) ” This discussion and UC’s point, as I understand it, concerns using deseasonalised data, ie time series that have gone through something like R.stl decomposition and had the seasonal component You often seem to refer to these quantities as monthly anomalies. That is not at all the same thing that is given in hadCRUT3 which is a time series of monthly deviations from a _unique_ long term average. (1960-1990 or whatever) The term anomaly itself is pretty stupid. They are simple differences from some arbitrary mean. Steve: I think that you’ve misunderstood HadCRu anomalies. They are calculated relative to monthly averages 1961-1990, not one LT average. □ Apologies, too many hours staring at the screen. All these datasets , HadCRUT, UAH etc have the annual variations removed by one means or another, otherwise we’d be seeing the huge cyclic trends you posted above. It may also explain a feature of the SB lag regression plots that has been troubling me since I saw them… hmm. □ So what can be summarised from looking at non denatured data that record absolute physical properties? In summary, there are strong signals that may provide useful information about responses and lags in the global system. These signals do not give any obvious information about “climate feedback”. There may be something in the residuals if these major cycles are removed (as opposed to the usual abstract 1. Two clearly difinable sinusoidal cycles are evident. They seem to be attributable to orbital eccentricity and tropical seasonal variation. the temperature response of the orbital cycle is about 4x that of the tropical one. 2. There is a good S/N ratio. (approx 10:1 by eye) 3. The overall maximum extentions are about 15 W/m2 and 1.0K ; this gives an overall “slope” of 7.5 W/m2/K , this is what is found by fitting an inappropriate linear model by OLS (=7.7) 4. Fitting a model which better represents the data reveals two cycles that are synchronous but out of phase. The amplitudes are 8.8 and 12 , both individually greater than the supposed OLS 5. If we wish to regard the OLS result as representing the dominant cycle, we see that even with clean data and and a small phase lag the result is lower than the magnitude of the response. 6. The small phase lag and presence of the lesser decorrelated cycle produce regression attenuation via errors in the x variable that lead to errors if this is presumed to represent the “slope” of a linear relation. There is some valuable information about the system response here but it does not reveal anything directly that has bearing on feedback and climate sensitivity. There is good S.N but the signal is not the climate feedback. Maybe it needs to be taken further. Bottom line: don’t do regression fits of linear functions on noisy, lagged, cyclic data. 38. Steve, there seem to be a few errors in your scipts. the first line gets a 404 on : there is a http://www.climateaudit.info/scripts/satellite/amsu_retrieve.txt Is that what it should be or did you forget to post a differenct version called amsu.txt ? I took what there was, don’t know if that was the file you intended. :? A=ts.union( ceres=-ebaf[,"net_all"],amsu=amsum[,"600"], ceresn=anom(-ebaf[,"net_all"]),amsun=anom(amsum[,"600"])) Error in is.vector(X) : could not find function "anom" Should this make_anom() ??? A=ts.union( ceres=-ebaf[,"net_all"],amsu=amsum[,"600"], ceresn=make_anom(-ebaf[,"net_all"]),amsun=make_anom(amsum[,"600"])) Error in 1:n : argument of length 0 Corrections would be welcome. Steve: sorry about that. I have a habit of leaving my workspace open too long. I was trying to respond too quickly here and didn’t shut down and re-collate to ensure consistency. It is anom=function(x) { month= factor(round(time(x)%%1,2));levels(month)=1:12 anom=month; levels(anom)=norm anom=x- as.numeric(as.character(anom)) □ anom <- function(x) { return(x – rep(unlist(tapply(x, cycle(x), mean, na.rm = TRUE)), length(x)/12)) cycle() gets you the months from a ts so you dont have to turn it into a factor tapply will coerce cycle to a factor. Steve Mc: Mosh’s function presumes that the time series starts on month 1 and ends on month 12. If it starts in month 3 (as CERES), then a factor gives you the right result. ☆ Touche` ○ Actually steve, using factors may help me solve an interesting little problem of how to meterological years.. 39. I posted “What happens if one substitutes weekly centering instead of monthly centering?” TomVonk suggests “Btw an amusing test would be to make the analysis based on weekly anomalies (52 of them) instead of monthly.” Anyone involved in banking knows that not all months should be ascribed equal weight. Nine should be weighted 1.107, three by 1.071 and one unweighted (Of course every fourth year, except for the odd century, the weightings will be different). Of course someone will come in with “it does not matter, we are only dealing with anomalies”. But apples and pears come to mind. 40. sorry, eight not nine – I can’t count.. 41. Is the temperature of the pictures on the left side really in °C, and not in K? 42. pdtillman “RE: LITURCHUR This was amusing the first few times, but is growing tiresome (imo), and may make you look a bit silly to new visitors. We do get your point.” I would have drawn the precisely opposite conclusion. If you are right and it “was amusing the first few times” then surely a new visitor will not think it silly but will also be amused. Quite apart from this logical problem with your conclusion, I think the term ‘liturchur’ is wonderfully expressive – it highlights a mantra repeated ad nauseam by many who propound the theory of dangerous AGW, viz. if it is not in the ‘peer reviewed literature’ then it is apocryphal; and conversely, if it is in the ‘peer reviewed literature’ then it must carry with it an aura of received Apparently, one is supposed to leave one’s critical faculties at the cover sheet as one delves into the hallowed pages of the published work. Since much of the published work is drivel (an observation not restricted to climate science) this approach is ludicrous. For me, the term ‘liturchur’ says all this – it is as economical in expression as the finest poetry. □ Agreed. Let Steve be Steve. Minor stylistic criticisms come across as patronizing and self-indulgent. He’s developed a unique style over the years and knows what works. ☆ The correct spelling is ‘litchurchur’. ○ Which is important, because “church” is a substring. ○ Have to admit to not spotting this! I’d assumed it was based on the constant references to the phrase and how such things become elided over time (like “hella” for “helluva” for “hell of a”) ○ It’s chuckling. 43. There are a few others lurking within the ivory towers – iss-yew comes to mind. “There are some iss-yews to be discussed.” 44. [sorry posted this out of sequence higher up.] So what can be summarised from looking at non denatured data that record absolute physical properties? In summary, there are strong signals that may provide useful information about responses and lags in the global system. These signals do not give any obvious information about “climate feedback”. There may be something in the residuals if these major cycles are removed (as opposed to the usual abstract 1. Two clearly difinable sinusoidal cycles are evident. They seem to be attributable to orbital eccentricity and tropical seasonal variation. the temperature response of the orbital cycle is about 4x that of the tropical one. 2. There is a good S/N ratio. (approx 10:1 by eye) 3. The overall maximum extentions are about 15 W/m2 and 1.0K ; this gives an overall “slope” of 7.5 W/m2/K , this is what is found by fitting an inappropriate linear model by OLS (=7.7) 4. Fitting a model which better represents the data reveals two cycles that are synchronous but out of phase. The amplitudes are 8.8 and 12 , both individually greater than the supposed OLS 5. If we wish to regard the OLS result as representing the dominant cycle, we see that even with clean data and and a small phase lag the result is lower than the magnitude of the response. 6. The small phase lag and presence of the lesser decorrelated cycle produce regression attenuation via errors in the x variable that lead to errors if this is presumed to represent the “slope” of a linear relation. There is some valuable information about the system response here but it does not reveal anything directly that has bearing on feedback and climate sensitivity. There is good S.N but the signal is not the climate feedback. Maybe it needs to be taken further. Bottom line: don’t do regression fits of linear functions on noisy, lagged, cyclic data. 1. Two clearly difinable sinusoidal cycles are evident. They seem to be attributable to orbital eccentricity and tropical seasonal variation. the temperature response of the orbital cycle is about 4x that of the tropical one. There are two clearly definable sinusoidal cycles in the absolute flux, which yield the Lissajous type of pattern with an apparent 6-month cycle in addition to the 12-month cycle. I had speculateed that the 6-month cycle arose from something to do with the tropics (where the maximum is at the equinox and thus a 6-month cycle.) Further examination shows that the reason is rather different. Below is a plot showing absolute values of flux in (solar) and flux out (SW plus LW). Comments below. Black – Incoming solar flux (from CERES SYN); outgoing SW and LW flux (CERES EBAF). The latter version chosen for energy balance. The outgoing flux is the sum of two sinusoidals. Outgoing SW flux is in phase with incoming solar. (Albedo varies a little on an annual basis, but it is relatively constant.) On the other hand, outgoing LW flux is almost 180 degrees out of phase with incoming solar, reaching a maximum in NH summer. The amplitude of the LW sinusoidal is about 62% of the amplitude of the SW sinusoidal. LW flux appears to be a function of incoming solar flux over land (as noted in another blog discussion recently – I don’t recall which.) The combination of the two effects results in the amplitude of outgoing flux being damped from the amplitude of incoming flux. Unsurprisingly oceans are important in dampening the amplitude, as they accumulate heat in the SH summer and lose heat in the SH winter/NH summer. ☆ ============ Unsurprisingly oceans are important in dampening the amplitude, as they accumulate heat in the SH summer and lose heat in the SH winter/NH summer. Doesn’t this indicate that there is a significant capactiance and therefore lag in the system □ In comment 305964, Nick Stokes has a lm() routine with a graph with the phase in blue. If you use this, could you detrend the data, and plot the residuals as a sequence of time? 45. Here I am the amateur again: I can see an “average” figure 8 for the seasonal variation, assuming one can generate an average for each day over the yearly cycle, and generate the plot. The sensitivity to CO2 should be seen as earlier(before) years being slower to warm and faster to cool, as compared to latter(after) years where CO2 levels are higher. (Just another way of saying that the summer gets longer and the winter shorter up in Canada. The amount of that shift at any point is the integral of the CO2 “forcing” since equinox. Seems to me there is someone with better math skills than me needed to tease out that information from the “before” figure 8, “average” figure 8, and “after” figure 8. 46. Re the left figure above at Steve McIntyre Posted Sep 28, 2011 at 11:01 PM | If you plotted AMSU 400 mb deg C against AMSU 600 mb deg C, much or all of the character would vanish. This is because the seasonal rise in July Aug is not seen at higher altitudes. This is turn means that there is a particular feature of the process at/near ground level that produces the seasonal rise. If one is dealing in radiative physics (as converted from w m^-2 to K), then I am puzzled why the monthly rise vanishes with altitude, with the implication that radiation departing from the outer atmosphere has radially constant geometry all year, which one might intuitively So, what is the mechanism that gives the monthly rise close to earth surface each year, but not further from it? The answer probably stares me in the face, but I can’t see it. 47. The AMSU temperature at 600 hPa (=600mbar) is around -5°F (or -21°C = 252 K). 250°C (as in the figures) would be almost 500°F, which cannot be explained even as Trenberth’s missing heat. “Anomaly” (K or °C) and “slope” (W/m^2*K) have units, too. □ I’m with you all the way for correct units, so let’s get it right. W/m^2/K or W/(m^2*K) NOT W/m^2*K ☆ Thanks for bringing me up to date about the modern (Excel) way of defining the prorities of mathematical operations. When I was learning physics one could eg write the units of mu as Vs/ Am, and 2 equal / after each other were forbidden (one had to be longer). That does not change the fact that the way units are defined in the SI is exemplary, science at its best. 48. Maybe you are already aware of this recent reconstruction: I just found it amusing that in the abstract the authors express having great difficulty believing their own results! “At High Medieval Times, the amplitude in the reconstructed temperature variability is most likely overestimated; nevertheless, above-average temperatures are obvious during this time span, which are followed by a temperature decrease.” (Sorry for being off topic) □ Funny bits. Problems calibrating to temerpatures. Missing LIA due to human activity. I would not put too much stock in an exploratory study 49. ok so it’s all a load of fun but what does this mean for Climate Science…how many papers should be debunked before they appear in the official summary in AR5? 50. The plotting of monthly anomalie data seems to be removing information about the flux and temperature relation. Imagine you can go back to 1814 and plot a series of French artillery shots. You plot the position of where each missile lands on a grid with (0,0) being your artillery piece location, and the Y-axis being where you want the round to go. You likely will see a pattern that includes a normal distribution about some downrange value, with likely an offset in cross-range due to pointing errors and wind. Downrange errors are a function of different shot charges, elevation errors and downrange windage. If you remove the crossrange bias, you usually get the ballistic dispersion of the gun, and this is important. If you are the engineer responsible for minimizing the ballistic dispersion, this is you main interest. You might want to remove the bias by centering data about the centroid of the shots. On the other hand, if you are designing the aiming mechanism (both elevation and crossrange) then you need the bias information. You really want to include the crossrange and downrange deviation from the aim point. You center data depending on what is needed in the analysis. First plot all the data. Look at it. Think about it. Do not automatically go to the data centering button. 51. The monthly anomaly plot (regression line slope of 2.61) suggests only a weak positive feedback… (3.3/2.61) = 1.264, corresponding to 3.71/2.61 = 1.42 degrees per doubling of CO2. I find that result interesting all by itself. 1.42C per doubling is pretty low sensitivity compared to the IPCC ‘most likely’ estimate of ~3.2C per doubling. □ I don’t think it’s to be identified as feedback at all. It’s just the relation of temperature and nett flux after removal of annual periodicity. What Dessler and S&B, L&C have been doing is looking at a subset of flux that can be attributed to clouds, which may be a feedback from temperature. The big feedback is water vapor, not touched by this analysis. ☆ Nick, The value of 3.3 watts-M^(-2)/degree is the expected sensitivity absent all feedbacks (that is, in the absence of changes in water vapor, clouds, etc.) If the measured net flux TOA responds to a change in temperature by less than 3.3 that indicates the atmosphere has a positive net feedback, if more than 3.3, that indicates the atmosphere has a negative net feedback. It seem to me that how the TOA flux changes with temperature anomaly is pretty much the definition of climate sensitivity. Why do you think otherwise? ○ No, the definition of climate sensitivity is how much the temperature anomaly responds to forcing. With that expressed as W/m2, the units are °C/(W/m2). Nett flux imbalances are transient; in the long run they must balance, with or without AGW. So you can’t get equilibrium CS by looking at TOA flux. The question is, what surface T is needed to achieve balance. ○ Nick, Humm… OK I was sloppy in my word usage. I should have said “inverse of climate sensitivity” not climate sensitivity. Net flux imbalances may be transient (how could they not be with so much noise?), but it still seems to me reasonable that if a positive temperature anomaly regularly corresponds (over many years) with a positive anomaly in TOA outward flux (and vice-versa) then that correlation is at least consistent with causation. And besides, do you really mean to suggest that simply increasing temperature (all else equal) that will not increase TOA outward flux? Come on Nick, if you increase the solar intensity, which increases the surface temperature, then the TOA outward flux increases; it pretty much has to. Now just substitute radiative forcing for an increase in solar ○ “Now just substitute radiative forcing for an increase in solar intensity.” No, you can’t. Longterm, radiation out balances solar in. That’s just Cons En for the planet, and isn’t changed by forcings other than solar (unless they actually generate energy). “if a positive temperature anomaly regularly corresponds (over many years) with a positive anomaly in TOA outward flux (and vice-versa)” Putting a blanket on increases your temp. But it doesn’t increase heat flux to the environment – in fact, there is a transient decrease. But again, you just can’t have a sustained positive flux anomaly, unless solar has increased. ○ ‘Longterm, radiation out balances solar in.’ Almost. Some energy of sunlight is stored by the biotica. Maintaining an O2 atmosphere and storage of highly reduced biomass (swamps, peat and ocean organic carbon at the bottom of the oceans). Carbonates, like the chalk of the White Cliffs of Dover also took a lot of energy to deposit. Information is actually energy and the earth is a highly information rich environment. ○ Nick, Putting a blanket on increases your temp. But it doesn’t increase heat flux to the environment – in fact, there is a transient decrease. But again, you just can’t have a sustained positive flux anomaly, unless solar has increased. Sure, but I think that begs the issue a bit. Taking a blanket off decreases your temp. But it doesn’t decrease heat flux to the environment – in fact, there is a transient increase. How the system responds (in terms of changes in TOA flux) to temperature anomalies almost certainly has to be related to sensitivity to forcing. A modest increase from GHG forcing ought to be not much different in surface temperature response from a modest increase in solar forcing. If ‘natural variability’ produces temperature anomalies, and these temperature anomalies are shown to strongly correlate with anomalies in TOA outward flux, then that still seems to me pretty good information about the “sensitivity” of the system. □ No the 2.61 is not the relation of temperature and nett flux … it’s not anything at all. Fitting a linear relation to that data is worse than meaningless, it’s misleading. There is the instant assumption it shows an underlying relation. If some attempt were made to correctly do the OLS fit with error in x , one may look at inferring a relationship. It would not be positive feedback but negative. ☆ We’re at cross-purposes here. I’m not talking about the 2.61 or OLS. I’m saying that any relationship inferred between flux and temperature is not a feedback from temperature, positive or negative, without some argument as to why it can be interpreted so. I don’t think that argument has (or can) be made. 52. Interesting comments, my view is the change in TSI each season is a perfect test of whether clouds cause a negative feedback. Clearly, in summer clouds form a negative feedback and in winter their role is reversed. It seems obvious to me that the cloud feedback can be either direction subject to how far from equilibrium the system is pushed. Hence arguments about whether it is positive or negative are mute, its both and varies subject to absolute temperature and its distance from the “ideal” equilibrium temperature. If the mean temperature is increased due to increased atmospheric co2 concentrations it would initially raise temperatures only to later be offset as the mean annual feedback is pushed more towards being slightly more negative. I don’t know why climatologists find it so hard to observe the obvious, come up with logical concepts and correctly interpret useful empirical data. They prefer to look at 30 year averages than out the window to see whats happening around them! If the mean cloud feedback varies subject to deviation of mean temp from ideal, it can be measured statistically as positive today, but could turn negative in the future with increased co2. It means all this analysis becomes academic. The only way to REALLY understand how cloud feed backs will likely work is to study them on a day to day basis, not as averages over long periods assuming the relationships hold has mean temp changes. And that is the main point here, there is always the assumption that the cloud feedback is a set response and is not dynamic as is typical of our climate. 53. Nick Stokes wrote: “What Dessler and S&B, L&C have been doing is looking at a subset of flux that can be attributed to clouds, which may be a feedback from temperature. The big feedback is water vapor, not touched by this analysis.” Isn’t “cloud” a term that refers to a discrete region of the atmosphere that is saturated with water vapor? Did you mean “water vaporization,” as in evaporation? Are you saying that the important stuff happens at the surface and not at the TOA? □ I’m referring to the feedback of water acting as a GHG. T rises, more water evaporates from the ocean, specific humidity rises, and increases the IR opacity of the atmosphere, causing T to rise further. ☆ Nick, have you ever modeled a system with a positive feedback? The thing is that a positive feedback in a system isn’t stable unless their is an opposing rate with a rate constant about an order of magnitude large or is of a higher order. So if you have a first-order positive feedback you can stabilize it if the opposing flux is second order. If not, the system runs away until it saturates. Thus, one warm year, more water vapor. More water vapor, more GHG, more heat trapped. Next year warmer and so more water vapor, e.t.c. Finally the oceans boil away. Which ever way you analyze it, this runaway has not occurred so some mechanism must exist to stop this from happening. My guess is those large white fluffy things in the air that block sunlight. ○ “Nick, have you ever modeled a system with a positive feedback?” Yes, and I’ve built them. Oscillators, multivibrators… But the arithmetic here is fairly well known. As a radiating body (without allowing for feedbacks) the Earth would emit about 3 W/m2 for every 1°C rise in surface temp. So if a forcing of 3 W/m2 were imposed, a rise of 1°C would provide a balancing efflux. But wv as a GHG creates a feedback effective flux of about 1.5 W/m2 in response to that 1°C warming. With that, only a nett 1.5 W/m2 escapes for each 1°C rise, and it takes 2°C to balance the forcing. Doubling CO2 gives about 3.7 W/m2 forcing, so in the way sensitivity is usually quoted, that is about 2.4 °C per doubling. But of course there are other feedbacks, positive and negative. They would need to add up to about 3 W/m2/°C (twice wv) to create the runaway you describe. The system is non-linear, and these numbers describe gradients at a particular state. With runaway, the system moves to a new state where the feedbacks are below critical. This could be Venus-like, or something much less. There are indications that the Earth may have reached critical states in the past, and undergone limited but rapid changes. ○ Nick, “But wv as a GHG creates a feedback effective flux of about 1.5 W/m2 in response to that 1°C warming. With that, only a nett 1.5 W/m2 escapes for each 1°C rise, and it takes 2°C to balance the forcing.” Well, that depends quite a lot on the absolute temperature. Does a 1C rise in Antarctica, say from -35C to -34C, increase water vapor concentration enough to add 1.5 W/M^2 extra forcing? I kinda doubt that. ○ Steve, on the other hand, air temperatures in the high arctic get above freezing in the summer. You can see where this goes, if you extend the regions where, and durations for which, positive feedback occur as the globe warms. ○ Carrick, Sure, the warmer it is the more important water vapor feedback. I do not suggest that water vapor does not add to forcing, and it is clear temperature increases ought to on average yield positive feedback via increases in water vapor. I just don’t think it is so clearly defined as Nick suggests. The smallest rate of warming over the past 100 years (the tropics) is also where the water vapor concentration is the highest. The largest increase in average temperature is for the arctic winter, where temperatures are low enough for water vapor to be not so big a factor. ○ “The smallest rate of warming over the past 100 years (the tropics) is also where the water vapor concentration is the highest.” I think this is common but mistaken thinking. GHG warming accumulates over decades. On that time scale, the atmosphere is very well mixed. Spatially, cause and effect are separated by mixing. Whatever causes the accumulated heat to be unevenly distributed, it isn’t the location where it was generated. ○ “GHG warming accumulates over decades.” I think this is common but mistaken thinking. The entire system (even with ongoing increases in GHG forcing) is today remarkably close to ‘in balance’. If there is a current imbalance (as evidenced by ocean heat accumulation) that imbalance is at most ~0.35 watt/M^2, or ~0.15% of the average short wave solar flux absorbed by the Earth. That absorbed heat is too small to represent much ‘unrealized warming’ warming. The current temperature is quite close (probably 0.2C to 0.4C, depending on how much aerosol offset you think there is) to what it would be if there were suddenly zero net ocean heat uptake. There is very little of anything “accumulating over decades” except CO2 in the atmosphere. ○ >> except CO2 in the atmosphere However, because of Henry’s law, C02 is in a cycle. It is regularly absorbed by water in polar regions and also expelled from water in equatorial regions. It can’t really accumulate. 54. DocMartyn: The thing is that a positive feedback in a system isn’t stable unless their is an opposing rate with a rate constant about an order of magnitude large or is of a higher order. So if you have a first-order positive feedback you can stabilize it if the opposing flux is second order. What you need is a stabilizing nonlinearity in the damping sector (that is the energy loss must increase as the amplitude grows). Stefan-Boltzman does this stabilization by providing a radiative heat loss that depends (of course) on T^4. A classic system with positive feedback (negative damping) and stabilizing nonlinearity is the van der Pol oscillator. There is a variant on this that uses a time-delayed stiffness instead of the negative damping. The existence of oscillations in the climate system is (to me) evidence that there are net positive feedbacks at work on some scales,and that the system is already in a sub-critical operating 55. Carrick, if there is one thing we can be sure of it is that heat = more evaporation, pointing out the Wiki page to van der Pol oscillators does not get us anywhere. If increased water vapour = heat, then increased heat = water vapour; positive feedback. Additionally T^4 is in K so is pretty close to linear from 288-298. A feedback must exist, clouds are the obvious place to look. 56. I appreciate understatement of this: ” I find it hard to visualize a physical theory in which the governing relationship is between monthly anomalies as opposed to absolute temperature. ” Such a theory might give rise to an incredible equation like this: E(month) = kT(month)^4 57. For the record, there does seem to be some absolute temp data available for HadCRUT One Trackback 1. [...] (that is, they don’t take monthly anomalies) to show the radiative response. Steve McIntyre has explored this as well. The result is that you get higher r^2 values, but I think this may inflate the [...] Post a Comment
{"url":"http://climateaudit.org/2011/09/28/monthly-centering-and-climate-sensitivity/","timestamp":"2014-04-21T09:50:09Z","content_type":null,"content_length":"391812","record_id":"<urn:uuid:f42fd90e-83dd-4d7c-8211-95157aac7eec>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
Click here to see the number of accesses to this library. file readme for overview of fmm file decomp.f decomp.f plus dependencies for decomposes a matrix by Gaussian elimination and estimates the , condition of the matrix prec double file solve.f for solution of linear system, A*x = b, do not use if (fmm/decomp) has , detected a singularity prec double file quanc8.f for estimate the integral of f(x) in a finite interval, user provided , tolerance using an automatic adaptive routine based on the 8-panel , Newton-Cotes rule prec double file rkf45.f rkf45.f plus dependencies for Fehlberg fourth-fifth order Runge-Kutta method prec double file spline.f for compute the coefficients for a cubic interpolating spline prec double file seval.f for evaluate a cubic interpolating spline prec double file svd.f for determines the singular value decomposition, SVD, of a real , rectangular matrix, using Householder bidiagonalization and a variant , of the QR algorithm prec double file fmin.f for an approximation to the point where a user function attains a minimum , on an interval is determined prec double file urand.f for is a uniform random number generator based on theory and suggestions , given in D.E. Knuth (1969), Vol 2 prec double file zeroin.f for find a zero of a user function in an interval prec double
{"url":"http://netlib.org/fmm/","timestamp":"2014-04-16T08:02:34Z","content_type":null,"content_length":"2128","record_id":"<urn:uuid:5e35e2cf-d056-4ebc-a6ed-1a238b7400e0>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
Machinist Calculator CATEGORIES TOP DOWNLOADS NEW DOWNLOADS Related Downloads Popular Topics Machinist Calculator Machinist Calculator has been developed to quickly solve common machine shop math problems such as trigonometry, speeds and feeds, bolt circles, and much more. Get your free demo now. Apps for Android, Apple iOS and BlackBerry also available. The Machinist Calculator has been developed to quickly solve common machine shop trigonometry and math problems at a price every machinist can afford! As a machinist or CNC programmer, you often have to use trigonometry to calculate hole... Solve common machine shop and other trades math problems such as trigonometry, speeds and feeds, bolt circles, and much more. Trades Math Calculator is quick and easy to use. Get your free demo now. Solve common machine shop and other trades trigonometry and math problems at a price every trades person can afford! As a machinist or CNC programmer, you often have to use trigonometry to calculate hole positions, chamfers, sine bar stacks,... OS: Windows Software Terms: Ball Nose Cutter, Bolt Circle, Calculators, Chord Geometry, Cutting Speed, Drill Charts, Machining Math, Machinist Calculator, Machinist Helper, Milling Speed Ophthalmology Calculator is a convenient ophthalmology calculator. Ophthalmology Calculator is a convenient ophthalmology calculator. Ophthalmology Calculator is a professional ophthalmology calculator. The program contain two parts; IOL power calculator (SRK/T, Hoffer Q, Haigis, Holladay I, SRK II, and Binkhorst... Button Calculator is a small calculator. Button Calculator is a small calculator. Button Calculator is a calculator consists of buttons developed by Wonjohn Choi in G.Y.G.D. It offers buttons for 0,1,2,3,4,5,6,7,8,9, (.), +, -. *, /, =, ^, sin, cos, asin, acos, tan, atan, sqrt, cbrt, PI,... Calculator software with easy-to-use interface, various changeable beautiful skin, and paper tape feature (commercial version). Calculator software with easy-to-use interface, various changeable beautiful skin, and paper tape feature (commercial version). You can download new skins from our Web site http://www.9calculator.com. Our objective is to make the most common and... OS: Windows Software Terms: Books About Calculators, Calculator, Calculator Resources, Calculators, Design Calculator, Finance Calculator, Free Calculator Software, Handy And Powerful Calculator, Math Calculator, Old Calculators 97.7 KB | Freeware | Category: Utilities Missing Calculator 1. Missing Calculator 1.1 is designed to meet all your needs of an effective Programmer's calculator.Programmer's Hexadecimal, Decimal, Octal and Binary Calculator. The Calculator of Mac OS X has 3 modes. Basic, Scientific and Programmer. But the... OS: iPhone, iPhone OS 2.x Software Terms: About Blank Desktop Expression calculator is a free calculator for solving arithmetic expressions. Desktop Expression calculator is a free calculator for solving arithmetic expressions. The expression calculator evaluates a single line artihmetic expression such as sqrt(cos(60)+sin(30)). Expression Calculator has been developed for Win32... Software Terms: Alan Crispin, Calculator, Desktop Expression Calculator, Maths Video Lessons Advanced machining calculator and job planner for machining operations. ME Consultant Professional helps you do the engineering, estimating, planning, and programming necessary to get the most from your CNC machining centers and lathes. Using a bare minimum of input, it creates a detailed tooling plan and calculates material, machining, and overhead costs for a proposed machining operation. MEPro is a fast and accurate alternative to stacks of... OS: Windows Software Terms: Calculator, Cnc, Drill, Estimate, Lathe, Machining, Machinist, Metal, Metalworking, Mill PG Calculator (Second edition) is a power full scientific skinable calculator. PG Calculator (Second edition) is a power full scientific skinable calculator. It is an excellent replacement for standard Windows calculator. PG Calculator works in algebraic and RPN mode. It recognizes real and complex numbers and allows simple... OS: Linux Software Terms: Algebra, Calculator, Calculator For Linux, Calculator For Windows, Calculator Software, Calculators, Download Calculator, Financial, Geometry, Lin Ionic Calculator is a tiny ionic calculator. Ionic Calculator is a tiny ionic calculator. Ionic Calculator help you with ionic calculations. Basically you just pick an anion, then a cation and press the Calculate button to get the result on your screen. Fairwood Calculator is a small software simulating a hand calculator on a computer. Fairwood Calculator is a small software simulating a hand calculator on a computer. The calculator is specifically designed not to be bloat ware. This means that the calculator will do normal simple calculation very fast and efficiently with an... 3.1 MB | Commercial | US$15 | Category: Mathematics Econ Calculator Deluxe X 1. Econ Calculator Deluxe X 1.5.4 brings a highly efficient, high-quality home education program. This is the Deluxe version of Econ Calculator which is a simple to use calculator having three calculation modes, normal calculator, calculator with... OS: Mac Polynomials Calculator is a tiny polynomials calculator, offer you a calculator for polynomials. Polynomials Calculator is a tiny polynomials calculator, offer you a calculator for polynomials. You can use this utility that can do add, diff, multiplication, division and outstandingest common divisor. AIDEMs Talking Calculator is a calculator that not only show the number like normal calculator but also announced the number. AIDEMs Talking Calculator is a calculator that not only show the number like normal calculator but also announced the number. The crystal clear announcement of number is very cool ! When you are using the Talking Calculator,your friends or clients... OS: Windows 872.0 KB | Shareware | US$12 | Category: Mathematics The calculator can calculate and plot formulas with variables. The calculator can calculate and plot formulas with variables. Tooltip-Help for all functions of the calculator are available. The plotter can calculate null, intersection of functions, minima and maxima. The statisic can calculate average and... OS: Windows Software Terms: Calculator, Formula, Formula-calculator, Lucent, Lucent Calculator, Mathematic, Plotter, Statistics, Variable, Windows-calculator
{"url":"http://www.bluechillies.com/software/machinist-calculator.html","timestamp":"2014-04-20T13:59:35Z","content_type":null,"content_length":"59491","record_id":"<urn:uuid:c5c1d46c-432c-44b8-9101-71f5874893e1>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
Ronks Math Tutor Find a Ronks Math Tutor ...It's a very logical subject, and mastering geometry can help a student think in a more structured manner, and be able to present argument more effectively. I have a Master's Degree in Secondary Education, and I am certified in Pennsylvania and Virginia in Mathematics 7-12 and Mathematics 6-8. I... 20 Subjects: including calculus, algebra 1, algebra 2, grammar ...Through WyzAnt, I have tutored math subjects from prealgebra to precalculus; I have also tutored English writing, English grammar, and economics, and I am trained to tutor for standardized testing (SAT, ACT, GRE), philosophy, and music. In addition to tutoring, I work as a part-time teacher at a... 38 Subjects: including calculus, composition (music), ear training, elementary (k-6th) ...Ideally, those hours would be spread over about 8-10 weeks. However, briefer periods also can help, particularly with students who have taken the SAT previously. Emphasis is on doing many practice problems and working on strategies. 32 Subjects: including algebra 1, algebra 2, American history, biology ...The same basic method that worked for me has worked on other individuals that I tutored. I specialize tutoring in English, Math, Statistics, Biology, and Anatomy and Physiology. Additionally, I offer tutoring for GED, SAT, ACT, and GRE. 38 Subjects: including SPSS, Microsoft Excel, anatomy, ESL/ESOL ...I have taken a practice Praxis exam to learn more about the type of questions included. I learned that for all three subject areas, the type of questions are similar in structure to the type used on the SAT exams. I have extensive experience tutoring students for the SAT exams in Reading, Writing, and Math. 26 Subjects: including SAT math, logic, algebra 1, algebra 2
{"url":"http://www.purplemath.com/ronks_pa_math_tutors.php","timestamp":"2014-04-18T04:13:30Z","content_type":null,"content_length":"23358","record_id":"<urn:uuid:0ecc0dfc-b907-41f0-9441-faa0b1ce6847>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
DOCUMENTA MATHEMATICA, Vol. 17 (2012), 271-311 DOCUMENTA MATHEMATICA , Vol. 17 (2012), 271-311 Vesna Stojanoska Duality for Topological Modular Forms It has been observed that certain localizations of the spectrum of topological modular forms are self-dual (Mahowald-Rezk, Gross-Hopkins). We provide an integral explanation of these results that is internal to the geometry of the (compactified) moduli stack of elliptic curves $ \M $, yet is only true in the derived setting. When $ 2 $ is inverted, a choice of level $ 2 $ structure for an elliptic curve provides a geometrically well-behaved cover of $ \M $, which allows one to consider $ Tmf $ as the homotopy fixed points of $ Tmf(2) $, topological modular forms with level $ 2 $ structure, under a natural action by $ GL_2(\Z/2) $. As a result of Grothendieck-Serre duality, we obtain that $ Tmf(2) $ is self-dual. The vanishing of the associated Tate spectrum then makes $ Tmf $ itself Anderson self-dual. 2010 Mathematics Subject Classification: Primary 55N34. Secondary 55N91, 55P43, 14H52, 14D23. Keywords and Phrases: Topological modular forms, Brown-Comenetz duality, generalized Tate cohomology, Serre duality. Full text: dvi.gz 76 k, dvi 195 k, ps.gz 959 k, pdf 434 k. Home Page of DOCUMENTA MATHEMATICA
{"url":"http://www.kurims.kyoto-u.ac.jp/EMIS/journals/DMJDMV/vol-17/10.html","timestamp":"2014-04-18T16:07:40Z","content_type":null,"content_length":"2067","record_id":"<urn:uuid:eac6c32c-a145-43e5-b24f-8f2c42502134>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
Logical Paradoxes Logic is a powerful tool; it can be used to discern and to discover truth. Sometimes, though, this tool falls into the hands of those who would abuse it. Armed with the laws of logic and a few simple, plausible, and apparently harmless assumptions, philosophers can construct proofs of the most absurd conclusions. These proofs can give us pause; should we believe the unbelievable? This is the power of a paradox. This website is a celebration of such proofs. The most interesting philosophical arguments are those that proceed from undeniable premises, via inescapable logic, to incredible conclusions. When philosophy proves what is plausible it is mundane; it is only when philosophy appears to prove what is incredible that things really get interesting. This site explains many of the classic paradoxes, including Achilles and the Tortoise, The Paradox of the Heap, and The Liar Paradox, along with some less familiar paradoxes such as The Problem of the Specious Present. I hope that you’ll leave the site perplexed and confused.
{"url":"http://www.logicalparadoxes.info/","timestamp":"2014-04-16T13:56:38Z","content_type":null,"content_length":"7463","record_id":"<urn:uuid:6504f09a-170c-4adf-a517-1a3554632431>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Diffusion Equation Analytic Solution Model written by Dieter Roess The Diffusion Equation Analytic Solution Model shows the analytic solution of the one dimensional diffusion equation. A delta pulse at the origin is set as the initial function. This setup approximately models the temperature increase in a thin, long wire that is heated at the origin by a short laser pulse. The analytic solution is a Gaussian spreading in time. Its integral is constant, which means that the laser pulse heating energy is conserved in the diffusion process. Calculus Models are part of "Learning and Teaching Mathematics using Simulations – Plus 2000 Examples from Physics" ISBN 978-3-11-025005-3, Walter de Gruyter GmbH & Co. KG Please note that this resource requires at least version 1.5 of Java (JRE). Diffusion Equation Analytic Solution Source Code The source code zip archive contains an EJS-XML representation of the Diffusion Equation Analytic Solution Source Model. Unzip this archive in your EJS… more... download 4kb .zip Last Modified: October 25, 2011 Subjects Levels Resource Types Mathematical Tools - Differential Equations - Upper Undergraduate - Instructional Material Thermo & Stat Mech - Lower Undergraduate = Simulation - Kinetics and Dynamics = Diffusion Intended Users Formats Ratings - Learners - application/java - Educators Access Rights: Free access This material is released under a GNU General Public License Version 3 license. Rights Holder: Dieter Roess Record Cloner: Metadata instance created October 25, 2011 by Wolfgang Christian Record Updated: June 11, 2013 by Matt Mohorn Last Update when Cataloged: October 25, 2011 Other Collections: ComPADRE is beta testing Citation Styles! <a href="http://www.compadre.org/portal/items/detail.cfm?ID=11522">Roess, Dieter. "Diffusion Equation Analytic Solution Model."</a> D. Roess, Computer Program DIFFUSION EQUATION ANALYTIC SOLUTION MODEL (2011), WWW Document, (http://www.compadre.org/Repository/document/ServeFile.cfm?ID=11522&DocID=2446). D. Roess, Computer Program DIFFUSION EQUATION ANALYTIC SOLUTION MODEL (2011), <http://www.compadre.org/Repository/document/ServeFile.cfm?ID=11522&DocID=2446>. Roess, D. (2011). Diffusion Equation Analytic Solution Model [Computer software]. Retrieved April 17, 2014, from http://www.compadre.org/Repository/document/ServeFile.cfm?ID=11522&DocID=2446 Roess, Dieter. "Diffusion Equation Analytic Solution Model." http://www.compadre.org/Repository/document/ServeFile.cfm?ID=11522&DocID=2446 (accessed 17 April 2014). Roess, Dieter. Diffusion Equation Analytic Solution Model. Computer software. 2011. Java (JRE) 1.5. 17 Apr. 2014 <http://www.compadre.org/Repository/document/ServeFile.cfm?ID=11522&DocID=2446>. @misc{ Author = "Dieter Roess", Title = {Diffusion Equation Analytic Solution Model}, Month = {October}, Year = {2011} } %A Dieter Roess %T Diffusion Equation Analytic Solution Model %D October 25, 2011 %U http://www.compadre.org/Repository/document/ServeFile.cfm?ID=11522&DocID=2446 %O application/java %0 Computer Program %A Roess, Dieter %D October 25, 2011 %T Diffusion Equation Analytic Solution Model %8 October 25, 2011 %U http://www.compadre.org/Repository/document/ServeFile.cfm?ID=11522&DocID=2446 : ComPADRE offers citation styles as a guide only. We cannot offer interpretations about citations as this is an automated procedure. Please refer to the style manuals in the Citation Source Information area for clarifications. Citation Source Information The AIP Style presented is based on information from the AIP Style Manual. The APA Style presented is based on information from APA Style.org: Electronic References. The Chicago Style presented is based on information from Examples of Chicago-Style Documentation. The MLA Style presented is based on information from the MLA FAQ. Diffusion Equation Analytic Solution Model: Is Based On Easy Java Simulations Modeling and Authoring Tool The Easy Java Simulations Modeling and Authoring Tool is needed to explore the computational model used in the Diffusion Equation Analytic Solution Model. relation by Wolfgang Christian See details... Know of another related resource? Login to relate this resource to it. Related Materials Similar Materials
{"url":"http://www.compadre.org/portal/items/detail.cfm?ID=11522&Attached=1","timestamp":"2014-04-17T04:13:20Z","content_type":null,"content_length":"36720","record_id":"<urn:uuid:c6e572b9-4260-42e8-b32f-6f28d96fe269>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
Microwave filters Updated April 18, 2014 Click here to go to a page that explains filter schematic symbols (link fixed thanks to Rhian!) Click here to go our page on lumped element filters Click here to go to a page on filter group delay Click here to go to a page on diplexers Click here to go to our page on YIG components A note from the Unknown Editor: many textbooks have been devoted to filter design. We don't intend to assimilate all of this knowledge here, our goal is, as always, to provide you with a basic understanding of the subject and hook you up with some vendors that can help you out. For the near future we will concentrate mostly on planar band-pass filters, then follow up with some lumped element examples. New for August 2012: Go to our download area and grab a free copy of Mattaei, Young and Jones "Microwave Filters, Impedance-Matching Networks, and Coupling Structures", which sells for $114 on Amazon Got some filter data you'd like to share with us? Shoot it in! Below is a clickable outline for our filter discussion (some stuff is still missing!) Common filter terminology Absorptive versus reflective filters Low-pass, high-pass and band-pass Multiplexers (separate page) Diplexers (separate page) Reentrant modes Resonances of RLC circuits Parallel LC resonance Series LC resonance Quality factor Order of a filter Poles and zeros Stopband attenuation Group delay flatness Some seemingly simple filter examples RF choke DC return DC block (moved to a new page) Bias tee (moved to a new page) EMI filter Filter response types Lumped element filters (separate page) Group delay of filters (separate page) Planar resonator filters for microstrip or stripline (coming soon on a separate page) Will include: topologies, design considerations, tolerance effects, cover effects for microstrip filters, design equations, detailed design procedure, and references. Waveguide filters - how about someone out there contribute on this topic for us? Commonly used terminology for microwave filters Filters are typically two port networks. They rely on impedance mismatching to reject RF energy. Where does all the energy go? That's up to you as a designer to figure out, and a big reason why filters are typically located between attenuators or isolators. Our page on transmission line loss will explain the difference between attenuation and rejection. Absorptive versus reflective filters Filters that are matched outside of their stop band are called "absorptive filters". One way to make a reflective filter into an absorptive filter is to add an isolator to the filter's input. Another way to do this is to use a diplexer and terminate the unwanted band. Lowpass filter (LPF) This is a filter that passes lower frequencies down to DC, and rejects higher frequencies. A series inductor or shunt capacitor or combination of the two is a simple low-pass filter. Yes we will add some figures here soon!!! High-pass filter (HPF) The opposite of a low pass filter, an HPF passes higher frequencies and rejects lower ones. A series capacitor or shunt inductor or combination of the two is a simple high-pass filter. Band-pass filter (BPF) A band-pass filter has filter skirts both above and below the band. It can be formed by cascading a LPF and HPF, or using resonant structures such as a quarter-wave coupled lines. Content has been mnoved here. Reentrant modes Sometimes when you design a band-pass filter for 10 GHz, it also passes RF at 20 GHz or 30 GHz or 40 GHz. These are called reentrant modes. Below is an example of an coupled-line filter, It uses quarter-wave sections as couplers, they couple similarly at their 3/4 wave, 5/4 wave, etc. frequencies. These are the third, fifth etc. harmonic frequencies. In the figure you can see the passband at 10 GHz, and the reentrant mode at 30 GHz (3/4 wave frequency). Often band-pass filters are followed by a low-order low-pass filter to dispose of the reentrant modes. Resonance of RLC circuits Resonance is a term used to describe the property whereby a network presents a maximum or minimum impedance at a particular frequency, for example, an open circuit or a short circuit. Resonance is an important concept in microwaves, especially in filter theory. One simple form of resonator are lumped element RLC circuits, sometime called "tank circuits". Why the term "tank?" Because an LC resonator can store energy in the form of an AC sinewave, much like a pendulum "stores" gravitational energy. The resonance of a RLC circuit occurs when the inductive and capacitive reactances are equal in magnitude but cancel each other because they are 180 degrees apart in phase. When the circuit is at its resonant frequency, the combined imaginary component of the its admittance is zero, and only the resistive component is observed. The sharpness of the minimum depends on the value of R and is characterized by the "Q" of the circuit. The formula for resonant frequency (in Excelese) of an LC circuit is: where F is in GHz, L is in nano-Henries and C is in pico-Farads. Click here to go to our resonant frequency calculator! Parallel LC resonance Resonance for a parallel RLC circuit is the frequency at which the impedance is maximum. Plotted below is the special case where the resistance of the circuit is infinity ohms (an open circuit). With values of 1 nH and 1 pF, the resonant frequency is around 5.03 GHz. Here the circuit behave like a perfect open circuit. Note that for R=Z0, at the resonant frequency the response would hit the center of the Smith chart (the arc would still start at the short circuit but would be half the diameter shown). At zero GHz (DC) as well as infinite frequency, the ideal parallel LC presents a short │Parallel Resonance, C=1pF, L=1nH, R=open circuit│ Series LC resonance Resonance for a series RLC circuit is the frequency at which the impedance is minimum. Plotted below is the special case where the resistance of the circuit is infinity ohms (an open circuit). With values of 1 NH and 1 pF, the resonant frequency is around 5.03 GHz. Here the circuit behave like a perfect short circuit. Note that for R=Z0, at the resonant frequency the response would hit the center of the Smith chart. At zero GHz (DC) as well as infinite frequency, the ideal parallel LC presents a open circuit. │Parallel Resonance, C=1pF, L=1nH, R=short circuit│ Some simple filter examples Sure, these look like very simple designs. But nothing is ever as easy as it seems in microwaves! RF choke An RF choke is what engineers call something that doesn't pass an RF signal, but allows a DC or low frequency signal to pass through. Series inductors are often used as RF chokes, as well as quarter-wave structures like the one shown below. Here a capacitor forms an RF short circuit, which is transformed to an open circuit at the input. Such a capacitor is called a "bypass capacitor". A high-value resistor can also be used to form an effective choke. If the resistance is high compared to your transmission line's characteristic impedance, it chokes off the RF. DC return This is used to add a DC ground to an RF line. For example, in a PIN diode switch, you need a path for a series diode's current to return to. DC block A DC block is nothing more than a capacitor that has low series reactance at the RF frequency, and allows you to separate DC voltages along a transmission line. A parallel coupled line can also serve as a DC block. DC blocks can be placed in the "hot" conductor of a transmission line such as coax, or the ground plane, or both, as shown below. Many vendors offer coaxial DC blocks in all three arrangements. When would you want a DC block in the ground plane? Perhaps you want to inject a voltage onto the source of a shunt FET, which is grounded to your fixture. Users of this type of DC block must be aware that their equipment could provide a voltage when they touch it. Careful where you drop that wrench! EMI filter EMI stands for "electromagnetic interference", but you'd already know that if you studied our Acronym Dictionary. EMI filters are used to keep stray signals from polluting your design. Commonly known as "feedthroughs", the basic EMI filter is a low-pass filter, and uses a combination of shunt capacitance and series inductance to prevent EM signals from entering your housing our enclosure. Filter response types Chebyshev (equal-ripple amplitude) The Chebyshev filter is arguably the most popular filter response type. It provides the greatest stopband attenuation but also the greatest overshoot. It has the worst for group delay flatness (OK for CW applications such as a frequency source). Check out our page on lumped-element filters. You should also check out the instruction page for our our free download for designing three, four and five-pole Chebyshev filters! Bessel-Thomson (maximally flat group delay) Best in-band group delay flatness, no overshoot, lowest stopband attenuation for given order and percentage bandwidth (ideal for receiver applications such as image-rejection filters). Butterworth (maximally flat amplitude) Best in-band amplitude flatness, lower stopband attenuation than Chebyshev, better than Chebyshev for group delay flatness and overshoot (usually used as a compromise). All of the above are realizable in parallel-coupled, direct-coupled, and interdigital filter topologies. This filter provides a Gaussian response in both frequency and time domain. It is useful in IF receiver matched filters for radar.
{"url":"http://www.microwaves101.com/encyclopedia/filters.cfm","timestamp":"2014-04-19T10:18:26Z","content_type":null,"content_length":"29877","record_id":"<urn:uuid:8317581b-5821-415b-b0cf-e7f4aed8ff85>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
It's the Friday Puzzle! It’s the Friday Puzzle! Please do NOT post your answers, but do say if you think you have solved the puzzles and how long it took. Solution on Monday. John weights much more than 20 stone. He goes to the shops to buy some scales, but the scales there only go up to 20 stone. However, he comes up with a way of accurately weighing himself every day using the scales. What was his plan? I have produced an ebook containing 101 of the previous Friday Puzzles! It is called PUZZLED and is available for the Kindle(UKhere and USA here) and on the iBookstore (UK here in the USA here). You can try 101 of the puzzles for free here. 52 comments on “It’s the Friday Puzzle!” 1. About one second. Easy. :) 2. Same here, straight away. 3. I think this one is very very easy. Or I’m missing something. I suspect the latter is more likely. 4. A few second thoughts. 5. Immediately, but I’m wondering if there’s another solution… 6. He ‘weights’ much more than…? Is this part of the puzzle? 7. About ten seconds, but it would be easier to buy a set is scales which cover John’s weight range. 8. Also he goes to “the shops” — perhaps one shop is allowed to sell only one unit to a person! 9. How much is “much more”? Up to 40 stones would be easy. Up to 80 kind of quirky. Above 80 I don’t have a solution. □ I think I’ve got the same solution as you. Though I think it would actually be quite a bit trickier in practice than it appears in theory. Alternatively, there’s always the method my brother used to weigh his luggage when he didn’t have any scales at all. 10. LOSE SOME BLOODY WEIGHT!!! 11. Too easy. Sort it, Wiseman. □ Nice one with the politeness and respect there, Eddie. □ Why not ask for a refund? Oh wait. 12. Two Seconds ;) 13. To spice things up: John weighs around 100 stones, and can only buy one scale. □ Has he got a swimming pool? □ Has he got a Saturn 5 rocket? 14. Yep, too easy, Richard! Solved it as soon as I read it. 15. Well, I think I have it, but it seems too easy to really be that easy. 16. Several solutions leap to mind but I guess he’s not planning to disembowel himself or waste the dwindling volumes of helium available so I think he’s going to pick the easy method. 17. As long as he doesn’t weigh more than 40 stone (at which point he’d probably have problems getting to the shops, so would have to order the scales online!), I have a potential solution which is easy to implement. 18. I have two solutions, one will work even if he is well over 40 stones, but I guess this is unlikely as he would never have made it to the shops in the first place. 19. found three solutions up to now, … first in few seconds the ohters a few seconds later… 20. got it quick 21. Found three solutions. Not sure I’ve got “the” answer. 22. What sort of a stupid question is this? Is the answer supposed to be something other that the bleeping obvious? 23. I predict a diet. □ I predict cardiovascular disease and diabetes. □ :: golf clap :: □ Golf clap. Haven’t heard of that disease before. Do you get that by swinging your wedge about in a bunker? 24. Even I found it easy. Say my answer Monday and check with you brainy ones 25. I’ve got a few answers, depending on just how dedicated John is. 26. No problem. Just a few more supplies to get–a good friend with a strong stomach and an open calendar and some heavy-duty trash bags. 27. Over here weight is measured in “pounds.” (And to be even MORE convoluted, it’s abbreviated as Lbs.) I guess we’re worth our weight in British currency. □ Yeah, well, if you stupid Americans could just learn to use the metric system, you’d be measuring in stone just like the rest of the world. Wait a minute . . . □ If he weighs much “more than 20 stone” he probably IS American. □ Over where? 28. Beyond the stones, I’m thrown by all this plural ‘shops’ and ‘scales’ litany of Brittany. I’m going to drive to the store, take the elevator up, buy a scale, and bring it home in the trunk of my car. (If they have one that can handle my stones, that is!) ;) □ LOL! My sentiments exactly! I’m still wondering how much a stone is. Must go google it to find out just how fat this dude is! 29. I know how he can weigh himself. 30. My first thought was he could tie helium balloons to his arms… O dear… I’ve got the answer now. :-/ 31. Piece of cake-chops his legs off and then ways every bit individually lol 32. I got it in a moment. 33. there are many possible answers to this. one is the use the archimedes principle. weigh himself in a bath of water and then subtract the weight of the water. this is easier in metric as one kilo is the same as a liter. or he could do the obvious and weigh himself twice. □ …please do NOT post your answers… □ Haha, this is not so exactly anyway ! 34. i definitely got two answers ! 3secs and 3mins 35. clue please? □ for a clue…..imagine the degenerate case where he has a scale that can only weight zero stones (or zero pounds or zero kilogrammes. in this case the conversion is easy, not like with from the soln you have there now imagine he can weight to to twice his weight (not half) on a single scales purchased from the shops. now what is the general case? (perhaps a step is to also imagine he weights zero). 36. Found a solution where John could weigh theoretically any number of stones, let’s say 1000 stones, but this would involve A LOT of scales and I am not sure if this would statically work. 37. I think I got it
{"url":"http://richardwiseman.wordpress.com/2013/03/15/its-the-friday-puzzle-203/","timestamp":"2014-04-18T16:39:45Z","content_type":null,"content_length":"109738","record_id":"<urn:uuid:3cfbdfa1-8949-4292-b641-fad6f19f2159>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00441-ip-10-147-4-33.ec2.internal.warc.gz"}
Gary's Nautical Information The middle or mid latitude (Lm) between two places on the same side of the equator is half the sum of their latitudes. Mid latitude is labeled N or S to indicate whether it is north or south of the equator. The expression is occasionally used with reference to two places on opposite sides of the equator, when it is equal to half the difference between the two latitudes, and takes the name of the place farthest from the equator. This is misleading, as it lacks the significance usually associated with the expression. When the places are on opposite sides of the equator, two mid Latitudes are generally used, the average of each latitude and 0 degree's. Longitude is the arc of a parallel or the angle at the pole between the prime meridian and the meridian of a point on the earth, measured eastward or westward from the prime meridian through 180. It is designated east (E) or west (W) to indicate the direction of measurement. The difference of longitude (DLo) between two places is the shorter arc of the parallel or the smaller angle at the pole between the meridians of the two places. If both places are on the same side (east or west) of Greenwich, DLo is the numerical difference of the longitudes of the two places; if on opposite sides, DLo is the numerical sum unless this exceeds 180, when it is 360 minus the sum. The distance between two meridians at any parallel of latitude, expressed in distance units, usually nautical miles, is called departure (p, Dep.). It represents distance made good to the east or west as a craft proceeds from one point to another. Its numerical value between any two meridians decreases with increased latitude, while DLo is numerically the same at any latitude. Either DLo or p may be designated east (E) or west (W). The basic equations for mid-latitude sailing are: p = DLo (in minutes of arc) x cos Lm. C = tan -1 (p/l where l = difference of latitude in minutes of arc. Distance = l x sec C
{"url":"http://nauticalinformation.blogspot.com/2008/01/explaination-of-mid-latitude-sailing.html","timestamp":"2014-04-21T07:19:13Z","content_type":null,"content_length":"93608","record_id":"<urn:uuid:345e127a-77b0-4e53-95af-416cb31943ac>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
categories: Topos theory and large cardinals [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] categories: Topos theory and large cardinals Andrej Bauer asked whether large cardinals other than inaccessible ones have a natural definition in topos theory. Indeed, like most questions of set theory which have an objective content, this too is independent of the a priori global inclusion and membership chains which are characteristic of the Peano conception that ZF formalizes. Various kinds of "measurable" cardinals arise as possible obstructions to simple dualities of the type considered in algebraic geometry. Actually, measurable cardinals are those which canNOT be measured by smaller ones, because of the existence on them of a type of homomorphism which is equivalent to the existence of a measure in the sense of Ulam. Specifically, let V be a fixed object and let M denote the monoid object of endomorphisms of V. Then the contravariant functor ( )^V is actually valued in the category of left M-actions and as such has an adjoint which is the enriched hom of any left M-set into V. The issue is whether the composite of these, the double dualization, is isomorphic to the identity on the topos; if so, one may say that all objects are measured by V, or that there are no objects supporting non-trivial Ulam elements. In any case, the double dualization monad obtained by composing seems to add new ideal Ulam elements to each object, i.e. elements which cannot be nailed down by V-valued measurements. Since fixed points for the monad are special algebras, and since algebras are always closed under products etc., it should be possible to devise a very natural proof based on monad theory that the category of these non-Ulam objects is itself a topos and even "inaccessible" relative to the ambient topos. Why is the above definition relevant? The first example should be the topos of finite sets with V a three-element set. There the monad is indeed the identity, as can be seen by adapting results of Stone and Post. Extending the same monad to infinite sets, we obtain the Stone-Czech compactification beta. The key example is a topos of sets in which we have V a fixed infinite set. As Isbell showed in 1960, the category contains no Ulam cardinals in the usual sense if and only if the monad described above is the identity. Further examples involve the complex numbers as V, where actually M can be taken to consist only of polynomials, with the same result; this example extends nicely from discrete sets to continuous sets, usually discussed in the context of "real compactness". Another kind of example concerns bornological spaces. The result always seems to be that the lack of Ulam cardinals is equivalent to the exception-free validity of basic space/quantity dualities. Ulam (and other set theorists since) usually in effect phrase the construction in terms of a two-element set V equipped however with infinitary operations. Isbell's remark shows that equivalently an infinite set equipped with finitary (indeed only unary) operations can discern the same distinctions between actual elements as values of the Dirac-type adjunction map and ghostly Ulam elements on the other hand. F. William Lawvere Mathematics Dept. SUNY Buffalo, Buffalo, NY 14214, USA 716-829-2144 ext. 117 HOMEPAGE: http://www.acsu.buffalo.edu/~wlawvere
{"url":"http://facultypages.ecc.edu/alsani/ct99-00(8-12)/msg00128.html","timestamp":"2014-04-21T08:24:40Z","content_type":null,"content_length":"6560","record_id":"<urn:uuid:da355fff-6305-477e-808d-343f20183cb5>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
(Angle-Angle) Similarity AA (Angle-Angle) Similarity In two triangles, if two pairs of corresponding angles are congruent, then the triangles are similar. (Note that if two pairs of corresponding angles are congruent, then it can be shown that all three pairs of corresponding angles are congruent, by the Angle Sum Theorem.) In the figure above, since Important Note: Similar figures: When figures have the same shape but may be different in size, they are called similar figures. Congruent figures: Figures that are the same size and the same shape are congruent figures.
{"url":"http://hotmath.com/hotmath_help/topics/aa-similarity.html","timestamp":"2014-04-17T06:45:10Z","content_type":null,"content_length":"4420","record_id":"<urn:uuid:4b9e1b1a-35e2-4ac1-81ef-aeb108cdb288>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
free simplify exponential expressions worksheets Google visitors found us today by entering these math terms: "balancing equations" + "math" + "ppt", balancing chemical equation worksheets, how do you find a power in algebra, sixth root of -1. Second order system by using matlab, system of linear inequalities free online calculator, parabola basics, prentice hall conceptual physics online answers, convert scales into square metres, how to factor polynomials using ti 83 plus. Solution of graph of hyperbola of two sheets, a chart or table of cube in algebra, convert decimals to radicals, Precalculus Answers. Adding negative and positive numbers worksheet, 9th grade algebra finding the numbers, mcdougal littell world history reading study guide chapter 21. Solving unknown variables with ti-89, mcdougal littell worksheets, quadratic equations and complex numbers powerpoint, LCD word problems about speed and time. Using math for love, solutions to exercises using harmonic oscillator differential equation, social skills activities worksheets for acknowledging students when raising their hands, variable percentage formulas, struggling with algebra. Third grade aljebra topics, solve by elimination online calculator, algebra for college students, 8th edition 8.6 combining functions test answers, factoring trinomials calculator, ode45 matlab second order, homework solutions "environmental engineering and science" third ed masters problem 6.10. Gallian chapter 5 solutions, factoring problem solver step by step for free, worksheet on solving equations with two variables answers, cubed roots on a ti-84 calculator'. Algebra and squaring fractions, divide and simplify calculator, least to greatest solver, maths questions level 5-7, rational root solver, laplace transform software, year 4 optional sats. Rearranging formula+free worksheets, real numbers decimal e+5, equation calculator substitution, tricks for +simplification problems in maths, how to solve homogeneous diff eq problems, multiplication and division of rational functions. Strategies for graphing in math finding the slope, integrated algebra solver, balancing algebraic equations applet. Nonhomogeneous partial differential equation, square numbers worksheet, long division of polynomials calculator program, online examination instructions, downloadable math workbooks, printable revision sheets. Adding radicals calculator, stretch factor in math, equivalent percentages, quadratic expression factoring calculator, interactive activity on radicals, maths rationalization activities. Slope intercept form worksheets, 5th grade printible free math papers, 8 standard question paper, percentage equations, LCD Fraction Calculator, solve rational equations online, long division online No fractions in the radicand, addition ks2, teaching like terms, why we expand brackets in maths, simplifying equations with variables calculator, easy formulas to calculater binomial coefficients. Monomial simplifier, circle graph, java add sum of prime numbers, factor the trinomial calculator, algebra help free with inverse graphs. Solve my quadratic equation factor, slope intercept form worksheet, software for algebra, coupon code for algebrator, ti 84 plus interpolation, foil equation online, maths translation worksheet ks2. Free non linear equation solver, help solve parabolas, Mean Substitution, monomial solver, middle school math with pizzazz book d answer key. Lcm polynomial calculator, foundations for algebra year 2 chapter CT answers, grade one structures lesson, converting second order partial differential equation to first order, symbolic method for solving a linear equation, how to solve radicals by finding conjugate. Multiplying and dividing integers riddles, radical expressions solver, glencoe math answers course 3, sats papers 1998, ti=89 log, 7th grade math formula sheet, differential equations calculator. Free online math answers algebra, real life applications of compound interest, 4th grade algebra worksheets, integral substitution theorem. Negative number worksheet, glencoe algebra 2 online textbook, " trigonometry TRIVIAS", online balancing equations calculator. Least common denominator fractions calculator, online t 89, ti-89 binomials, cube root formula, factor out problems, roots 4-th polynomial applet symbolic, order fractions from least to greatest (8th grade) solved - exercises "physics", fifth grade algebra worksheets free, vertex form calculator. Quadratic projects, polynomial equation solver, Symbolic method, ks3 level 7 algebra worksheets, simplifying complex fractions calculator, divisors calculator. I think of a number worksheet, simultaneous equation solver excel, activity to teach adding and subtracting integers. High school math problems solver, 8 grade Algebra free download, 8th Grade Alabama SAT Math Practice Test, ALGEBRA, SOLVING RATIO EQUATIONS CALCULATOR, what are the steps to solving a logarithmic equation graphically, algebra baldor download. Free word problem solver online, adding combinations, rudin real complex analysis solutin, getting rid of radical in the numerator, circle graphs 6th grade, physics vectors graphs. "how to simplify radicals, beginners fractions worksheets, least common denominator algebra 2, need help to see if go correct answers for equations and inequalities. Solving Square Root Problems, holt mathematics grade 9, worksheet for expanding brackets, how do turn decmials into radicals, creative way to teach multi-step equations. 6th grade math taks review worksheets, perform indicated operations and combine like terms, slope of quadratic, how to simplify numbers with a decimal to the power. Download aptitude test book, polynomial factor calculator, solving by substitution calculator, binomial expansion with fractions, Fractions for Beginners, how to solve special products. Parabola year 11 maths, mastery objectives, math multiplyin intergers, negative, highest common multiple 52, multiplying and dividing radical expressions on TI-89, Math: Volume Worksheets, online solving polar coordinates equation. Simplifying exponents calculator, grade 8 algebra worksheets, mcdougal littell algebra 2 teachers edition, Is there a difference between solving a system of equations by the algebraic method and the graphical method, factoring fractions with variables calculator, free online college algebra problem solver, abstract algebra an introduction solution. Matlab function roots multiple variables, solving simultaneous equations+difficult, simultaneous equation solver ecel, 5th grade algebra worksheets, exponent calculator, divide expressions calculator, arithmetic tutor. Multiplying integers worksheets, how to do optimization problems calculus, how do you work factorial math problems, fraction worksheet, how long is a linear metre, activity for multiplying and dividing integers, second order nonhomogeneous linear differential equations. Math investigatory projects, simplifying when d does not equal zero, online ti 85, eoc math of 2010 practice, math adding rational expressions, subtracting fractions positive negative. Powers and roots ppt, factoring foil calculator, 6 square root of 6 plus 4 square root of 6 in radical form, writing missing values in perfect square trinomials, hard maths problems year 9, elimination calculator. Algebra substitution problems calulater, solve simplify radical expressions, prentice hall mathematics pre algebra practice workbook, topic for investigatory project in mathematics -cbse, expanding linear expressions calculator, exponents grade 10. Property of radicands, roots of 3rd order polynomial calculator, ged math worksheets, square root simplifier not in decimals, polar graph calculator online, quadratic c++ using class, partial fractions differential equations. Solve algebraic expressions online, matrices problems 10th class, solutions to how to complete the square with a fraction, multiple variable equations, fractional exponents, calculator factor program, Free online secondary 1 test papers. Simultaneous equation solver matlab, 9th grade math worksheets, 7th grade math equation worksheets. Exponential calculator, Trigonometry Functions Worksheets, how to divide rational exponents. Math step functions.ppt, how to factor with ti-84, solving subtraction equations with fractions, solving polynomial function + program. Determine the domain, ppt algebraic simplification, 7th class maths poems, top secondary schools exam papers. The formula for square root, solve equations by elimination calculator, extrapolation formula. Integrate second order differential equation maple, changing percentsto a fractionor mixed number, linear equations powerpoint, basic absolute value. Factor on ti-84, square root problems with letters, how to write a number using the sum of integers, integers games multiply, the age of jackson chapter 12 test mcdougal littell inc, most difficult math test, solving the radical equation calculator. How to do cubic root on ti 83, complex numbers online calculators, multiplication 1-10 worksheet, combine pictures on graphing calculator, simplified radical form, quadrilaterals free worksheets, tutorials factor polynomials into two bionomials. Solve complex equations online, scatter plot worksheet, simplify exponential expressions worksheet. 3rd grade algebra, "greatest common divisor" decimals, rules for adding and subtracting integers, polynomial equation in C++, square root expression calculator, simplifying trig identities 6th grade pre algebra worksheets, printable linear equation worksheets, graphing calculator+rational expressions, java program to calculate solution set of equation, ks4 solving linear equations worksheets, solving system of equations by substitution calculator, quizz for calulating. Multiple choice linear equation questions, quadratic word problems, airbag balanced chemical equation, can ti 89 solve complex expression. Fun ways to solve linear equations, adding radical expressions calculator, adding math formulas to the ti 83. Lineal metre, simplify complex radical, what program solves math equations, x th matric maths. Online graping, how to cheat on algebra 2 in plato, solving math problems, algebra quadradic factoring calculator, stretch factor using quadratic equations, difficult dividing rational expression, online calculator that has sqaure root. Pictures of parabolas, year nine math, Babylonians learned how to solve quadratics in radicals., algebra problems for beginners, calculating velocity worksheets, substitution method calculator. "examples of linear programming", ellipse problems and solution, math poems about order of operations, runge-kutta matlab. Algabranator, circle graph worksheets 7th grade, algebra 1 mcdougal littell answers, how to use matlab to solve simultaneous nonlinear equations, free math problem answers algebra solutions equations, find all numbers for which the rational expression is undefined solvr, java code to evaluvate equation. Real life examples on arithmetic progression, expression calculator, conics calculation online, radical form of numbers. College algebra word problem solver, l'hopital calculator, explanation of rearranging equations, algebrator solving matrices, solving triangle equation using mathcad, free printable test papers. Substitution method, max value inhomogeneous wave equation, glencoe mathematics algebra 1 answer key, math test for laws of exponents, abstract algebra dummit foote solutions. What is multiplying integers and give some example of explanation, solve for x worksheet exponents, graph hyperbola from its equation online, base6 calculator, dividing radical expressions Radical functions and rational exponents solver, using algebra, quadratic factorer, inverse operations addition and subtraction problem solving worksheets, foil calculator. Best physics formula sheet, math trivia with answers geometry, write 2/7 as decimal on calcautor, Math AND Volume Dimensional Changes, pre algebra worksheets for seventh graders, dividing remainders give me the answer, calculating simultaneous equations ti-89. Glencoe math worksheets, negative fractions dividing learn, swf math edu usa. Free plato math program, algebra homework helpers, circle graph worksheet, number line solution set for algebra equations, programs for the ti 84+ factoring trinomials and polynomials, prerequisite principles for solving word problems, statistics for beginners online. Linear equations AND LIFE APPLICATION, solving multivariable polynomial equations, line graph exercise activities for 6th graders, simple algebra ks2, Aptitude questions and answers. Polynomial quiz, college algebra programs, Orleans Hanna Test study guides, where can i buy math assessment software, grade 9 math sheets, sum and product of roots calculator, Least Common Denominator Calculator. My TI 83 calculator shows an R instead of an x, algebraic substitution method, solving for square roots in matlab. Cuboid nets for seventh grade geometry, rational expressions lcd calculator, TI - 86 regular dimensions, third degree polynomial calculator, how to memory formula of optional maths, calculator radicals, answers to sheet 50 for geometry by houghton mifflin. Exponential long division, free online graphing calculator with table, math helper college algebra software, solve the equation by graphing: cos(x)=e^x, u-substitution finding zeros. Simultaneous equations solving in scientific calculator without graphing, example of a hyperbola problem with solution, consumer arithmetic worksheets, grade nine math, college algebra logarithmic equations/problem solver, free math worksheets for quadrilaterals. Sample Kumon Worksheets, ALGEBRATOR DOWNLOAD, cramer rule for nonlinear, Linear Algebra step by step explanation, how to use the cube root button on a calculator, online grapher plot point. Graph ellipse calculator, explain multiplying radical expressions, algebator, geography worksheet year 1, laplace transform calculator, convert to radical form calculator. Algebraic substitution, calculus, mcdougal littell geometry chapter 7, trigonometry in daily life, algebra discriminant problems. Solving inhomogeneous second order linear equations step by step, Free Online Square Root Calculator, equations with variables in the denominator, quadratic equations on a casio calculator, 11th grade math worksheets, sample worksheet balancing equation, McDougal Littell Math Course 3 chapter 10. Grade ten math worksheets, solving equations online calculator, how to add square roots, step by step math proof solver, problems and answers of polynomials, eigenvalue using ti84, grade 1 math exam. Larson quadratic functions worksheets, hungerford algebra solution, expanding cubed, simplify algebra equations online, TI 83 mixture problems, lcm and gcf of given numbers source code. A transition to advanced mathematics solutions 6th edition, adding and subtracting positive and negative numbers worksheets, how to factor expressions with square roots. Algebra 1 teacher edition iowa, online fraction calculator that explains it, distributive property with fractions, simplify radical expression calculator, algebra one story problems help free online tutor, balancing act algebra worksheet by peter gordon, automatic solver for expressions in terms of i. Algebra power of, how to do cubed on calculator, cube problems in aptitude, divisor calculature, resolving 5 grade balancing equeation. Co-ordinate pictures ks2, multiply expressions calculator, algebra trivia, rational expressions equations calculator, simplifying algebraic expressions calculator, 3rd grade sat prep, how do you figure squre footage on a home. Myalgebra.com, math 24 variable solver, uses of trigonometric function in our life, fifth grade maths practice printouts, example problems in solving parabola, Hard math Algabra questions. Principles of mathematical analysis solutions, online saxon math printables, ALGEBRATOR, quadratic equation multiple variables, algebraic formulas. Convert non-linear absolute values to linear term, worksheet on adding and subtracting integers, solve logic problem with matlab program, free online math equation solver. Mcdougal littell pre algebra textbook used cheap, quadratic completing the square year 9, LCD WORK SHEET, algebra ii, sequences and series worksheets, strategies for problem solving workbook answers, system of equations variable in denominator. Lcd of rational expressions calculator, worksheet simplifying square roots, math sheets for a 8th gradr in WI, hardest math problem in the world funny, equations 5th graders can do. Distributive property worksheet, root and index numbers numbers worksheet year 7, setting up formulas in t1-83 calculator free, adding and subtracting fractions with negative numbers worksheets. Solving complicated rational expressions, square root rules, automatic solving simultaneous equations, year 4 optional sats paper, balancing chemical equations worksheet, add and sub measurements. Rational expression in lowest terms calculator, multiplying fraction, solve math area and perimeter, printable maths quizzes with answers, graphing worksheets, 11 year old maths test free online, graph calculator ellipse. How to do algebra for beginners, subtraction in excel, liner and non liner, linear algebra solution, addition fractional exponent. Factorise calc, Free Algebra Problem Solver, algebra 101, least common mulitple java. Mcdougal littell geometry 2004 online, solve my simplify radical expressions, free printable linear equation graphs, intermediat accounting solution + pdf. Integer worksheets 6th grade, wrting equation into quad standard form worksheets, order from least to greatest calculator, ti-83 online calculator. Translation worksheets ks2, how to factor a polynomial with 4 terms and 2 variable, solving linear equations, multi step, worksheet, trig proof calculator, online calculator that converts fractions to decimals. Simplifying complex rational expressions, simplifying expressions calculator with exponents, hardest math questions, Year 9 math yearly online exams. Cube root in calculator, Second ODE Nonhomogeneous, free algebra solver, square root with k coefficient, online radical calculator, how to do adding and subtraction rational expressions lightning My y intercept equations/calculator, intermediat physics revisin questions, how do i calculate third roots on a calculator, adding subtracting integers rules, multiplying rational exponent calculator, maths quiz based on 5th textbook. Simplify square expressions calculator, solving differential equations matlab, math adding negative and positive fractions, square expressions, prolog examples simplify. Trigonometry factoring calculator, linear first order differential equation solver, interactive activities graphing quadratic equations. Volume of a parabola, exponents in numerator and denominator in fraction, do you subtract or divide?, math for dummies, cheat your homework, complex number roots book. Primary six maths, free math sheets on volume and measurements, solving rational equations calculator, ti84+ calculator software cd free download, explain saxon math course 2. " MATHTRIVIAS", calculator for solving rational expressions, help me cheat on my math homework, listing fractions from least to greatest calculator, solve the radical expression 8/z-3 - 18/3-z, free algebra answers, a balancing act algebra worksheet. Inequality calculator, non homogeneous partial differential equation, ode45 matlab algorithm, online usable calculator. Formula for dividing a greater divisor, how to find the 3rd root of something, properties of exponents worksheet, flowchart mathematics, free mental maths worksheets to print. Compare and contrast equations and inequalities, the hardest mathematics questions, free foil calculator, advanced summation ti 89. Algebra & Trigonometry: Structure & Method, Book 2, 2002, McDL, logarithmic form calculator, dilations in math, subtraction equations worksheet. Can you fail 8th grade with an F in pre algebra, convert mixed number to percent calculator, algebrator laplace tutorial, high school entrance exam, 物理第八版下載, surds powerpoint. .375 fraction, Algebrator instructin manual, ti 83 equation solver. Algebrator for trigonometry, how to solve operations with rational expressions, essentials of investments solutions, converting mixed numbers to a decimal, newton-raphson method for nonlinear systems of equations matlab code, domain of a equation calculator, percent proportion worksheet. Graphing algebraic equations, algebra worksheets and answer key, ti 89 polynomial solver, associative property online worksheets, multiplying integers worksheet. Ti 83 cramer's rule, download aptitude questions with answers, simplify square roots expressions. How to get the sum in java 6, inequality graph asymptote, algebra 1 games, how to make scientific calculator leave in fraction form radicals, exponetial roots calculator, inequalities for 5th grade, factoring trinomials generator. Adding square roots, 7th grade proportion worksheets, simultaneous differential equations matlab, can we cancel out same square roots, Word problem using positive and negative numbers, application of three by three system linear equation, abstract algebra is useless. Scale factor worksheet, real-life examples where polynomials are used in story problems, multivariable equation solver, online proportion solver, maths calculator online, free algebra printouts, LCM practical teaching with models. Math game for 9th graders, examples of mathematical induction solving, simplify the expression fractions algebra, standard grade algebra, finding the intersection of a parabola ti-83, can ti 89 solve for 3 variables with three equations. How to solve CPT coding complex question, graphing linear equations online calculator, differential equations exam questions in spring mass system, TAKS PREP 2nd grade, linear equation substitution calculator or solver, heath chemistry answers. How to simplify rational exponents and roots, how to use a casio science calculator, logarithms for dummies, how to calculate exponential expression on a calculator, ti-89 online. How to factor using ti-83 plus, ks2 percentage worksheets, algebra problems, grade 9 academic fraction formula sheets, working with variables in exponents, free logarithm solver. Finding least common denominator calculator, "math prayers" powerpoints, presentation about trig functions, sat prep for 3 rd grade. Eigenvalues on ti-84, multiply rational expressions calculator, systems of nonlinear equations, subtracting solving by substitution calculator, polynomial roots calculator, find quadratic equation table solve my problem, multication table printout. Free mathematics worksheets on plotting points, worksheet graphing non-linear inequalities, quadratic equation for ti 84. Glencoe pre-algebra worksheet chapter 4 form 2d, how to factor cubed trinomials, download algebra buster free, factoring quadratics calculator, multiplying/adding/subtracting/dividing fractions worksheets, find absolute max with ti 89, is there a rational expression calculator. Algebraic expressions of percentages, middle school math with pizzazz book topic 2-a, vertex form and second differences, simultaneous equation solver app online, square root of 48 x^7^8. Simplifying Exponents Game, graphing parabolas online free, union of two equations on a graph, standard form slope intercept form worksheet, 9 science fluid mechanics worksheets, topographic worksheets, math trivia for kids. Ratiomaker problem, FRACTIONS FOR KIDS, online exam templates, graph ellipses online, glencoe mcgraw hill algebra 2, solving reasonable domain problems in algebra. Simplifying Expressions Calculator, solving equations with fractional powers, matrix problems with summation notation, least common multiple exponents. Show vertical asymptotes on TI-84 silver edition, algebra word worksheet, mathematics chart for 6th grade, 2001 mental maths test. Delta function on ti calculator, WHAT IS A SQUARE ROOT OF 1500, free GCF exponents worksheets, principles of mathematical analysis solution, converting to a ogarithmic equation calculator, factoring trinomials calculator expression, free printable Ks3 algebra tests. Simplifying radicals interactive website, binomial equation, algebrator example, Non-linear functions: absolute value graphs worksheet, do my radical expression, Permutation Combination problems in real life, how to factor polynomials with 2 variables. Factorising calculator, online pre algebra quiz 7th grade, solving trig equations worksheet. What is thirty six sixth in lowest terms, algebra square rootworksheets, problems on cubes, solving inequalities with radical roots, holt algebra 1 textbook answers, factoring calculator. Smallest common denominator calculator, math for dummies online, highest common factor of periodicities of all variables, online graphing inequality number line, How do you write a the difference of a square on calculator, adding, subtracting, multiplying fraction worksheets. Compare unlike fractions, online thermometer integers, maths loop cards ks3 algebra, online trig solver. Algebra cube formula, convert decimal to radical form, graph section of line on calculator. College algebra software, quadratic function in standard form solver, how to take 3rd root, Multiplying and Dividing cube root of radicals, fractions nth term calculator, how to solve an algebraic Nonlinear differential equations Matlab, descartes rule of signs calculator, find a decimal notation for a negitive fraction, Pre-Algebra Equations. Boolean algebra solver, college algebra cheats, trigonometry helper, java code for polynomial, formula for solving pre algebra i, free math problems for 5th graders, algebrator.com. Subtractind algebraequations, freemathlab, roots of a quadratic equation, algebra formulas sheet and functions, algebra activities for wind and current problems, solve simulatious equation ti-89 sin. Intergers worksheet, addition and subtraction integers worksheets, matlab quadratic. Factoring cube numbers, equations with fractional coefficients worksheet, Mathpower nine sample questions, higher ability maths ks2, algebra worksheets combine like terms, answers to prentice-hall algebra 1 practice workbook, simplifying radicals lesson plan. Summation calculator, transformation math worksheet quiz, steps for balancing a chemical equation, algebra substitution calculator, how do you determine the difference between an algebraic expression and an equation, Matlab return matrix with fractions, algebra vertex form. Mcdougal littell algebra 2 key, math solver, answers for math worksheet what illness do you get from overeatng?, pictures in plotting points, solving quadratic equations with log, solving equations containing rational expressions. Solving equations usign fractions, free online ti-84, mcdougal littell pre algebra, how to use calculator integration, square of radical expression calculator, free algebra 2 problem solver. Algebra powers chart, inventor of simplifying and evaluating algebraic expressions, linear trig equation worksheet, prime binary number base 8, logarithms for beginners, cubic calculator shows steps, problem solving involving system of linear equation. Drawing linear graphs worksheet, rewrite division as multiplication, worksheet for like terms, student worksheets for square roots and cubes, convert mixed fraction to percent, rational expressions online calculator with steps. How to square a number on a to83 plus, second order polynomial matlab, my phone number in base 8. Step by step instructions for solving college algebra, pitures of fractions, nonlinear equations matlab, distributive property calculator, solving trigonometric equations worksheet, algebraic expressions worksheets free, greatest common factor of monomials calculator. College algebra; operations with functions worksheets, on-line algebra remedials, glencoe algebra 2 textbook online. Grade 8 algebra lessons, roots of real numbers worksheet, square roots cube roots lessons. How to graph sideways hyperbolas on ti 83, glencoe math answers, free math expressions worksheet 4th grade, exponent powerpoint, Addition and Subtraction of Algebraic Expressions, abstract algebra intro, year nine maths algebraic expressions. Calculus the graph problems, meters into square meters calculator, square root of two variables added to each other, Math Test on Adding, Subtracting, Multiplying, and Dividing Fractions. Complete the square practice, converting decimals to mixed numbers, solution sets for equations solver, quadratic square root property calculator. Completing the square calculator, distributive property rule on fractions, how do you solve a function with a squared variable in a, factorise a quadratic calculator. Factoring polynomials online calculator, square root method, How to apply geometrical facts and relashionships to form and solve equations?, casio calculators rational expressions, turning decimals into fractions on a TI-86 calculator, radical expressions with ti 84. Square root calculation explanation, how to solve equations on maple, general questions for 10 year olds, power point lessons algebraic expressions fractions, graphing imaginary numbers on a ti-83, a website that will factor my equations with me. Interpolation program + ti 89 titanium, quadratic formula TI-30xs, how to solve binomials, Difficult combining like terms worksheets, power point for like terms, multiplication of permutations, geometry problems with solutions. Ignore punctuation String java, simplified radical form calculator, quadratic expression in fraction form. Math Homework Answers, fractions from the least to the greatest calcutor, linear algebra exam, examples of math two voice poems, simplify radicals calculator, derivative calculator step by step. Best algebra 2 textbooks, slope and y graphing calculator, non-homogeneous differential equation. Compare algebraic powers, online ti85, 'how to find cube root of a decimal number', Mathmatical program software, Rational Expression Multiply & Division calc, combining like terms 7th grade math. How to solve simultaneous nonlinear equations for matlab, free partial fractions calculator, maths quiz questions ks2, ti 84 online calculator. Math problems for student of 3rd to 8 th std student, Worksheet answers Chapter 7 Geography in History, trouver delta ti89, chemical formula finder, how to reduce decimal ratios, topographic maps Equations of two variables matlab ode45, How to figure out polynomial word problems, simplifying exponential expressions calculator, example problem in ellipse formula with solution, simplifying radicals powerpoint. Factoring cubed, less common denominator, how to write an exponential expression, rational expressions calculators, solving second order ordinary differential equation+nonhomogeneous+nonlinear. Year 8 end of year maths test revision, maths translation worksheet, algebra based aptitude question, Glencoe Algebra 2 Worksheets, solving polynomial equations cubed, grade 9 mathematics made easy, algebra formulas list. Multiplying and dividing powers, replace sqrt using multiply, free download aptitude questions and answers. Writing radicals in simplest form, solve my math problem, determine wether a number is an interger in java, aptitiude questions free download, quadratic factoring calculator, factor polynomials for me online, integration calculator step by step. Secondary school entrance exam papers free, alegebra, find all pair (x,y) of positive integers 2010 sqrt(x-y), factoring binomials calculator, Simultaneous Equation solver, simplify in a+bi form Worksheet + rational expressions + add subtract multiply divide, facts about Quadratic Equations, basic aptitude questions and solutions, sOLVE LAPLACE IN ALGEBRATOR, conics graph software, using calculator for gcd. Non linear regression in vector field in matlab, circle graph worksheets, ti 84 program that changes radicals to radical expressions., perimeter of triangle algebra expression. Reading and math problem programs 6th grade, balancing chemical equation solver, solving binomial equations, calculator online radicali, mcdougal littell geometry worksheets, worksheet integers, algebrator downlOAD. Solution of the problem of the book of I N Herstein free download, free integers worksheet grade 6, scale factor geometry examples, enter cube root on calculator, chemical equations worksheet. Ks3 english worksheets, test papers 2003 ks3 maths, log button on TI-83, formula for factoring cubed binomial, print graphing hyperbolas, how to solve Linear Function Slopes and Intercepts on TI 83 Plus a graphing calculator, 7th order polynomial excel. Solving quadratic equations using tables, printable graphs about anything, ellipse online, improper integrals calculator, squaring fractions with variables. Algebra table of values calculator, step by step integral calculator, factoring calculator polynomials, worksheets for slope intercepts, whats a good website to do math, rules for adding radicals, ti-84 plus graph should intersect axis but does not. Multiply by conjugate square root, trigonometry everyday life, error for standard linear approximation derivation in multivariable calculus. Pre-algebra with pizzazz papers, rearranging logarithmic equations, solve two variable quadratic equation, 4th order equation java. Solving linear equation worksheet, mathcad-sheets download, how to simplify by factoring, how you solve the inverse of a linear function, calculator for simplifying exponents, adding and subtracting negative numbers woorksheet, what is the title of this picture. Factor binomial calculator, solving equations by elimination calculator, how to find linreg on calculator, free math problems for igcse, how to graph non-linear system of equations with fractions, Ontario Grade 10 Linear Systems, evaluating exponential expressions worksheet. Least common denominator online calculator, 7.33 - solution - mastering physics, change exponential expression to logarithm, equations in common denominator, liner graph. Vba code + exponential integral, free basis algebra worksheets for forth graders, how to find vertical shift, algebra problems y6, linear algebra ebook, math ratio formula, can someone do it for me free solve algebra by substitution. Simplifying binomial fractions, how to convert mixed fractions to percentages, factoring calculator program, how does the formula look for turning decimal into percents. Ellipse calculator, point slope equation with a quadratic, pre-algebra with pizzazz answer key. Decimals +powerpoint, decimal to fraction algebra, expanding and factoring worksheets, greatest common factor+least common multiple worksheet. Chemical equation worksheets for a sixth grader, saxon math course 2, nonlinear differential equations, how to change the signs on a trinomial equation for perfect squares, graph me a parabola, how to solve the vertex equation. Formula for ratios, math foil calculator, vertex and intercepts online calculator, algebra 2 poems. Dummit foote SOLUTION, KS3 PRINTABLE MATHS TESTS, use algebrator to solve monomials. Easy way to understand LCM, least common denominator algebra, minimum parabola definition, how to calc Greatest common divisor, integral calculator steps, dividing integers worksheets answers, polynomial word problems. Free college algebra solver, HOW TO TURN A DECIMAL INTO RADICAL FORM, calculating with radicals, online math tests free for 5th grade. Worksheets math 9th easy basic, solving combined inequalities for florida virtual school answers, year ten algebra, bounded homogeneous linear equation weak partition, Online algebra Calculator, zero factor property on ti-89 calculator. Logarithm table how-to, 8th grade Math Worksheets and answers, free mathcad worksheets. Aptitutde test free ebooks, online polynomials long division, algebra question sheet for ks4, how to simplify radicals, scale factor math, hyperbolas in real life. Program for newton raphson method for non linear equations in matlab, grade 8 rotations, reflections and translations, Solving non-linear absolute value equations, 7th grade formula sheet, turning square roots into exponents, solving the exponentail equation, maple to show nonlinear operators. 100 multiplication problems, adding and subtracting negative numbers, balacing chemical equations on the particle level, www.my calculator is missing how to solve my problem. Trigonometry problem solver, how to write an equation in vertex form, equation writer from creative software design, easiest way to learn integration, positive and negative decimals, geometric sequence with negative ratio in real life, explain algebra. How to find the common denominator in an algebraic fraction, how to divide metres -1, linear meters into square meters, "hardest math problem", square root radical calculator, ti-83 plus log base 2. Step by step on how to solve complex numbers, onfree calculator, positive and negative number worksheets, mcdougal little math course 2 workbook answers, operations with radical expressions tool, d) What is the difference between evaluation and simplification of an expression?. Simplifying square roots mixed fractions, free geometry sheets for grade 9, middle school long equations. Wisconsin ginseng 4 year ungraded, xth matriculation maths papers, basic algebra problems and answers, how to calculate permutation, math poems, Examples Polynomial word problems. Answers to glencoe algebra 2 practice workbook, answer generator, sqrt simplifier, simplest radical form of a decimal, adding subtracting multiplying and dividing square root worksheets, rationalisation solver. Half life worksheet algebra 2, story problem solver free, free adding positive and negative numbers worksheet for grade 4, factoring polynomials trinomials calculator, rudin problem solution chapter 7, cube root algorithm java bigdecimal. Dividing square roots radicals, how to work out radicals on TI-89, second order ode calculator, solve loop current using a graphic calculator, algebraic expressions worksheets fourth grade. Math terms containing "place", How to solve equations in a symbolic method, factor quadratics, roots with radicals, solve algebra problems input output, Glencoe Algebra 1 answer key, prentice hall algebra 2 with trigonometry chapter 7 answers. Dividing with synthetic division calculator, answers to saxon math course 2, ordered pairs pictures. Solve multiple variables ti 89, free simultaneous equations worksheet, java + how to multiply polynomials, online solving radicals. Online integral solver, finding zeros of a system of nonlinear equations matlab, fraction simplest form calculator, one step equations free worksheets 6th grade. Elementary algebra worksheets, fractions problems solvings, Ideas for Line Graphs, solution equation 3 variable ti 89. Adding, subtracting principle of equality, free subtraction of integers lesson plans, how to teach a class about graphing calculator, addition subtraction of integers work sheets. Converting decimals to square root, second order differential equation with matlab, problem solving of a midpoint w/ a solution. Rewrite square roots, free printable kumon math 1st grade worksheets, simplifying inverse radicals, quizzes for 8 yr olds to print out. Using Matlab to graph Couple Differential Equations, algebra 1 linear equations test answers, finding third root. Convert improper fractions to mixed numbers using TI!-84 plus calculator, solving formulas year 10 maths, trinomial factor calculator. Free math problems for 4th graders, sentinel while java example, simultaneous equations activities. Gauss nonlinear matlab roots, matlab numeric equation, prentice hall algebra II answer, expanding and factorising-year 9 maths test. Multiplication and division with rational expressions, 6th grade area of half circles, division decimals step calculator, equations of real life parabolas, solving radical equations and inequalities. Decimal to mixed number calculator, linear equation t calculator, maths sample paper for class 7th, functions and curves work sheets yr11, how to solve y intercept eguations, free algebra 101. How to add an integer and a fraction, square root for dummies, transformation quiz, area hardest physics. Need answer for my appititude questions, challenging math problems for 7th graders with solution, slope intercept equation matlab, what is a domain of a parabola. Maths worksheets ks4 number division, answer my algebra problem for free, games to teach adding and subtracting integers. Ks2 translation worksheet, algebra 2 online mid term, pie answers for math 208 university of phoenix. Pre algebra with pizzazz, what are the math properties worksheets, complex rational expression calculator, least common denominator variables, complex factoring, how to use the qaudratic eqaution with multiple variables. Simultaneous equations problems, online factoring machine, areas of interest of 9th graders, algebra permutations. Year 8 maths test algebra, prentice hall chemistry workbook answers, year 11 maths tests online. Y intercept worksheets, Calculator from Square Metres to Lineal Metres, get answers to rational expression, how to solve non homogeneous differential equations in mathematica, Long Division of Polynomials online, worksheets exponents. Algebra structure and method book 1 solution key, difficult question and answer in math trivia, multiplying radical expressions, long division lesson plan, adding using algebra, do my algebra homework for free, solving for variables in fractions. Lowest common denominator fraction calculator, solve for x calculator fractions, optional sat papers, free algabra graphing calculators, Polynomial Solver, learn +grammer in ppc, algebra problems for 6th graders. Dividing fractions with intergers, Algebra Programs, how to find an xvalue in a Ti 83 calc. Power point on solving linear equations, linear algebra problems and tips, online algebra solver, adding and subtracting negative and positive numbers calculator, how to foil a cubed expression, ti 84 usable calculator, college pre algebra help. Simplifying a radical expression calculator, solve simultaneous equation matlab, graph ellipse, british factoring, combine like terms with parenthesis, puzzles for 3rd class students. Simplifying natural log expressions, coordinate plane worksheets pictures, pre-algebra with pizzazz, quadratic simultaneous equations. Answer my algebra problem, how to find answer keys for algebra 2 worksheets, negative exponents grade 10, polynomial long division online, Free 4th grade math worksheets, scales & intervals. How to find the slope of a line on a graphing calculator TI-84 plus, answers for algebra 1 homework, holt algebra 1 workbook answers. Complex fraction calculator, word problem solver free, algebraic poem, algebra finding the value calculator, basic exponentials and logarithms, example of parabola problem solving. Printable practice alebra test, rudin solutions chapter 7, program to find factors of an equation, square root method calculator, solving for x calculator, non homogeneous heat equation. Log key on TI-89, free steps to solve grade 9 polynomials, what is the of square root of 125, online simultaneously with anyone, how to simplify a cubed radical. Square root of 6 simplied, answer key to glencoe algebra one, find the standardized test statistic, equation of a line solver, combining polynomials calculator, solving simultaneous polynomial equations in matlab. Mcdougal littell algebra 1 answers, solve quadritic equation with paper, algebraic addition, Review of exponents chapter 8 powerpoint, 6th grade who am i games, mathimatical trivia, my cube root is one quarter of me.who am i. Ti 83 programs induction solver, Algerbro software, free ks3 problem solving. Parabola calculator, sum of the roots of the polynomial calculator, maths sums for 6th. Real-life examples where polynomials are used, free math worksheets from glencoe / mcgraw -hill, kumon download, free square root worksheets, ti89 solve 3 nonlinear equations. Sums on algebra equations, how to take x root on ti 83, polynomial online solver, online useable ti84 calculators. Solving quadratics by decomposition, third order quadratic equation, parabola graph and direction. Free algebra factor calculator w solve, algebra I "coordinate graphing" activity pictures, multiplication division rational equations, ellipse sample problems with solutions, quadratic equation with real life topics. Free factoriser, how to solve a equation in vertex form, erb testing samples, rudin chapter 1, factoring algebraic expressions calculator. Solving high order polynomials, linear principle for non-homogeneous equations, polynomail factor online, graphing parabolas 8th grade algebra, class 8 sample papers, pre algebra textbook pdf. How do you find cubic squre root on ti 83, 2ngrade math word problems, Powerpoints on simplifying radicals. Result of 1 and -1 in boolean algebra, improper integral calculator, math a regents exam questions by topic- combining like term, what is the square root of 5 in fraction form, imaginary numbers worksheet, which calculator is best in solving Derivative?. Worksheet works for 6 graders/free, prentice hall biology chapter 16, quadratic equations what is the title of this picture?, algebra expression calculator, tutor 4th grade transformation of sentence type maths equations practical guide, percents and ratios formulas, prentice hall algebra 2 answer. Solving simplified radical expressions, Graph Solving Equation Free, inequality worksheet with multiple choice, lesson plan on simplifing equations for 6th graders, circle graphs worksheets, free detailed instructions on graphing equations. Greatest common denominator with variables calculator, maths easy method to calculate square root, how to solve exponential equations, integer worksheets grade 9. Example of ARITHMETIC EXPRESSION WITH FLOW CHART, 2nd grade worksheets on finding lines of symmetry, cheat college algebra, matlab equation solver, how to find lcm of monomials. Transforming Equations Containing Only Variables, online free ti 84, What is the highest or the lowest common factor, factorization questions, gcse complicated algebra sums with awnser, function composition solver. Algebra triangulo de pascal, solving equations with java, rearranging formulae ks3, KS3 algebra math revision SHEETS, solve nonlinear difference equation mathematica. Fraction solver calculator, Cupertino second grade math free worksheets, list of fractions, algebra answer generator, 9th grade worksheets, must satisfy the second order nonlinear differential Division expressions calculator, maths pythagoras, doing root locus on ti 89, free online help solutions for math problems, mcdougal littell biology study guide answers, math quizzes for 9th graders. Solve math problems online step by step for free, common denominator with variables, solving nonhomogeneous differential equations, problems that pertain to buisness math 105, online ellipse graphing Graphing inequalities on number line, Simplifying Radicals Calculator, linear eqations of square root examples. Exponential percentage problems, Homogeneous first order linear partial differential equation, t charts worksheets, solve variable power of power rule calculator, factoring algebraic expressions with calculator, pics for ti -84 plus, maths for dummies online. Math lesson plan 11th grade, 8th grade math worksheets printable, mix fraction to decimal converter, online integral solve, childrens maths ordering fractions, mcdougal littell algebra 1 answers free, square root and exponents. Radical division calculator, free algebrator trigonometry software, how do you do quadratic equations by graphical method. Skill 18: Division of Integers test 2, least common factor worksheet for 6th grade free, saxon math printables, summation solver, what grade do you learn combinations. Translation y6 worksheets, adding roots calculator, how do i Solve laplace with algebrator, equations with fractional coefficients, solving quadratic inequalities two variable, convert decimal answer into square root, algebra poem. How to simplify expressions of square roots, algebra substitution method calculator, indices roots calculator. Glencoe algebra 1 worksheets, subtracting positive and negative numbers worksheet, rotations grade 8, ti 89 domain 3 variable, quadratic equation by finding square roots calculator. Best calculator for solving algebra questions, general quadratic equation grapher, volume worksheets, programs on planet soiurce through loops binomial coefficients different formulas, solve formula specified variable, multiplication and division of expressions worksheets. Variable solver 2 unknown calculator, combine like terms in matlab, quadratic factor calculator, math trivia geometry, square root of exponents, hardest maths graph. Rudin real solution, simplify added roots calculator, 2004 optional sats year 3, pearson education inc worksheets math, factor a cubed polynomial. Homework solutions guide Solving Polynomial Equations, algebrator software homepage, write and simplify the difference between 5 and -12. Finding percent using formula approach, solving differential equations with squared terms, difference between rational expression and rational equation, how to calculate gcd, simplest radical form +fractions, komplex ti-30x iis. Product property of logarithms worksheets, mathematics trivia questions and answers, solving exponential inequalities in one variable, 10th maths matric standard. Www.free algabra testing, binomial denominator calculator, year 11 maths help free, adding and subtracting radical expressions solver, ti 83 cramers, cubic equation calculator. How to multiply fractions with a casio calculator, Smith Chart for TI84, maths problems for class 10, math algebra hungerford, grade 7 order of operation with faction. Matrices algebrator, holt pre algebra 2004 book answers, how to rewrite decimals as mixed numbers, algebrator by softmath, jelly bean worksheets mean median mode. Math simplify expression with directed numbers, slope intercept form to vertex form, calculator to find value of x. Circles in mathcad, simplifying radicals 2 squared 12/36, permutations and combinations worksheet, cheating on algebra 2 homework for free, simplify and determine the domain. Complex fractions calculator, free downloadable scientific calculator, greatest common division c code, quadratic calculator program, cube rule math. Integration by parts calculator, Math lcm and lcd, radical simplifier calculator, algebra 2 online textbook glencoe, adding and subtracting integers worksheets, dividing multiple terms on calculator. Interactive integers website, exponential expressions calculator, subtracting decimals calculator, practice questions properties of logarithms, perimeter worksheets algebra free, best chemistry workbooks, application of algebra. Trigonometry problems and answers, factor polynomial program, integers worksheets grade 7, how to simplify - grade 10 math help, algebrator calculator, math alge. Free gcse accounting questions, free mathematical aptitude test papers, saxon math sheet to do homework on from textbook, how to convert mixed fracion to decimals, template for online exam, Integration solver. Holt rinehart and winston answers, factoring FOIL calculator, solve ODE in TI-89, cubed equation, how to take a cube root on graphing calculator, 6th grade math worksheets 1st semester, probability Grade 11 math ontario, graph calculator of square root -x, convert a quadratic inot a linear. Subtracting integers calculator 3 numbers, simplifying roots and radicals, algebra buster free download, Polynomial Exponents and Radical Expressions, online graphing calculator. Power of 3 quadratic solver, free intermediate algebra calculator, complex fractional equations calculator. Exponential variables, linear function calculator, 135 in decimal, ppt multiplying algebraic terms, multiply Polynomial java, boolean calculator online, cubed roots of exponents. How do I calculate an equation with inequalities like -0.5 x < -30, solving equation using tI, +"compound angles" +freeware. Solving division of binomomials, numerical reasoning with solution free download, algebraic expression + application, how to enter a expression in a graphing calcultor to figure out plots. Graphing calculator hyperbola, scientific calculator with c#, college algebra fractions worksheets. Finding least common denominator worksheet, addition subtraction rational expressions calculator, teaching strategies quadratic flowchart, algebra 2 in chemical engineering, linear system substitution calculator. Solve limits with calculator, online calculator for solving monomials with negative exponents, lowest common denominator variable, simplifying square roots powerpoint. Foundations for Algebra: Year 1 answers, rudin chapter one solutions, multiple and give fraction in lowest term calculator. Online calculator printable, prentice hall algebra 2 answers, free presentation powerpoint about graphing the trig function, Teach how to solve grade 9 linear equations, "what is the expression for the most apparent nth term of this sequence 2, -4, 6, -8, 10 ?", www.math.glencoe.com/taks practice and sample test workbook, inverse functions sovler. My maths adding subtracting fractions answer sheet, factor my equation, representing patterns in a Variety of Forms - Interpreting Linear & Non-Linear Relationships, simple radical form ti 83. Maths pages to print out for year 7, simplifying algebraic expressions ks3, free worksheets order of operations for special education. Factoring trinomial calculator, radical expressions deffinition, hardest equation in the world, 6 grade math formula sheet, ordinary differential equation+nonhomogeneous+nonlinear. Divide algebra calculator, Graph of the theory of the binomial, simplify radicals online calculator , Factoring Trinomials Calculator. Real life math problems with solutions, solvıng equatıon worksheets, worksheet scale factor, aptitude test free download test paper. 7th grade math formula chart, solving equations by factoring classroom activity, cpm math lesson plans, KS3 free printable tests, rotation worksheets free, GRADE ONE SCIENCE - EVERYDAY STRUCTURES, teach yourself advanced math. English papers for free for KS2, algebra cheat online, second order differential equation matlab, algebra math calculator with square roots. Factoring solver, online EXPRESSION simplifier, percentages equivalents 85%. Rational calculator online, difference quotient solver, factoring numbers and even root property lesson plan, glencoe personal tutors sound and light, math 208 university of phoenix answers, how to use intersect on t83, pre algebra step by step. Conic calculator graph online, free math worksheets on dilations, multiplying by the lcd calculator, radical doble, difference of swuares calculator, calculus chain rule problem solver online. Grade 12 trigonometry solve without calculator examples, trigonometry word problems worksheet, proving identities, elementary algebra 6th by mark dugopolski, chapter test answer key, free printable algebra worksheets for forth graders, graphing inequalities in two variables calculator, college algebra formulas. Iowa algebra aptitude test practice, free permutation worksheet, free help for t1-83 plus calculator putting in formulas, simplify the numbers in indices, to the power of a fraction, open algebraic Free calculus problem solver step by step, free addition +subraction of rational numbers calculator, converting decimals to radicals, java example+equations+Graphing Equations+answer, convert square meters to lineal metres calculator, radical form calculator\, math faction. Difference between quadratic and trinomials, trig proofs solver, runge kutta 2 matlab, the difference between graphical and algebraic. Which expression means the same as "5 times a number"?, rudin real and complex analysis assignment, most difficult math major, basic real life application of math problems, dividing polynomials online calculator, nth term lesson. How do you solve rate problems, algebra worksheets with expanding the brackets, quadratic system of equations solver. Fractions with fractional exponents, sin and cos equation on TI 89, real life situations using quadratic formula, how to factor cubed polynomials, binomial calculator online, aptitude questions and solutions, simplest form calculator. Solve my algebra problem for free, how to use the elimination method, gcse maths bearings, solving non algebraic equations in excel, grade 10 math practice exam. Vertex calculator, calculating square root of an integer using bisection method in java, what is 2/8 as a decimal, two equations two unknowns in matlab. Math induction 3th power problem, pre algebra joke worksheets, free adding and subtracting integers worksheet, solve second order nonhomogeneous differential equation with maple, graphing absolute value in a plane, will the difference of two radicals always be a radical?. R2 on graph calc, basic prealgebra graphing inequalities, homework solutions guide intermediate algebra. Grade 8 algebra worksheet, coordinate grids to help with mathmaths homework, exponent button T1-83, trinomial with different variables, how to write a percentage as a decimal and as a mixed number or fraction in simplest form, inverse laplace transform calculator, free tutor for introductory algebra 6th edition. Algebrator reduces your, online ti-84 calculator free, year 5 optional sats papers, what is the hardest math problem in the world, easy math combination questions 8th grade permutation and combination basic math, identify f as being linear, quadratic. Word problems with costs grade 10, simple poem on fractions, highest common factor formula. Matlab ode45 second order, pre algebra formula sheet, solving square root equations worksheet, how to use the product rule, 3rd degree polynomial calculator. Ordered number pair equation, download graphing calculator ti-84, writing functions in vertex form, introducing algebra worksheets, my algebra ratio multivariate, taks practice worksheets. Gcd calculation, rational exponents expansions, solve the inequality for (x+5)(x-4)(x+2)>0, quadratic equation solverwith 2 variables, calculate simplifying rational expression. Pre algebra 1 pizzazz, 7th grade math formulas chart, trigonometry problems for year 9, real complex analysis rudin solution, easy scale models for math, solve second order differential equation matlab, grading 9th class mathematics problems. LCD worksheet, solving for eigenvalues using ti-84, how do i put a decimal in radical form?, multiply decimals calculator, relating graphs to events worksheet, worksheets graphing. Write in simplified radical form, second order equations differential equation calculator, math poems algebra 2, online graphing calculator with table, 6th permutation, solving second order ordinary differential equation+nonhomogeneous+nonlinear+matlab. 7th standard maths, free online calculator with exponents, grade 7 math sheets greatest common factor, factorising tool online. Slope solver, free factoring calculator online, how to expand condense logarithms ti89. Algerbra resources year 6, contemporaty linear algebra solution, linear algebra three unkown solver, free worksheets replacing the variable letter, factor trinomial calculator online. How to graph on the algebrator, combinations worksheets, give me a math problem of integer problems, finding lowest common denominator tool. Javascript check number, school homework sheets 8yrs old, 9th grade biology Eoct prep, square meters calculator, maths 8 testpaper, prentice hall mathematic algebra 2 answer key. Check algebra 2 math problems online, addition and subtraction estimation worksheet, ti-84 emulator, factorise calculator. Need help Please. How is solving for a specified variable in a formula similar to finding a solution for an equation or inequality?, how do i take the cubed root on a scientific calculator, how to use the quadratic formula on a ti84, slope of quadratic equation, adding negative fractions calculator, subtractin from 9 worksheets, help solving radicals in math by typing in my math problem. Express fraction percent in simplest form, simplifier irrational, afirst course in differential equations, absolute value 7th grade. Converting irrational numerators and denominators of radicals, solve the compound inequality calculator, radical expressions on ti 83, logarithmic subtraction algebra, free online polynomial calculator, what is the number in a power that is used as a factor, solve sample papers for 4grade. Google visitors found our website today by using these math terms: │cheating tricks to solve simaltaneous equation │rotation worksheet year 10 free │ │writing nubers 19-23 kindergarten free worksheets │Grade 7 lines of symmetry lesson │ │calculator quadratic program │linear algebra done right solution │ │how do you convert a decimal to a mixed number in simpliest form │rudin mathematics analysis exercise │ │dividing algebraic expressions calculator │factor square root calculator │ │how to program the midpoint formula in calculator │addition polynome ti 84 │ │printable pre cal worksheets │passport to algebra and geomegry assignments 9.5 │ │beginner Algerber │easy way to learn integration │ │poems using math terms │implicit differentiation online calculator │ │matlab second order differential equation to first │how to calculate greatest common divisor │ │college algebra homework solver │how to rewrite rational expressions │ │aptitude books download │Solving Rational Equations Calculator │ │parametric equation problem │poem about prime numbers │ │ratio maker free download │common entrance english exam online free │ │simplify fractions calculator │graphing radical equations │ │usable online graphing calculator │year six algebra │ │showing working out for simplifying ratios │Algebra for fourth graders │ │how to solve quadratic with variable as exponent │graph step function ti-89 │ │beginning algebra 5th tutorial │addition and subtraction if integers free worksheets │ │worksheets on solving quadratic equations using completing square method │range and domain of a parabola calculator │ │maths standard 8 │multiply out and simplify calculator │ │online ti 83 graphing calculator │algebra equation grapher │ │algebra worksheets with brakets │ellipse problems │ │contemporary abstract algebra solutions manual │like terms calculator │ │12th grade pre caluculus problems and solutions │application of Linear Algebra in daily life │ │formula for turning fractions into decimals │1st grade math lesson on arrays │ │number to radical converter │differentiate unit step to obtain delta function │ │product of square roots calculator │simultaneous algebraic equation matlab │ │solving inequalities excel │grade 9 math slope and linear algebraic questions │ │turning decimals to fractions in Matlab │sample for root of problem │ │trigonometry test ks3 │general aptitude questions with solutions │ │radicals calculator │algebra question sheets for grade 9 │ │calculator pictures domain and range │graphs of linear equation solver │ │find the point of intersection worksheet │hyperbolas equation │ │year 6 algebra │college algebra just answers │ │quadratic equation solve and check │asymptote is beginning of hook on exponential function graph │ │integrated algebra worksheets │hardest physics question │ │mathmatica to solve differential equations absolute value │turn mixed numbers into decimals calculator │ │solving radical equations restrictions │use product rule to simplify │ │hyperbola v excelu │ti 89 log base 2 │ │algebra homework cheat │how to solvegeometric applications on fractions │ │essential of investments 솔루션 │translation in maths worksheets │ │integralcalculator │cramer rule daily life │ │detailed instructions on graphing equations │how to solve logarithms calculator │ │4 times the square root of 2 explained │how to add fraction radicals │ │online binomial expansion calculator │factoring monomials calculator │ │grade 8 rotations, reflections and translations worksheets │simplify roots of numbers calculator │ │maths worksheets free ks4 │log algebra calculator │ │english worksheets for ks3 │ALGEBRATOR DOWNLOAD │ │solving rational exponents calculator │Algebrator for free │ │program for newton raphson method for solution of system of non linear equations in matlab │dividing fractions in substitution │ │logarithmic equation solver │prentice hall algebra 2 with trigonometry answers │ │free solution +college Algebra and trigonometry │solving linear equations with exponents calculator online │ │conservation laws and first order equations │ordering from least to greatest calculator │ │o Why should we clear decimals when solving linear equations and inequalities? │equations with fractions as exponents │ │free radical expression calculator │solve system of equations using elimination calculator │ │vertex formula for TI-84 │teaching permutations to third grade │ │free online hands on equations worksheets │real life application for ellipses │ │Factoring by Grouping Calculator │converting a quadriatic equation to a root │ │least common denominator calculator with variables │simplifying equations │ │simplify squae root of 15 │practice sat test for 3rd grade for free │ │logarithmic expression calculator │answer my math problems free │ │adding subtracting multiplying dividing integers │free polynomials calculator │ │solving simultaneous differential equations │online poweranalyse │ │Algebra formula sheet │prove difference of two squares divides a difference of two squares│ │dummies.com simple Use polynomial and rational functions. │easy to understand how to solve a formula │ │minimum boolean equation calculator │KS2 sat practice questions free │ │integration by trig substitution calculator │adding, subtracting, multiplying, and dividing fractions │ │download Algebrator │distributive property worksheets for third grade │ │exponential workout problems │laplace inverse calculator │ │scale factor calculator │solving ordered pair equations │ │mathematics structure and method course two answers │evaluating rational expressions solver │ │second differential equation solver │lesson plans on multiplying and dividing rational expressions │ │prentice hall mathematics algebra answer sheet │6th grade graphing worksheets │ │solving equations with fractional coefficients │ratio proportion worksheet │ │algebraic calculators │one basic principle that can be used to simplify a polynomial │ │ks2 algebra explained │directed number worksheet │ │how to add subtract multiply and divide radicals when there is a fraction under the radical│difference quotient free calculator │ │higher level division calculations │simplifying irrational expressions │ │java recursive linear equations │fistin math │ │fraction woksheets order of operation │simplify factoring │ │basic transformation matrices gcse │Real Analysis Rudin Solution Manual │ │kramer solving linear 3 variable equations │sample probability for sixth grade │ │formula for l.c.m │11+ algebra │ │fraction equation calculator │standardized test statistic calculator │ │solve greatest common divisor │decimal to percent generator │ │find slope in algebra finder │adding and subtracting integers worksheets with answers │ │simplify (b^2-a^2) │matrices ode45 │ │how to convert a square root to a decimal │college algebra solving software │ │business math(poems only) │partial decomposition calculator │ │spring problem nonhomogeneous differential equation │boolean algebra solver │ │subtracting trinomials │on how we can use ellipse in our life │ │compound inequalities solver │answers to algebra 1 holt texas textbook for free │ │printable question and answer key games │convert to square root │ │problem and solve using matlab │what if the variable is different in a trinomial │ │adding and subtracting real numbers worksheets │-7-3z=8+2z solving │ │accelerated math worksheets │writing on a calculator using sums │ │free 8th grade worksheets │free step by step math problem solver │ │simplifying square roots calculator │i have a algebra problem │ │ged math worksheets │factoring a cubed │ │how to reduce second order differential equation to first order │how to find the parabola after factoring free online tests │ │dividing polynomials by binomials │easy meiosis lesson plans │ │college algebra scientific calculator │how to solve equations graphically using excel │ │mcqs of accounting to solve │ordering fractions from least to greatest │ │mcdougal littell algebra 2 answers free │printable algebra workbooks │ │complex numbers pdf │solving quadratic equations flow chart │ Search Engine users found us today by entering these keyword phrases: │calculators for mathematical induction │Subtracting Integers │graphing calculator quad form │whats the answer to factoring the polynomial │ │ │ │ │t²-11t+24 │ │sample test in finding gcf for elementary │quadratic equations activity │simplifying radicals with absolute value signs │VBA solve linear system │ │Holt Algebra 2 answers for the fundamental theorem of │free calculus problem solver │maths casio calculator use for free │optional year 5 sat papers │ │Algebra │ │ │ │ │help me solve my algebra problems │ti 84 calculator online │free simplifying calculator online │online solving square root equations │ │polynomial order 3 │MATH WITH PIZZAZZ BOOK D │solve my algebra problem │online grade 9 physics test │ │equivalent fraction to algebraic expressions │TI-83 trace vertex minimum │how to factor in a tI-83 │use free algebrator │ │algebrator │grade 10 trig │best way to teach how to subtract positive and │solving a third order linear equation │ │ │ │negative numbers │ │ │daily life example of linear algebra │equations of clep test │4th grade algebra │time differential equations ti-89 │ │squaring imperfect polynomials │worksheet translating algebraic │dividing rational expressions worksheet │algebra with pizzazz answers key │ │ │equations │ │ │ │maths problem solving for 10 year olds │dividing polynomials program │writing equations without negative exponents │Program to Find the absolute value of two integer│ │ │ │ │taken as input. │ │solve fourth order equation using ti 83 │math trivia algebra │Algebra 11.3 │equation maker │ │steps involved to balance achemical equation │solving quadratic equations graphically│simplifying exponential expressions │wronskian calculator │ │ │my maths │ │ │ │x y intercept │complex coefficient matlab two equation│change a mixed percent number into a decimal │how do you teach basic equation algebra │ │examples of math trivia mathematics word problems │intermediate accounting free exercises │"venn diagram"+worksheet+math │trig values chart │ │online graphing calculator ti 83 plus(non downloadable) │excel formulas for adding, subtracting,│expanding and simplify linear expressions │variable exponent │ │ │dividing │calculator │ │ │trinomial quadratic equation calculator │simplifying exponential expressions and│counting problems including permutations and │how to write a poem using algebra 1 terms │ │ │equations │combinations worksheet │ │ │viii class maths │program to factor equations │nth term calculator online │how to find point of intersection using │ │ │ │ │elimination │ │solving for x and y intercepts worksheets │how to solve 4th grade alegebra │factorize to the 3rd order polynomials │math homework answers for percentages │ │ │questiond │ │ │ │algebra equations with cube │multiply 2 unknown in algebraic │mixed fractions to decimals calculator │inverse function solver │ │ │expression (II) worksheets pdf │ │ │ │free templates for online exam │glencoe parabolas │factoring machine for math │algebra university of chicago answer key │ │ax+by+c polynomial │adding and subtracting integers │free kumon sheets │powerpoints for kids │ │ │worksheet │ │ │ │radical to decimal │math sheets for grade 7 Formula and │algebra elimination calculator │vertex calculator online │ │ │equations │ │ │ │printable math test with adding, subtracting, multipying │solving 2nd order linear differential │math reference sheet grade 2010 │Ti 83 sqrt │ │and dividing fractions │equations using matlab │ │ │ │college prep algebra add/subtract radicals │fraction cube roots │simplifying radicals calculator online │real life examples, coordinate plane │ │worlds easy way to calculate cubes │literal coefficient │MATH FORMULAS grade 10 │algebra term- vertex │ │expression calculator with division │ks3 maths algebra test │math tests for 8yr olds │dividing exponent worksheets │ │3A maths investigations for year 11 │fractions matlab │ellipse graphing calculator │solve system of inequalities without graphing │ │graph polynomial program │mcdougal littell math course 2 workbook│adding and subtracting complex rational │+liner problem software │ │ │ │expressions │ │ │solving equations worksheet ks3 free │indefinite integrals substitution │sum first 100 integers │rational exponent square root │ │ │calculator │ │ │ │6th grade geometry worksheets │metre to square metre calculator │simplyfing rational expressions with negative │prentice hall algebra 1 answers │ │ │ │integers │ │ │grade 9 math - slope │ks3 algebra worksheets │lineal meters to square meters │cdc text book of grade 10 with complete solution │ │ │ │ │of mathmatics │ │middle school math with pizzazz! book D │non-homogenous differential equations │absolute value worksheet │common denominator calculator │ │ │of high orders │ │ │ │hardest physics equation │help solving equations containing │multiple choice math worksheets │testbank elementary linear algebra │ │ │rational expressions │ │ │ │square binomial calculator │algebrator free │least to greatest fractions worksheets free │simultaenous equation solver │ │elipse formula │partial fraction decomposition │excluded values calculator │distributive property algebra │ │ │calculator │ │ │ │how to solve a third order polynomial │"Holt Key Code" │reduce a mixed number calculator │texas instrument 83+rational expressions │ │greatest common divisor applet │algebra + "guided notes" │fraction simplifier │scale factor problems middle school │ │glencoe algebra 1 test │factoring multivariable polynomials │how I can determine intercept in linear function│matlab inequalities │ │ │solver │in excel │ │ │year 6 maths (test worksheets) factors or multiples │algebra 2 hyperbola powerpoint │third order polynomial roots matlab │free worksheets cube roots │ │calculator online radical │boolean algebra calculator │simplify the square root of 5 │multiple factor solver │ │can i use my own calculator in algebra clep │4th grade simplifying fractions │complex numbers slover │absolute value worksheets │ │ │worksheets │ │ │ │convert mixed fraction to decimal │help solve synthetic divisions │reduce square root calculator │exponents calculator │ │teaching simultaneous equations fun │solving real world inequality problems │algabra program │ti 89 polynomial │ │solving problems of easy equation │Year 6 past SAT papers │download simplified college algebra │foci and hyperbola solutions │ │ │how to apply geometrical facts and │ │ │ │multiplying and dividing in scientific notation worksheets│relashionships to form and solve │Test of Genius Worksheet Answers │theoretical & empirical probability │ │ │equations? │ │ │ │math poem │Instant Math Answers Free │samplemath test third grade │trig to simplest radical form │ │easy way to find lcd in fractions │elementary statistics a step by step │really hard maths problem sheets │how to find the slope on a TI-83 calculator │ │ │approach │ │ │ │questions on cubes │linear graphs worksheet │free printable worksheets on proportions │simplifying expressions with negative exponents │ │ │ │ │calculator │ │math rational multiply/division promblems │download square root chart │division algorithm homework solver │convert mixed fraction to decimal calculator │ │The steps for simplifying algebraic fractions │calculator metre to squares │ks3 maths trigonometry │online trinomial factor calculator │ │simplify binomials calc │definition of restricted values of │solving simultaneous linear equations in two │+"calculating compound angles" +freeware │ │ │rational expressions │variables │ │ │prentice hall │homework helpre integrals │download free registered math softwares │what are ks2 sats used for? │ │newton-raphson method simultaneous nonlinear,matlab │how to figure inequlitiles on the │solve equations using square roots calculator │boolesk algebra calculator │ │ │algebrator to be graphed │ │ │ │biology tests for 9th grade │online factoring polynomial calculator │foundations for algebra year 2 chapter 7 CT │saxon math worksheets │ │ │ │answers │ │ │simultaneous equation solver │difference of domain of rational │4th grade fraction to decimals │simplifying decimals? │ │ │function vs radical function │ │ │ │how to solve a hyperbola in geometry │least to greatest calcultor │adding subtracting multiplying dividing │matlab ode45 second order ode │ │ │ │fractions worksheet │ │ │aptitude questions and answers with explanation+download │9th class maths │test prep pretest answers │simplyfying 10th grade math using powerpoint │ │ │ │ │presentation │ │conversion chart fractions to decimals │greatest common factor formula │variables as exponents │java code for solving polynomials │ │linear equations worksheet grade 8 │BUSINESS APTITUDE DOWNLOAD BOOK │Polynomials Roots Calculator │ship math problems │ │logarithm ti89 │nysed math 6th test │fraction with java │factor trinomial calculator │ │ode23 │physics formula sheet │simplifying logarithms │ti-89 algebrator │ │how to calculate age problems in aptitude test easily │distributive property calculator │algebra worksheets grade 8 │rational expressions lcm calculator │ │partial factor method │simplifying polynomials calculator │multiplication solver │math variables and expressions 4th grade │ │solving third power equations │factorising quadratics calculator │free math worksheets adding integers │fractions times aptitude problems │ │grade 9 math slope and algebraic equations │coordinate planes that make pictures │polynomial activities │addition exponential expressions examples grade 6│ │formula for the mcnugget numbers │free limit solver online step by step │parapola volume │factoring cubed polynomials │ │tiling pattern 6th grade question answer key │poem about number theory │logarithm calculator simplify │learning objectives for square roots │ │adding similar fraction │work sheet- Convert decimals to │distributive property worksheets │"linear algebra made easy" ti89 torrent │ │ │fraction │ │ │ │solve complex rational expressions │simplify trigonometric equations │help wit algeba math problems │cpm geometry book download │ │ │software │ │ │ │fraction formula │algebra 2 equations solver │unknown two variable equations for 6th grade │Nonlinear Equation Solver polymath │ │ │ │math │ │ │completing the square program │quadratic equation factoring calculator│quadratic equations completing the square │operations in simplifying radical expressions │ │ │program │ │ │ │use algebrator to divide monomials │slope worksheet middle school │decimal - fraction formula │Explanation for factoring using distributive │ │ │ │ │property │ │greatest common factor with variables │old maths tests download │linear factors cubic calculator │how to solve addition and subtraction radicals │ │What is the difference between exponents and radicals or │ │ │ │ │roots? Is there really any difference or are they │algebrator software │subtracting polynomial calculators │greatest common divisor money problems │ │inverses? │ │ │ │ │least common denominator worksheets │simplify sqrt 4 w^4 │math trivias with answers │gcf monomials calculator │ │holt pre-algebra online math books for students │MATHE GAMES FOR MULTIPLING INTERGERS │number line gragh solver │quadratic expression solver │ │math problem for aptitude test │creating a parabola in excel │lowest common denominator algebra │prentice hall physics answers │ │exponential equation joke │simplify complex radical expressions │algerbra graph solver │2nd order runge-kutta matlab │ │use ti 83 calculator online │Ti-83 cubed button │how to graph a cubed root │ti-89 decimal to fraction │ │log base 10 graph │online simplifying radicals calculator │free worksheets integers │college algebra for dummies │ │pre algebra crossword puzzle │simplifying irrational numbers │8th class maths papers │integral calculator step by step │ │squre roots solving tricks │function machines worksheets │saxon algebra 2 answer key online │solving Systems of nonLinear Equations in excel │ │ecuaciones diferenciales no lineales de primer orden │Orleans/Hannah Algebra Prognosis Test │show the graph of a hyperbola by any method │how to solve 3rd order polynomial │ │solving nonlinear differential equations in matlab │introducing algebra lesson plan │prentice hall biology teacher's edition online │simplify radical calculator │ │5th grade equations worksheet │mathematics trivia questions │advanced algebra calculator │i need for apptitude question and answer with │ │ │ │ │explanation │ │synthetic division worksheet │who first came of with binomial algebra│solving algebraic expression lessons and fourth │Using the power rule and squaring twice │ │ │ │grade │ │ │learning math in std 7th │online standard form calculator │free ks3 maths papers │expression calculator with exponents │ │radical notation calculator │fraction decomposition calculator │rational expressions division calculator │OPERATION NR. COMPLEXE AVEC T.I. │ │systems of equations worksheet │gcse maths algebra formulas │algebrea fractions to the power of │c++ while loop program that asks for coefficients│ │ │ │ │of the quadratic equation and computes solutions │ │The worlds hardest maths sum. │how to solve cube problems in aptitute │second derivative calculators online │integral calculator with steps │ │5th grade adding subtracting multiplying dividing │elementary math trivia │mathematical induction calculator │multiplying radicals calculator │ │fractions │ │ │ │ │integers worksheet │investigatory project in math students │solving non algebraic equations in powerpoint │solving spring mass systems │ │help solving complicated rational expression algebra │arithmetic secuence worksheets │provide instruction for students to do decimals,│prentice hall mathematics algebra 1 practice │ │ │ │place value activities for authentic assessment │workbook answers │ │differential equations for excel │how do i do algebra on my ti 83 │in TI how to put y value │Parabolas made Linear │ │ │calculator │ │ │ │ratiomaker download │adding two terms under square root │foil method step by step printout │searching statistic crossword puzzle answers │ │ │(2x-3) │ │ │ │how to take the third square root │scatter plot worksheets │least to greatest calculator │chemical formula product solver │ │simplify algebraic expressions triangle │simplify complex fraction calculator │"Algebra I" "Graphing pictures" │radical expression calculator │ │solve for x fractions calculator │soft math.com │free worksheets for adding and subtracting │finding the focus of a circle │ │ │ │positive and negative numbers │ │ │solving nonhomogeneous second order differential equations│7th standred maths games │Solve a Maths Problem for Me free │graph program online respect to y │ │math trivia questions with answers │printable collecting like terms │adding mixed numbers unlike denominators │finding binomial factors calculator │ │ │worksheet │powerpoint │ │ │simplest radical form of decimals │fun logarithm worksheets │how to add cube roots │how to turn square root into fraction │ │what is the cubed root of 16 │trig graphing paper │how to solve exponential limits │College tutoring │ │It is a measure of chemical stability that can be used to │solving radical equations with │ │discussion and concept of special product of │ │predict and interpret phase changes and chemical │variables │simplifying radicals products and quotients. │algebra │ │reactions. │ │ │ │ │free download brsb aptitude test papers │how do you know if an expression is │logarithmic equation │apply square root method calculator │ │ │simplified │ │ │ │LAPLACE IN ALGEBRATOR │adding integers game │how to to make polynomial equation into ellipse │square root expressions │ │ │ │equation │ │ │fourth grade - free long division worksheets │stretch factor referring to quadratic │quadratic expressions calculator │exponential simplification │ │ │equations │ │ │ │how to solve non homogeneous equations in matlab │7th grade math pre algebra │finding radicals │square root calculator │ │grade 9 math worksheets │foiling calculator │y6 math problem │printable trigonometry table including minutes │ │math trivia in trigonometry │matlab solve two nonlinear simultaneous│constant differences │mixed number │ │ │equations │ │ │ │online algebra calculator │solve by the elimination method │printable fractions test │use substitution to factor polynomial │ │ │calculator online │ │ │ │joint variation math solver │the problem solver book mathematics │algebra an integrated approach book online │slope formula ti-83 step by step │
{"url":"http://softmath.com/math-com-calculator/graphing-inequalities/free-simplify-exponential.html","timestamp":"2014-04-21T12:39:55Z","content_type":null,"content_length":"156078","record_id":"<urn:uuid:09626ee9-f261-4913-a276-57f67feb38f8>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
This module provides functions useful for implementing new MonadRandom and RandomSource instances for state-abstractions containing StdGen values (the pure pseudorandom generator provided by the System.Random module in the "random" package), as well as instances for some common cases. data StdGen The StdGen instance of RandomGen has a genRange of at least 30 bits. The result of repeatedly using next should be at least as statistically robust as the Minimal Standard Random Number Generator described by [System.Random, System.Random]. Until more is known about implementations of split, all we require is that split deliver generators that are (a) not identical and (b) independently robust in the sense just given. The Show and Read instances of StdGen provide a primitive way to save the state of a random number generator. It is required that read (show g) == g. In addition, reads may be used to map an arbitrary string (not necessarily one produced by show) onto a value of type StdGen. In general, the Read instance of StdGen has the following properties: • It guarantees to succeed on any string. • It guarantees to consume only a finite portion of the string. • Different argument strings are likely to result in different results. Read StdGen Show StdGen RandomGen StdGen (Monad m, ModifyRef (IORef StdGen) m StdGen) => RandomSource m (IORef StdGen) (Monad m, ModifyRef (STRef s StdGen) m StdGen) => RandomSource m (STRef s StdGen) (Monad m1, ModifyRef (Ref m2 StdGen) m1 StdGen) => RandomSource m1 (Ref m2 StdGen) Monad m => MonadRandom (StateT StdGen m) Monad m => MonadRandom (StateT StdGen m) mkStdGen :: Int -> StdGen The function mkStdGen provides an alternative way of producing an initial generator, by mapping an Int into a generator. Again, distinct arguments should be likely to produce distinct generators. newStdGen :: IO StdGen Applies split to the current global random generator, updates it with one of the results, and returns the other. getRandomPrimFromRandomGenRef :: (Monad m, ModifyRef sr m g, RandomGen g) => sr -> Prim a -> m aSource Given a mutable reference to a RandomGen generator, we can make a RandomSource usable in any monad in which the reference can be modified. See Data.Random.Source.PureMT.getRandomPrimFromMTRef for more detailed usage hints - this function serves exactly the same purpose except for a StdGen generator instead of a PureMT generator. getRandomPrimFromRandomGenState :: forall g m a. (RandomGen g, MonadState g m) => Prim a -> m aSource Similarly, getRandomWordFromRandomGenState x can be used in any "state" monad in the mtl sense whose state is a RandomGen generator. Additionally, the standard mtl state monads have MonadRandom instances which do precisely that, allowing an easy conversion of RVars and other Distribution instances to "pure" random variables. Again, see Data.Random.Source.PureMT.getRandomPrimFromMTState for more detailed usage hints - this function serves exactly the same purpose except for a StdGen generator instead of a PureMT
{"url":"http://hackage.haskell.org/package/random-source-0.3.0.6/docs/Data-Random-Source-StdGen.html","timestamp":"2014-04-17T05:51:00Z","content_type":null,"content_length":"14692","record_id":"<urn:uuid:ef9c2869-3014-4125-8154-3fd601fb53fa>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
Entanglement witness From Wikipedia, the free encyclopedia In quantum information theory, an entanglement witness is a functional which distinguishes a specific entangled state from separable ones. Entanglement witnesses can be linear or nonlinear functionals of the density matrix. If linear, then they can also be viewed as observables for which the expectation value of the entangled state is strictly outside the range of possible expectation values of any separable state. Let a composite quantum system have state space $H_A \otimes H_B$. A mixed state ρ is then a trace-class positive operator on the state space which has trace 1. We can view the family of states as a subset of the real Banach space generated by the Hermitian trace-class operators, with the trace norm. A mixed state ρ is separable if it can be approximated, in the trace norm, by states of the form $\xi = \sum_{i=1} ^k p_i \, \rho_i^A \otimes \rho_i^B,$ where $\rho_i^A$'s and $\rho_i^B$'s are pure states on the subsystems A and B respectively. So the family of separable states is the closed convex hull of pure product states. We will make use of the following variant of Hahn–Banach theorem: Theorem Let $S_1$ and $S_2$ be disjoint convex closed sets in a real Banach space and one of them is compact, then there exists a bounded functional f separating the two sets. This is a generalization of the fact that, in real Euclidean space, given a convex set and a point outside, there always exists an affine subspace separating the two. The affine subspace manifests itself as the functional f. In the present context, the family of separable states is a convex set in the space of trace class operators. If ρ is an entangled state (thus lying outside the convex set), then by theorem above, there is a functional f separating ρ from the separable states. It is this functional f, or its identification as an operator, that we call an entanglement witness. There are more than one hyperplane separating a closed convex set and a point lying outside of it. So for an entangled state there are more than one entanglement witnesses. Recall the fact that the dual space of the Banach space of trace-class operators is isomorphic to the set of bounded operators. Therefore we can identify f with a Hermitian operator A. Therefore, modulo a few details, we have shown the existence of an entanglement witness given an entangled state: Theorem For every entangled state ρ, there exists a Hermitian operator A such that $\operatorname{Tr}(A \, \rho) < 0$, and $\operatorname{Tr}(A \, \sigma) \geq 0$ for all separable states σ. When both $H_A$ and $H_B$ have finite dimension, there is no difference between trace-class and Hilbert–Schmidt operators. So in that case A can be given by Riesz representation theorem. As an immediate corollary, we have: Theorem A mixed state σ is separable if and only if $\operatorname{Tr}(A \, \sigma) \geq 0$ for any bounded operator A satisfying $\operatorname{Tr}(A \cdot P \otimes Q) \geq 0$, for all product pure state $P \otimes Q$. If a state is separable, clearly the desired implication from the theorem must hold. On the other hand, given an entangled state, one of its entanglement witnesses will violate the given condition. Thus if a bounded functional f of the trace-class Banach space and f is positive on the product pure states, then f, or its identification as a Hermitian operator, is an entanglement witness. Such a f indicates the entanglement of some state. Using the isomorphism between entanglement witnesses and non-completely positive maps, it was shown (by the Horodecki's) that Theorem A mixed state $\sigma \in L(H_A) \otimes L(H_B)$ is separable if for every positive map Λ from bounded operators on $H_B$ to bounded operators on $H_A$, the operator $(I_A \otimes \Lambda)(\ sigma)$ is positive, where $I_A$ is the identity map on $\; L (H_A)$, the bounded operators on $H_A$. • Terhal, Barbara M. (2000). "Bell inequalities and the separability criterion". Physics Letters A 271 (5-6): 319–326. arXiv:quant-ph/9911057. Bibcode:2000PhLA..271..319T. doi:10.1016/S0375-9601 (00)00401-1. ISSN 0375-9601. Also available at quant-ph/9911057 • R.B. Holmes. Geometric Functional Analysis and Its Applications, Springer-Verlag, 1975. • M. Horodecki, P. Horodecki, R. Horodecki, Separability of Mixed States: Necessary and Sufficient Conditions, Physics Letters A 223, 1 (1996) and arXiv:quant-ph/9605038 • Z. Ficek, "Quantum Entanglement Processing with Atoms", Appl. Math. Inf. Sci. 3, 375–393 (2009). • Barry C. Sanders and Jeong San Kim, "Monogamy and polygamy of entanglement in multipartite quantum systems", Appl. Math. Inf. Sci. 4, 281–288 (2010). • Gühne, O.; Tóth, G. (2009). "Entanglement detection". Phys. Rep. 474: 1–75. arXiv:0811.2803. Bibcode:2009PhR...474....1G. doi:10.1016/j.physrep.2009.02.004.
{"url":"http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=Entanglement_witness","timestamp":"2014-04-21T14:59:19Z","content_type":null,"content_length":"71075","record_id":"<urn:uuid:d34241bb-1e61-4f5f-a5f0-b2c277ff7995>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: DIMACS: Series in Discrete Mathematics and Theoretical Computer Science 1995; 441 pp; hardcover Volume: 20 ISBN-10: 0-8218-0239-9 ISBN-13: 978-0-8218-0239-7 List Price: US$119 Member Price: US$95.20 Order Code: DIMACS/20 This book grew out of the fourth Special Year at DIMACS, which was devoted to the subject of combinatorial optimization. During the special year, a number of workshops, small and large, dealt with various aspects of this theme. Organizers of the workshops and selected participants were asked to write surveys about the hottest results and ideas in their fields. Therefore, this book is not a set of conference proceedings but rather a carefully refereed collection of invited survey articles written by outstanding researchers. Aimed at researchers in discrete mathematics, operations research, and the theory of computing, this book offers an in-depth look at many topics not treated in textbooks. Co-published with the Center for Discrete Mathematics and Theoretical Computer Science beginning with Volume 8. Volumes 1-7 were co-published with the Association for Computer Machinery (ACM). Researchers and graduate students in discrete mathematics, operations research, and the theory of computing. • M. Deza, V. P. Grishukhin, and M. Laurent -- Hypermetrics in geometry of numbers • M. Jünger, G. Reinelt, and S. Thienel -- Practical problem solving with cutting plane algorithms in combinatorial optimization • L. Lovász -- Randomized algorithms in combinatorial optimization • S. Poljak and Z. Tuza -- Maximum cuts and largest bipartite subgraphs • Y. Pochet and L. A. Wolsey -- Algorithms and reformulations for lot sizing problems • H. Ripphausen-Lipa, D. Wagner, and K. Weihe -- Efficient algorithms for disjoint paths in planar graphs • D. B. Shmoys -- Computing near-optimal solutions to combinatorial optimization problems • G. Simonyi -- Graph entropy: A survey
{"url":"http://ams.org/bookstore-getitem/item=DIMACS-20","timestamp":"2014-04-16T19:31:51Z","content_type":null,"content_length":"15943","record_id":"<urn:uuid:9665873e-3db4-4924-8fc8-a9f3faf8f46f>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
Griffith, CA Precalculus Tutor Find a Griffith, CA Precalculus Tutor ...I use pictures, colors, and graphs to make the subject easy to understand and fun. I have played the piano for over 30 years and I have tutored students in beginning piano. I am a patient, friendly, and enthusiastic teacher. 16 Subjects: including precalculus, French, calculus, algebra 1 ...Depending on your work load I can assign homework that will challenge your ability to solve hard problems so you can keep progressing. I hold my standards very high. I will give you my best ability to handle any problem. 9 Subjects: including precalculus, calculus, algebra 1, algebra 2 ...If you really want to master difficult topics or perform up to your potential on standardized tests like the MCAT, learning how to apply your knowledge under stress is essential. Also, I actually like teaching so I'll always show up with a smile on my face though I of course individually tailor ... 18 Subjects: including precalculus, chemistry, reading, biology ...Needless to say, that last exam was a disaster for all the class. While “Diffy-Q’s” subject matter is tough enough on its own, the course will also uncover any weakness you might have in your basic calculus and algebra skills as well. My talents as a tutor allow me to not only help you with the... 45 Subjects: including precalculus, English, chemistry, ASVAB ...Every student I have is important to me and I always give my best effort. My focus is expert, patient, private, high quality, in-home coaching in mathematics at all levels for middle school on up to college level students. I show up prepared, on time, and ready to lead you through your lesson. 17 Subjects: including precalculus, geometry, ASVAB, algebra 1 Related Griffith, CA Tutors Griffith, CA Accounting Tutors Griffith, CA ACT Tutors Griffith, CA Algebra Tutors Griffith, CA Algebra 2 Tutors Griffith, CA Calculus Tutors Griffith, CA Geometry Tutors Griffith, CA Math Tutors Griffith, CA Prealgebra Tutors Griffith, CA Precalculus Tutors Griffith, CA SAT Tutors Griffith, CA SAT Math Tutors Griffith, CA Science Tutors Griffith, CA Statistics Tutors Griffith, CA Trigonometry Tutors Nearby Cities With precalculus Tutor Briggs, CA precalculus Tutors Cimarron, CA precalculus Tutors Glassell, CA precalculus Tutors Glendale Galleria, CA precalculus Tutors La Canada, CA precalculus Tutors Magnolia Park, CA precalculus Tutors Oakwood, CA precalculus Tutors Playa, CA precalculus Tutors Rancho La Tuna Canyon, CA precalculus Tutors Santa Western, CA precalculus Tutors Sherman Village, CA precalculus Tutors Starlight Hills, CA precalculus Tutors Toluca Terrace, CA precalculus Tutors Vermont, CA precalculus Tutors Westwood, LA precalculus Tutors
{"url":"http://www.purplemath.com/Griffith_CA_precalculus_tutors.php","timestamp":"2014-04-16T16:35:56Z","content_type":null,"content_length":"24291","record_id":"<urn:uuid:66dbcd38-cdd1-4212-826b-1182c7a9df95>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/violy/answered","timestamp":"2014-04-21T02:20:34Z","content_type":null,"content_length":"113822","record_id":"<urn:uuid:60398b9d-fa88-405f-a0c1-0b697040755d>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00480-ip-10-147-4-33.ec2.internal.warc.gz"}
Decimal Fractions Terminating Decimals and Repeating Decimals Explain the meaning of the values stated in the following sentence. The gas can has a capacity of 4.17 gallons and weighs 3.4 pounds. The values represent four and seventeen hundredths and three and four-tenths. In other words, the decimals are another way to write the mixed numbers Fractional Parts in the Place Value System Earlier we learned about base-ten place value and how it is used to write whole number values. Now we extend the place-value concepts for the base-ten system to fractional parts of a whole. In that earlier lesson, we noticed how each column in the place value system represents a value ten times greater than the value of the column immediately to the right. Since that time, we have studied exponents and now see that these place values are consecutive "powers of 10". If we continue the patterns of each column being ten times the value of the column to the right, the next column to the right would have to be a value that when it is multiplied by 10 would equal the value of the last column, which is 1. This means the value of the next column to the right must be tenths, since one-tenth". We call the first place-value after the decimal point the tenths position. Notice that the decimal place values end in -ths. Tens are whole values and tenths are fractional parts of the whole. If we extend to the right one more column, that value would have to be the value that when we multiply it by ten it would equal one-tenth. That means the next column to the right would be the hundredths, since Hundreds are whole values and hundredths are fractional parts of the whole. If we continue this process, we can extend the place value table out as far as we desire. As long as we have the table with the columns labeled, we can tell which column has which value. But when we write numbers without the table labels, we need to know where the place values change from whole numbers to fractions. That is the role of the decimal point. The decimal point separates the place values that are whole values on the left from the place values that are fractional parts on the right, as illustrated in the table below. Note the thousandths position has a picture of the Missouri mill token; for more information on mill tokens used for taxes, see Mill (currency) - Wikipedia, the free encyclopedia. Relating to Reciprocals It is interesting to note that if we continue the pattern of the exponents, the tenths' column corresponds to 10^–1, the hundredths' column corresponds to 10^–2, etc. This is consistent with the properties of exponents and operations with integers, which we will discuss further when we discuss integers. Note that the negative exponent could be interpreted as the reciprocal of the value, e.g., the reciprocal of 10 is ^–1 is the reciprocal of 10 since Writing Decimal Fractions To write eight-tenths using decimal place value, the digit 8 is placed in the tenths' column. When we transfer the value out of the table, we need to include the decimal point. For better clarity and readability, when there are no whole number values, it is best to put a zero in front of the decimal point to indicate that there are no whole number values. Note that we inserted a zero in the numeral where the table has an empty position since this simply says that position has no value. When we read decimal fractions (decimals) out loud or write them in words, the word and is placed where the decimal point occurs. In the table above, twenty and sixteen thousandths is an example of this. Example: Write 127.836 in words, expanded fraction form, expanded decimal form and expanded exponential form. words: one hundred twenty-seven and eight hundred thirty-six thousandths fraction expanded form: decimal expanded form: 127.836 = 1 · 100 + 2 · 10 + 7 · 1 + 8 · 0.1 + 3 · 0.01 + 6 · 0.001 exponential expanded form: 127.836 = 1 · 10^2 + 2 · 10^1 + 7 · 10^0 + 8 · 10^–1 + 3 · 10^–2 + 6 · 10^–3 Note that 0.500 = 0.50 = 0.5, which follows from simplifying common fractions significant digits (Significant figures - Wikipedia, the free encyclopedia) need to be expressed, such has in scientific Self-Check Problem Write 305.0026 in words, expanded fraction from, expanded decimal form, and expanded exponential form. words Solution fraction expanded form: Solution decimal expanded form: Solution exponential expanded form: Solution Convert Common Fractions to Decimal Fractions All common fractions can be written in decimal form. Some common fractions are easy to visualize as decimal fractions. For instance, we rewrite each of the common fractions, in the following examples, as a decimal by changing to an equivalent fraction that has denominator that is a power of ten. We use the fact that the prime factorization of each denominator has prime factors of 2 or 5, and the fact that the product of two and five is ten. The above method works well when the denominator has only 2 or 5 as prime factors, but is there another method to convert common fractions? Another method to change a common fraction to a decimal fraction is to use the division interpretation of the fraction and continue the division on past the decimal point. In the example below, the division results in a terminating decimal. That means that it divides out completely, eventually having a remainder of zero. Examples: To show To show To show What about common fractions where the denominator has a prime factor other than 2 or 5? Self-Check Problem Repeating Decimals (Nonterminating Decimals) Some common fractions, when we divide them out, never have a remainder of zero, instead the quotient forms a nonzero repeating pattern that never ends. The fraction Important Note: Do not say or write Most repeating decimals have several digits that repeat. For instance, Example: Express three-sevenths in decimal form. Hence, we have Repeating decimal - Wikipedia, the free encyclopedia Self-Check Problem Express five-sixths in decimal form. Convert Decimal Fractions to Common Fraction Terminating Decimals Decimals that are not repeating decimals are called terminating decimals. Terminating decimals represent fractions that can be expressed with a denominator that is a power of 10. To change a terminating decimal to an equivalent common fraction, all we need to do is use place value to determine the denominator. Suggestion: Verbalize the decimal by reading it correctly, that is, read 0.025 as twenty-five thousandths. Self-Check Problems Convert 0.35 and 0.175 to a simplified common fractions. Solution for 0.35 --------------- Solution for 0.175 Repeating Decimals Repeating decimals can be changed to common fractions using an algebraic method. The method is to multiply the number by a power of ten that shifts the decimal point the number of place values that are repeating, and then subtract the number from that new value. This will cancel out the repeating portion. Example: We change First we notice that multiplying by 10 moves the value one place value to the left in the place value table. For example, 7(10) =70 and this places the digit 7 one place value to the left of where it was in the original value of 7. Or, 0.77(10) = 7.7. We then subtract these two values, and the repeating parts subtract completely off. We solve this equation for x to obtain Example: We change First we notice that multiplying by 100 moves the two place values to the right, i.e., We then solve this equation for x and simplify to obtain Example: We change Since there are only two positions repeating (the 32), we multiply by 100. After solving the equation and simplifying, we obtain Self-Check Problems Convert each repeating decimal to a simplified common fraction. Joke or Quote The polite mathematican says, "You go to infinity — and don't hurry back."
{"url":"http://web.mnstate.edu/peil/MDEV102/U3/S26/S26_print.html","timestamp":"2014-04-19T14:40:33Z","content_type":null,"content_length":"19883","record_id":"<urn:uuid:ab18a382-36de-4825-9e96-eb98c55e4be7>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
Orthogonal Drawings with Few Layers Biedl, Therese and Johansen, John and Shermer, Thomas and Wood, David R. (2002) Orthogonal Drawings with Few Layers. In: Graph Drawing 9th International Symposium, GD 2001, September 23-26, 2001, Vienna, Austria , pp. 297-311 (Official URL: http://dx.doi.org/10.1007/3-540-45848-4_24). Full text not available from this repository. In this paper, we study 3-dimensional orthogonal graph drawings. Motivated by the fact that only a limited number of layers is possible in VLSI technology, and also noting that a small number of layers is easier to parse for humans, we study drawings where one dimension is restricted to be very small. We give algorithms to obtain point-drawings with 3 layers and 4 bends per edge, and algorithms to obtain box-drawings with 2 layers and 2 bends per edge. Several other related results are included as well. Our constructions have optimal volume, which we prove by providing lower Repository Staff Only: item control page
{"url":"http://gdea.informatik.uni-koeln.de/517/","timestamp":"2014-04-18T06:20:29Z","content_type":null,"content_length":"40151","record_id":"<urn:uuid:35dba06e-7d44-48c7-b890-6d2170b63294>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry: Measurements The perimeter of a circle is called a special name other than perimeter: circumference. The circumference of a circle is the length of the curve that encloses that circle. A circle is defined by only two things: its center and its radius. Two circles with the same center and the same radius are the same circle. Therefore, the circumference of a circle must depend on one of these, or both. In fact, the circumference is dependent solely on the radius of a circle: circumference equals 2Πr , where r denotes the length of the radius. Another way to state the formula is Πd , where d denotes the length of the diameter of the circle, which is, of course, twice that of the radius. A clever way to remember the formula for circumference is with the sentence "See two pies run." This sentence corresponds to written version of the formula, C = 2Πr . Another way to think of the curve that encloses a circle is through the 360 degree arc of that curve. Thus, the circumference of a circle is the length of the 360 degree arc of that circle. Since we know that the circumference of a 360 degree arc is 2Πr , where r is the length of the radius, we can calculate the length of various arcs of a circle, provided that we know the radius of such a circle. For example, the length of a 180 degree arc must be half the circumference of the circle, the product of pi and the radius. The length of any arc is equal to whatever fraction of a full rotation the arc spans multiplied by the circumference of the circle. A 45 degree arc, for example, spans one-eighth of a full rotation, and is therefore equal to one-eighth the circumference of that circle. The length of an arc of n degrees equals (n/360) times the circumference. Below these concepts are pictured. Figure %: A 30 degree arc equals one-twelfth the circumference of the circle
{"url":"http://www.sparknotes.com/math/geometry2/measurements/section2.rhtml","timestamp":"2014-04-18T18:18:53Z","content_type":null,"content_length":"52627","record_id":"<urn:uuid:9d1200c1-28b3-45e8-92a3-73e2a9f1528e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
Making a hole in a Fermi sea It is not uncommon in physics that a simple, straightforward question requires the development of considerable theoretical machinery and intuition before an answer is approached. Such an elementary question is the following: In an infinitely extended, noninteracting ideal Fermi gas, how does the energy of the gas change if there is a localized perturbation in its otherwise constant density? Many approaches based on perturbation theory and semiclassical approximations exist to solve this problem, but their accuracy is not known a priori. Now, in a paper published in Physical Review Letters, Rupert Frank and collaborators at Princeton, McGill University, Canada, and the University of Cergy-Pontoise, France, provide, for the first time, a rigorous answer. In particular, they show that for spatial dimensions greater than or equal to two, the well-known semiclassical approximation provides a lower bound to the correct quantum mechanical energy of the perturbed Fermi sea, up to a universal constant. Given that the noninteracting Fermi gas is one of the fundamental models in physics and it is used to understand systems in astrophysics, condensed matter, and cold atom physics, their result is expected to touch upon many different fields. –Alex Klironomos
{"url":"http://physics.aps.org/synopsis-for/print/10.1103/PhysRevLett.106.150402","timestamp":"2014-04-17T07:29:02Z","content_type":null,"content_length":"4358","record_id":"<urn:uuid:defe7d7b-f90b-43e8-8d82-5eaabc65561c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
Industrial Mathematics Project for High School Students Industrial Mathematics Project for High School Students - Standards Follow the links below to see how these projects are aligned with established educational standards. The industrial mathematics projects were designed with educational standards in mind: they can fit into any mathematics curriculum. To help teachers justify using industrial mathematics projects in their classroom, the projects were mapped to both the Massachusetts Mathematics Curriculum Framework and the Principles and Standards for School Mathematics. We chose the Massachusetts educational guide because seventy percent of the teachers participating in the MII (Mathematics in Industry Institute) summer workshop are from Massachusetts. The NCTM (National Council of Teaches of Mathematics) standards are included for additional support and for those teachers not from Massachusetts. A matrix summarizes the alignment of each project with these standards: Included in the matrix are the industrial mathematics project titles, a brief description of the projects, and suggested classes for which the projects can be used. Lastly, the matrix includes the standards and expectations from Massachusetts and NCTM. Below is the breakdown of how the projects were linked to the standards and expectations from Massachusetts and NCTM. The Massachusetts Mathematics Curriculum Framework portion of the matrix focused on the Areas of Application section within each project. After looking at the Areas of Application, we decided the math classes in which the projects would most likely be used. We then mapped the Areas of Application onto the appropriate learning standards in the Massachusetts Mathematics Curriculum Framework for grades 8, 10, and 12. If the project was designed for upper-level math courses, we did not include any learning standards below grade 10, and if the project was for lower-level math courses, we did not include any learning standards above grade 10. To create The NCTM portion of the matrix, we numbered the learning standards and expectations listed in the Principles and Standards for School Mathematics. The NCTM learning standards were separated into four grade clusters: Pre-K – 2, 3 – 5, 6 – 8, 9 – 12. We only focused on grades 6 – 12 and labeled any expectation that fell in grades 6 – 8 as grade 8 and any expectation that fell in grades 9 – 12 as grade 12. After they were numbered, we used the Areas of Application, as well as the learning standards mapped to the Massachusetts Mathematics Curriculum Framework matrix, in order to determine the NCTM expectations covered in each project. Maintained by webmaster@wpi.edu Last modified: Jun 20, 2010, 09:03 EDT
{"url":"http://www.wpi.edu/academics/math/CIMS/IMPHSS/standards.html","timestamp":"2014-04-18T08:01:50Z","content_type":null,"content_length":"7410","record_id":"<urn:uuid:9b0c92eb-0f2a-49aa-a0aa-d35b4d725820>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
Radioactive decay formula Wouldn't there have to be a negative sign in there somewhere >_> I believe you have to use integrals to solve that, which I haven't done yet. yeah it should have a minus sign, good! :-) Solving this: [tex] \int N ^{-1}dN = - \int \lambda dt [/tex] [tex] \ln(N(t)) - \ln(N(0)) = -\lambda t [/tex] [tex] \ln(N(t)/N(0)) = -\lambda t [/tex] [tex] N(t)/N(0) = e^{-\lambda t } [/tex] [tex] N(t) = N(0) e^{-\lambda t } [/tex] Lambda is the number of decays per unit time, is related to half life by: [tex] \lambda = \frac{\ln 2}{T_{1/2}} [/tex]
{"url":"http://www.physicsforums.com/showthread.php?t=238914","timestamp":"2014-04-20T14:16:31Z","content_type":null,"content_length":"50961","record_id":"<urn:uuid:3bdb74f0-be70-4be9-a33d-42e5de52124e>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
A general Lipschtiz potential can be specified by a Gibbs specification ? up vote 3 down vote favorite I want to consider one-dimensional system on the lattice $\mathbb{L}=\mathbb{N}$. Let be $A:(\mathbb{S}^1)^{\mathbb{L}}\to\mathbb{R}$ a lipschtiz potential. Consider the Ruelle operator $$ \mathcal {L}_{A}(\psi)(x)=\int_{\mathbb{S}^1} e^{A(ax)} \psi(ax)\ da $$ where $x\in(\mathbb{S}^1)^{\mathbb{N}}$ and $ax=(a,x_1,x_2,\ldots)\in (\mathbb{S}^1)^{\mathbb{N}}$ and $da$ is the normalized Lebesgue measure on $\mathbb{S}^1$. Let $\mu$ the Gibbs measure constructed from $\mathcal{L}_A$ in the standard way. Let $\Lambda_n=[0,\ldots,n]\cap\mathbb{Z}$ and $\mathscr{T}_{\Lambda_n}$ the external $\sigma$-algebra as defined in the Georgii's book. Question. Is there any Gibbs specification $\gamma=(\gamma_{\Lambda_n})_{n\in\mathbb{N}}$ such that $$ \mu(A|\mathscr{T}_{\Lambda_n})(x)=\gamma_{\Lambda_n}(A|x) \quad \mu-\text{a.s.} ? $$ The answer is simple if $A$ is a potential depending only on finite number of coordinates or other words, if $A$ is a short range potential. Since I am considering $A$ is lipschtiz, it seems reasonable to fix some configuration $\omega_0\in\mathbb{S}^{\mathbb{N}}$ and consider a sequence of truncated potentials as follows $$ \Phi_n=(\Phi^n_{\Gamma})_{\Gamma\subset \mathbb{N}}, $$ where $$ \Phi^n_{\Gamma}(x)= \left\{ \begin{array}{rl} -A(x_1,\ldots,x_{n},\omega_{n+1},\ldots),& \text{if}\ \Gamma=\{1,\ldots,n\}; \\ 0,&\ text{otherwise}. \end{array} \right. $$ Now we consider the respective specifications given by these potentials and ask if the unique $\mu_n\in\mathcal{G}(\Phi_n)$, converges to the measure $\mu$ for any choice of $\omega_0$. The main motivation to post this question is to know if there is a standard procedure to obtain the measure $\mu$ from the specification point of view. I also appreciate any comments about the aproximatoin scheme I described above. Any help or reference is welcome. Thanks. mp.mathematical-physics statistical-physics ergodic-theory add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged mp.mathematical-physics statistical-physics ergodic-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/66845/a-general-lipschtiz-potential-can-be-specified-by-a-gibbs-specification","timestamp":"2014-04-20T13:37:43Z","content_type":null,"content_length":"49169","record_id":"<urn:uuid:d0a16820-e079-4177-8570-4b253b157375>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
pros and cons of counseling theoretical approaches d33007 vg31 diagram free articles on nursing education and theoretical frameworks theoretical framework about accounting system theoretical yield vs experimental yield physics florida game breed pitbulls theoretical value for albumin heat combustion careers involving theoretical probability sample theoretical propositions in research study explain theoretical concepts meaning of theoretical triangulation research theoretical background enrollment system theoretical perspectives on families theoretical framework statistics picture of ann hamric theoretical model theoretical background of festival related violence in the philippines theoretical framework applicable in pain relief in labour primary theoretical perspective theoretical framework payroll system examples of theoretical framework in a qualitative study theoretical counseling models nursing theoretical framework in e learning theoretical framework of enrollment system what is the theoretical orientation of child aggression scale curriculum theoretical frameworks conceptual and theoretical framework of computer literacy theoretical methods of child abuse what is theoretical saturation theoretical probability powerpoint for middle school what was jean piagets framework for his theoretical concept examples of theoretical research theoretical models in nursing research theoretical framework of payroll system how to compute theoretical latency asa theoretical perspective of abnormal psychology social theoretical equality questions define primarily theoretical theoretical framework in computerized payroll system advantages does a theoretical framework for nursing of families disabled structuralist functionalist theoretical perspective definition of theoretical assumptions theoretical perspective in abuse and vulnerability examples of international business theoretical perspective are grades empirical subjective or theoretical probability theoretical framework for child labour ppt formulation of theoretical framework in research applied vs theoretical studie best theoretical concept for job satisfaction thesis theoretical dissertation theoretical framework for childhood development powerpoint theoretical yield of benzoate define theoretical framework example thesis of theoretical and conceptual framework of a values tobacco theoretical ethanol conversion theoretical approaches in studying personality theoretical framework of web radio philippines substantive theoretical orientation definition the theoretical framework on definition of health theoretical background definition theoretical framework on african leadership download nursing theoretical frameworks theoretical perspective pictures the postmodern perspective is differ from other theoretical orientation with in the field of counseling theoretical framework on discrimination articles on theoretical family perspective what is a theoretical audit teaching mental health status theoretical frameworks for nurses theoretical and ethical implications of hrm in pakistan lesson plan of theoretical mechanics dissertation examples of theoretical perspective international business major theoretical congnitive theories powerpoint sample of theoretical conceptual framework pear theoretical approaches to learning theoretical background of simple payroll system theoretical framework of a research all about paradigms and theoretical frameworks in social sciences child development theoretical perspectives perspective theoretical orientation lesson on theoretical perspectives define theoretical assumptions theoretical framework vs theoretical background theoretical orientation in counseling We have kittens available!!!! how to calculate theoretical yield for dna define theoretical framework conceptual framework theoretical conceptual framework of mobile advertising social work theoretical frameworks sample theoretical framework on management of fast food theoretical framwork for derivatives nestle food plc nigeria and their conceptual theoretical framework theoretical and conceptual framework at year 2007 to 2010 in chatting via bluetooth using mobile phone computing theoretical probability of success in excel theoretical background of home automation theoretical background on designing dc motor control with h bridge mosfet theoretical sampling examples in qualitative research in hospital ppt theoretical concepts to a framework theoretical framework for using modernist perspective theoretical questions criminology theoretical framework computer payroll the creater of eclectic theoretical orientation eclectic theoretical orientation 4th grade lesson plans on experimental versus theoretical define theoretical framework in research Click here to see them example of theoretical orientation definition of theoretical saturation in statistics compare and contrast the five theoretical perspectives of child development theoretical framework in payroll theoretical probability worksheets ppt female criminality theoretical perspective define conceptual and theoretical models in nursing dorothea dix theoretical perspective theoretical framework of advertising biggest debate in theoretical physics writing test questions on theoretical basis for nursing theoretical framework for sped what is the sociological theoretical perspectives on paraphilias bahamas education science theoretical perspectives on curriculum theoretical framework in nursing theoretical framework for business planning samples of diagrams of conceptual and theoretical frameworks in language teaching theoretical approaches social work management samples of research paper theoretical framework biometric system payroll + conceptual or theoretical framework Norwegian Forest Cats example of theoretical framework paradigm + theoretical framework theoretical framework for job opportunity sample of theoretical framework for criminal behaviour school counselor theoretical orientation examples of theoretical perspective theoretical framework on internal controls methods of theoretical integration nursing education and theoretical frameworks theoretical paradigm theoretical framework of the pathophysiology of hemorrhagic dengue fever theoretical versus conceptual lens in qualitative research theoretical framework enrolment system nursing theoretical perspectives family health working out theoretical yield nitration of methyl benzoate theoretical framework for malnutrition sample paradigm experimental theoretical framework lesson plans the three theoretical perspectives in sociology how to wright theoretical background theoretical orientation conceptual v theoretical perspectives examples lesson plan theoretical probability deck of cards theoretical psychology perspectives conceptual and theoretical framework for payroll system theoretical framework of a payroll system samples of theoretical or conceptual framework charts and drawings four theoretical perspectives an introduction to theoretical geomorphology theoretical yield physics fun theoretical yield definition theoretical interview question police formation theoretical framework theoretical framework of agricultural productivity theoretical framework of automated tax payer software scanner theoretical yield of aspirin samples of diagrams of theoretical or conceptual frameworks on teaching english strategies what are the different theoretical perspective on domestic violence theoretical framework about happy filipinos types of theoretical frameworks for qualitative research theoretical definition advertising Magnus, Fenia, and Dunderberg lesson plan for theoretical probability deck of cards theoretical saturation difference between framework and theoretical perspective compare the 4 theoretical models of child abuse theoretical positions in sociology nursing theoretical perspective family health theoretical framework for internal control example of a thesis theoretical conceptual framework of malnutrition theoretical framework examples criminal behavior what are major theoretical underpinnings agriculture theoretical framework background of the study theoretical framework conceptual framework theoretical yeild fermentation of d glucose master s thesis proposal theoretical example education major theoretical perspectives in sociology on education theoretical framework definition for counseling easy ways to calculate theoretical yield define reseatch theoretical framework nursing theoretical framework study +pdf define theoretical college student orientation
{"url":"http://mainecooncat-norwegianforestcat.com/backbone-pros-and-cons-of-counseling-theoretical-approaches/","timestamp":"2014-04-16T07:31:17Z","content_type":null,"content_length":"23214","record_id":"<urn:uuid:e0ff2487-ff16-4c5b-a518-80ad5b7efe4d>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
Is there any known condition for the following property? up vote 0 down vote favorite For a mapping $f: \Omega\to \bf{R}^n$, what kind of condition ensures that the one-dimensional Hausdorff measure of $f^{-1}(E)$ is zero whenever $E$ is of zero one-dimensional Hausdorff measure zero. Note that f is not assumed to be a homeomorphism. 1 What is $\Omega$ here? – Tom Leinster Dec 5 '12 at 15:02 Since $f: \Omega \to \mathbb{R}^N$, $f^{-1}:f(\Omega) \to \Omega$, so if $E \subset f(\Omega)$, we should not be measuring the 1-D lebesgue measure, but rather the 1-D Hausdorff measure $H^1$. – Daniel Spector Dec 5 '12 at 17:32 Yes, you are right. It is better to use the Hausdorff measure H^1. \Omega is a domain in \bR^n and f is not necessarily to be a homeomorphism here. – Changyu Guo Dec 6 '12 at 8:11 10 This is the worst question title in the history of Math Overflow. It provides literally no information about what the question is about. – arsmath Dec 6 '12 at 12:14 add comment 1 Answer active oldest votes There may be a name for this, but it seems like a strange condition. Such a function cannot take a constant value on any set of positive Lebesgue measure, otherwise the inverse image of that constant (having zero 1-D Hausdorff measure in the range) would have positive Lebesgue measure, and therefore infinite 1-D Hausdorff measure. A good start might be to investigate the situation on maps $f:[0,1] \to \mathbb{R}$ with the Lebesgue measure in both places. up vote 1 down vote There is also a related notion, called Lusin's N property, which means $f$ takes sets of measure zero into sets of measure zero (as opposed to $f^{-1}$, as you desire). This is a quality of Lipschitz functions that Sobolev functions also inherit, and is necessary to satisfy the fundamental theorem of Calculus (along with being differentiable a.e., etc.). In the one-dimensional case suggested in this answer, the image measure (also called push-forward) $f_\ast\lambda (E) = \lambda(f^{-1}(E))$ is dominated by $\lambda$ hence, by Radon-Nikodym, it has a density. This may help to get something more concrete in particular cases. A problem with the higher dimensional case is that the one-dimensional Hausdorff measure is not $\sigma$-finite and Radon-Nikodym is not applicable. – Jochen Wengenroth Dec 7 '12 at 13:05 add comment Not the answer you're looking for? Browse other questions tagged geometric-measure-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/115504/is-there-any-known-condition-for-the-following-property?sort=votes","timestamp":"2014-04-20T08:32:24Z","content_type":null,"content_length":"57110","record_id":"<urn:uuid:888b7e15-5323-4e77-830d-94a10d482c55>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
Srinivasa Ramanujan Ramanujan’s life is full of strange contrasts. He had no formal training in mathematics but yet “he was a natural mathematical genius, in the class of Gauss and Euler.” Probably Ramanujan’s life has no parallel in the history of human thought. Godfrey Harold Hardy, (1877-1947), who made it possible for Ramanujan to go to Cambridge and give formal shape to his works, said in one of his lectures given at Harvard Universty (which later came out as a book entitled Ramanujan: Twelve Lectures on Subjects Suggested by His Life and Work): “I have to form myself, as I have never really formed before, and try to help you to form, some of the reasoned estimate of the most romantic figure in the recent history of mathematics, a man whose career seems full of paradoxes and contradictions, who defies all cannons by which we are accustomed to judge one another and about whom all of us will probably agree in one judgement only, that he was in some sense a very great mathematician.” Srinivasa Ramanujan Iyengar (best known as Srinivasa Ramanujan) was born on December 22, 1887, in Erode about 400 km from Chennai, formerly known as Madras where his mother’s parents lived. After one year he was brought to his father’s town, Kumbakonam. His parents were K. Srinivasa Iyengar and Komalatammal. He passed his primary examination in 1897, scoring first in the district and then he joined the Town High School. In 1904 he entered Kumbakonam’s Government College as F.A. student. He was awarded a scholarship. However, after school, Ramanujan’s total concentration was focussed on mathematics. The result was that his formal education did not continue for long. He first failed in Kumbakonam’s Government College. He tried once again in Madras from Pachaiyappa’s College but he failed again. While at school he came across a book entitled A Synopsis of Elementary Results in Pure and Applied Mathematics by George Shoobridge Carr. The title of the book does not reflect its contents. It was a compilation of about 5000 equations in algebra, calculus, trigonometry and analytical geometry with abridged demonstrations of the propositions. Carr had compressed a huge mass of mathematics that was known in the late nineteenth century within two volumes. Ramanujan had the first one. It was certainly not a classic. But it had its positive features. According to Kanigel, “one strength of Carr’s book was a movement, a flow to the formulas seemingly laid down one after another in artless profusion that gave the book a sly seductive logic of its own.” Thisbook had a great influence on Ramanujan’s career. However, the book itself was not very great. Thus Hardy wrote about the book: “He (Carr) is now completely forgotten, even in his college, except in so far as Ramanujan kept his name alive”. He further continued, “The book is not in any sense a great one, but Ramanujan made it famous and there is no doubt it influenced him (Ramanujan) profoundly”. We do not know how exactly Carr’s book influenced Ramanujan but it certainly gave him a direction. `It had ignited a burst of fiercely single-minded intellectual activity’. Carr did not provide elaborate demonstration or step by step proofs. He simply gave some hints to proceed in the right way. Ramanujan took it upon himself to solve all the problems in Carr’s Synopsis. And as E. H. Neville, an English mathematician, wrote : “In proving one formula, as he worked through Carr’s synopsis, he discovered many others, and he began the practice of compiling a notebook.” Between 1903 and 1914 he had three notebooks. While Ramanujan made up his mind to pursue mathematics forgetting everything else but then he had to work under extreme hardship. He could not even buy enough paper to record the proofs of his results. Once he said to one of his friends, “when food is problem, how can I find money for paper? I may require four reams of paper every month.” In fact Ramanujan was in a very precarious situation. He had lost his scholarship. He had failed in examination. What is more, he failed to prove a good tutor in the subject which he loved most. At this juncture, Ramanujan was helped by R. Ramachandra Rao, then Collector of Nellore. Ramchandra Rao was educated at Madras Presidency College and had joined the Provincial Civil Service in 1890. He also served as Secretary of the Indian Mathematical Society and even contributed solution to problem posed in its Journal. The Indian Mathematical Society was founded by V. Ramaswami Iyer, a middle-level Government servant, in 1906. Its Journal put Ramanujan on the world’s mathematical map. Ramaswami Iyer met Ramanujan sometime late in 1910. Ramaswami Iyer gave Ramanujan notes of introduction to his mathematical friends in Chennai (then Madras). One of them was P.V. Seshu Iyer, who earlier taught Ramanujan at the Government College. For a short period (14 months) Ramanujan worked as clerk in the Madras Port Trust which he joined on March 1, 1912. This job he got with the help of S. Narayana Iyer. Ramanujan’s name will always be linked to Godfrey Harold Hardy, a British mathematician. It is not because Ramanujan worked with Hardy at Cambridge but it was Hardy who made it possible for Ramanujan to go to Cambridge. Hardy, widely recognised as the leading mathematician of his time, championed pure mathematics and had no interest in applied aspects. He discovered one of the fundamental results in population genetics which explains the properties of dominant, and recessive genes in large mixed population, but he regarded the work as unimportant. Encouraged by his well-wishers, Ramanujan, then 25 years old and had no formal education, wrote a letter to Hardy on January 16, 1913. The letter ran into eleven pages and it was filled with theorems in divergent series. Ramanujan did not send proofs for his theorems. He requested Hardy for his advice and to help getting his results published. Ramanujan wrote : “I beg to introduce myself to you as a clerk in the Accounts Department of the Port Trust Office at Madras on a salary of only £ 20 per annum. I have had no university education but I have undergone the ordinary school course. After leaving school I have been employing the spare time at my disposal to work at mathematics. I have not trodden through the conventional regular course which is followed in a university course, but I am striking out a new path for myself. I have made a special investigation of divergent series in general and the results I get are termed by the local mathematicians as “startling“… I would request you to go through the enclosed papers. Being poor, if you are convinced that there is anything of value I would like to have my theorems published. I have not given the actual investigations nor the expressions that I get but I have indicated the lines on which I proceed. Being inexperienced I would very highly value any advice you give me “. The letter has become an important historical document. In fact, ‘this letter is one of the most important and exciting mathematical letters ever written’. At the first glance Hardy was not impressed with the contents of the letter. So Hardy left it aside and got himself engaged in his daily routine work. But then he could not forget about it. In the evening Hardy again started examining the theorems sent by Ramanujan. He also requested his colleague and a distinguished mathematician, John Edensor Littlewood (1885-1977) to come and examine the theorems. After examining closely they realized the importance of Ramanujan’s work. As C.P. Snow recounted, ‘before mid-night they knew and knew for certain’ that the writer of the manuscripts was a man of genius’. Everyone in Cambridge concerned with mathematics came to know about the letter. Many of them thought `at least another Jacobi in making had been found out’. Bertrand Arthur William Russell (1872-1970) wrote to Lady Ottoline Morell. “I found Hardy and Littlewood in a state of wild excitement because they believe, they have discovered a second Newton, a Hindu Clerk in Madras … He wrote to Hardy telling of some results he has got, which Hardy thinks quite Fortunately for Ramanujan, Hardy realised that the letter was the work of a genius. In the next three months Ramanujan received another three letters from Hardy. However, in the beginning Hardy responded cautiously. He wrote on 8 February 1913. To quote from the letter. “I was exceedingly interested by your letter and by the theorems which you state. You will however understand that, before I can judge properly of the value of what you have done it is essential that I should see proofs of some of your assertions … I hope very much that you will send me as quickly as possible at any rate a few of your proofs, and follow this more at your leisure by more detailed account of your work on primer and divergent series. It seems to me quite likely that you have done a good deal of work worth publication; and if you can produce satisfactory demonstration I should be very glad to do what I can to secure it” . In the meantime Hardy started taking steps for bringing Ramanujan to England. He contacted the Indian Office in London to this effect. Ramanujan was awarded the first research scholarship by the Madras University. This was possible by the recommendation of Gilbert Walker, then Head of the Indian Meteorological Department in Simla. Gilbert was not a pure mathematician but he was a former Fellow and mathematical lecturer at Trinity College, Cambridge. Walker, who was prevailed upon by Francis Spring to look through Ramanujan’s notebooks wrote to the Registrar of the Madras University : “The character of the work that I saw impressed me as comparable in originality with that of a Mathematical Fellow in a Cambridge College; it appears to lack, however, as might be expected in the circumstances, the completeness and precision necessary before the universal validity of the results could be accepted. I have not specialised in the branches of pure mathematics at which he worked, and could not therefore form a reliable estimate of his abilities, which might be of an order to bring him a European reputation. But it was perfectly clear to me that the University would be justified in enabling S. Ramanujan for a few years at least to spend the whole of his time on mathematics without any anxiety as to his livelihood.” Ramanujan was not very eager to travel abroad. In fact he was quite apprehensive. However, many of his well-wishers prevailed upon him and finally Ramanujan left Madras by S.S. Navesa on March 17, 1914. Ramanujan reached Cambridge on April 18, 1914. When Ramanujan reached England he was fully abreast of the recent developments in his field. This was described by J. R. Newman in 1968: “Ramanujan arrived in England abreast and often ahead of contemporary mathematical knowledge. Thus, in a lone mighty sweep, he had succeeded in recreating in his field, through his own unaided powers, a rich half century of European mathematics. One may doubt whether so prodigious a feat had ever been accomplished in the history of thought.” Today it is simply futile to speculate about what would have happened if Ramanujan had not come in contact with Hardy. It could happen either way. But then Hardy should be given due credit for recognizing Ramanujan’s originality and helping him to carry out his work. Hardy himself was very clear about his role. “Ramanujan was”, Hardy wrote, “my discovery. I did not invent him — like other great men, he invented himself — but I was the first really competent person who had the chance to see some of his work, and I can still remember with satisfaction that I could recognize at once what I treasure I had found.” It may be noted that before writing to Hardy, Ramanujan had written to two well-known Cambridge mathematicians viz., H.F. Baker and E.W. Hobson. But both of them had expressed their inability to help Ramanujan was awarded the B.A. degree in March 1916 for his work on ‘Highly composite Numbers’ which was published as a paper in the Journal of the London Mathematical Society. He was the second Indian to become a Fellow of the Royal Society in 1918 and he became one of the youngest Fellows in the entire history of the Royal Society. He was elected “for his investigation in Elliptic Functions and the Theory of Numbers.” On 13 October 1918 he was the first Indian to be elected a Fellow of Trinity College, Cambridge. Much of Ramanujan’s mathematics comes under the heading of number theory — a purest realm of mathematics. The number theory is the abstract study of the structure of number systems and properties of positive integers. It includes various theorems about prime numbers (a prime number is an integer greater than one that has not integral factor). Number theory includes analytic number theory, originated by Leonhard Euler (1707-89); geometric theory - which uses such geometrical methods of analysis as Cartesian co-ordinates, vectors and matrices; and probabilistic number theory based on probability theory. What Ramanujan did will be fully understood by a very few. In this connection it is worthwhile to note what Hardy had to say of the work of pure mathematicians: “What we do may be small, but it has certain character of permanence and to have produced anything of the slightest permanent interest, whether it be a copy of verses or a geometrical theorem, is to have done something beyond the powers of the vast majority of men.” In spite of abstract nature of his work Ramanujan is widely known. Ramanujan was a mathematical genius in his own right on the basis of his work alone. He worked hard like any other great mathematician. He had no special, unexplained power. As Hardy, wrote: “I have often been asked whether Ramanujan had any special secret; whether his methods differed in kind from those of other mathematicians; whether there was anything really abnormal in his mode of thought. I cannot answer these questions with any confidence or conviction; but I do not believe it. My belief that all mathematicians think, at bottom, in the same kind of way, and that Ramanujan was no Of course, as Hardy observed Ramanujan “combined a power of generalization, a feeling for form and a capacity for rapid modification of his hypotheses, that were often really startling, and made him, in his peculiar field, without a rival in his day. Here we do not attempt to describe what Ramanujan achieved. But let us note what Hardy had to say about the importance of Ramanujan’s work. “Opinions may differ as to the importance of Ramanujan’s work, the kind of standard by which it should be judged and the influence which it is likely to have on the mathematics of the future. It has not the simplicity and the inevitableness of the greatest work; it would be greater if it were less strange. One gift it shows which no one will deny—profound and invincible originality.” The Norwegian mathematician Atle Selberg, one of the great number theorists of this century wrote : “Ramanujan’s recognition of the multiplicative properties of the coefficients of modular forms that we now refer to as cusp forms and his conjectures formulated in this connection and their later generalization, have come to play a more central role in the mathematics of today, serving as a kind of focus for the attention of quite a large group of the best mathematicians of our time. Other discoveries like the mock-theta functions are only in the very early stages of being understood and no one can yet assess their real importance. So the final verdict is certainly not in, and it may not be in for a long time, but the estimates of Ramanujan’s nature in mathematics certainly have been growing over the years. There is doubt no about that.” Often people tend to speculate what Ramanujan would have achieved if he had not died or if his exceptional qualities were recognised at the very beginning. There are many instances of such untimely death of gifted persons, or rejection of gifted persons by the society or the rigid educational system. In mathematics
{"url":"http://www.vigyanprasar.gov.in/scientists/Ramanujan.HTM","timestamp":"2014-04-19T17:15:09Z","content_type":null,"content_length":"28728","record_id":"<urn:uuid:221deb2a-97a0-4e86-9735-c5b3a3d147e7>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
The awesome power of Bayesian Methods - What they didn't teach you in grad school. Part I For all the things we learned in grad school, Bayesian methods was something that was skimmed over. Strange too, as we learned all the computationally machinery necessary, but we were never actually shown the power of these methods. Let's start our explanation with an example where the Bayesian analysis clearly simply is more correct (in the sense of getting the right answer). The Table Game Bob and Alice both approach a table at a casino. The dealer at the table has chosen a number between 0 and 1 that stays fixed for the duration of the game, and is hidden from Alice and Bob. Each round, a new random number between 0 and 1 is produced. Alice scores a point if the random number falls below the dealer's hidden number, and Bob scores a point if the random number is above. The game ends when either Alice or Bob has scored 6 points. After 8 rounds, Alice has a score of 5 versus Bob's score of 3. Bob, a statistician, asks "What is the probability I win the game, given these two scores?" Frequentist Bob thinks: This is a simple problem. Alice's score is the same as a $Bin( 8, p)$ random variable, where I don't know the parameter $p$. Her score is observed to be $5$. The MLE of $p$ is $\hat{p} = 0.625 \pm 0.28$. In order for me to win the overall game, I must win the next three games, else Alice scores 6 points and wins the game. Thus, $P( \text{Bob wins} ) = ( 1- p)^3$, and by that useful invariance property of MLEs, I can estimate my probability to be $( 1- 5/8)^3 = 0.052$, with 95% confidence interval $( 0.008, 0.28 )$. Let me do some Python to confirm. I'll perform simulated games by 1. Randomly picking a $p$ with uniform probability. 2. Perform 8 rounds. If the game, after the 8th round, is 5vs3 for Alice, we simulate 3 more rounds: if any of them are in favour of Alice, she wins, else Bob wins. If the game is not 5vs3, we start a new game. 3. What is the proportion of games Bob wins versus Alice. from __future__ import division #Alice vs Bob in table game import random max_simulations = 1e5 simulation = 0 wins_alice = 0 wins_bob = 0 while simulation < max_simulations: #draw random p p = random.random() #draw eight trials Alice_wins = sum( [ random.random() < p for i in range(8) ] ) if Alice_wins == 5: simulation += 1 #This is case of 5vs3 in 8th round, lets check who wins by drawing three more if any( [ random.random() < p for i in range(3) ] ): wins_alice +=1 wins_bob +=1 print "Proportion of Alice wins: %.3f."%( wins_alice/max_simulations ) Hmm, I get the probability 9.1%, almost twice as much as Bob predicted. Maybe there weren't enough iterations, let me try again. max_simulations = 1e6 print "Proportion of Alice wins: %.3f."%( wins_alice/max_simulations ) Ok, so something is wrong with our estimation. The problem is that regardless of the true value $p$, the MLE estimator, given a 5vs3 game, always returns the same thing. Ask, what is the prior beliefs of Alice and Bob before coming to the table? Both can view $p$ as a random variable: the dealer may have chosen $p$ at random, hence it really is a random variable! But the frequentist Bob does not see it this way. In his mind, it is fixed. Let's explore what the Bayesian Bob thinks when the game hits 5vs3: Bayesian Bob thinks: I really had no idea what $p$ could be before I entered this game, so really to me, the value of $p$ was uniform over $[0,1]$. But now Alice leads 5 to 3. I should update what I think $p$ is after seeing this data. I still think this updated $p$ is a random variable, but some values of $p$ are more likely than others. So, the probability I win should be calculated considering all possible values of $p$: $$ \begin{aligned} P ( \text{Bob wins} ) &= \int_0^1 (1-p)^3P( p | X = 5, n = 8) \; dp \\ & = E_{P( p | X,n )} [ ( 1 - p)^3 ] \end{aligned} $$ This can be computed in closed form, but I'm not teaching a calculus tutorial, so I'll skip it. I'd rather use PyMC, a Python library for performing Bayesian analysis. It's a great, underused library, and I'll go into details about it my next Bayesian post. The code below is pretty self explanatory, except that we need to employ the mathematical phenomenon of Markov Chain Monte Carlo. If you are unfamiliar, that's okay, you don't need to know it for this. But there a lots of great tutorials available. Basically, what we want is to be able to sample from the posterior distribution, $P ( p | X = 5, n = 8 )$. If we have many sample from there, we can do pretty much anything (things I will get into in another post). But for now, let's compute that integral. It can be approximated by the sum: $$ E_{P( p | X,n )} [ ( 1 - p)^3 ] \approx \frac{1}{N} \sum_{i=0}^N (1 - p_i)^3, \; p_i \text{ comes from } P( p | X = 5, n = 8 )$$ import pymc as mc theta = mc.Uniform( "theta", 0, 1) obs = mc.Binomial( "obs", n = 8, value = 5, p = theta, observed = True) model = mc.Model( {"obs":obs, "theta": theta } ) mcmc = mc.MCMC( model ) #perform MCMC to generate (1000-2000)/2 samples. mcmc.sample( 100000, burn = 2000, thin=2) samples = mcmc.trace("theta").gettrace() #compute the integral above print "Bayesian Estimate: %.4f."%( (1 - samples)**3).mean() #Bayesian Estimate: 0.0902. Fuck yea. So, aside from the simulation error, we arrive at the right answer. FYI, the integral can be calculated and is equal to $1/11 = 0.9091$. A few other comments: It is meaningless to say "what is the estimated value of $p$?" in a Bayesian setting. Our analysis did not return an estimate of $p$: it returned a probability distribution. Using the samples from the MCMC, we can see what this distribution looks like: Meta-analysis of the table game In our Python simulation, we chose the value $p$ to be uniformly random. And for our Bayesian analysis, we assumed a uniformly random prior. Are we cheating? Not really. Regardless of how $p$ is generated, if the game is at 5 vs 3, the frequentist's MLE gives the same result, $\hat{p} = 5/8$. We could generate $p$ differently, say from a distribution where points around 1/2 are more likely, and our Bayesian answer would only change slightly, but still be more accurate than the MLE answer. On other hand, we can update our prior to reflect the $p$ is more likely to be around 1/2 if we know this (or believe this to be true). Some things I want to discuss in the next parts of this series: • Using Bayesian analysis to optimize loss functions • Solve the overfitting problem with Bayesian analysis • Eddy, Sean R. "What is Bayesian statistics?." (2004): 1177-1178. Other articles to enjoy: Follow me on Twitter at cmrn_dp
{"url":"http://camdp.com/blogs/awesome-power-ibayesian-methodsi-what-they-didnt-t","timestamp":"2014-04-20T15:51:49Z","content_type":null,"content_length":"91583","record_id":"<urn:uuid:b323a05c-a90e-4110-85bd-27ef8da4ac79>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding trig functions with one known function September 9th 2009, 02:02 PM Finding trig functions with one known function I have a review packet that will be almost exactly like a test I will have and I'm stuck on some problems and here is one that is like the rest: If sin0= -4/7 and cos0 > 0, find sec0 I'm mostly confused because of the negative sine since obviously you can't measure a line and get negative inches. Can someone show me how they can get the answer for this so I will be able to do similar problems to this on the test? September 9th 2009, 02:09 PM I have a review packet that will be almost exactly like a test I will have and I'm stuck on some problems and here is one that is like the rest: If sin0= -4/7 and cos0 > 0, find sec0 I'm mostly confused because of the negative sine since obviously you can't measure a line and get negative inches. Can someone show me how they can get the answer for this so I will be able to do similar problems to this on the test? The sine of an angle is the ratio of the opposite leg of the triangle divided by the hypotenuse. If this ratio is negative, it means that the opposite leg points downward. However this can happen in quadrants III and IV, so you need to use the fact that the cosine of the angle (adjacent divided by hypotenuse)is positive to figure out which quadrant the angle is in. September 9th 2009, 02:25 PM sort of... on my paper the negative symbol for the sine doesn't belong to the 4 but I doubt that makes a difference... if you don't mind could you give me the answer, then I'll know exactly how to do these types of problems September 9th 2009, 02:28 PM I'm sorry but I do mind. I will help you through the problem though. Cmon, you can do it! You know that sine, cosine and the other trig ratios all come from right triangle ratios right? Draw your x and y axes, then think about which quadrant the angle is in. Like I said because sin(x) <0, then leg parallel to the y-axis must be facing down, making it negative. Thus it has to be in the quadrant III or IV (bottom left or bottom right). Are you with me so far? September 9th 2009, 03:07 PM No... thanks for the help but I'd rather have a demonstration September 9th 2009, 03:53 PM
{"url":"http://mathhelpforum.com/pre-calculus/101378-finding-trig-functions-one-known-function-print.html","timestamp":"2014-04-20T23:32:19Z","content_type":null,"content_length":"8268","record_id":"<urn:uuid:81ddc252-9974-4ab8-adab-55cde2fbc958>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
Jersey Vlg, TX Algebra 2 Tutor Find a Jersey Vlg, TX Algebra 2 Tutor ...I received the AP Scholar award, became a member of the California Scholarship Federation, and received the scholarship athlete award. I have my soccer coaching license and have coached middle and high school teams. I gained tutoring experience as a math teaching assistant and as a private tutor for students at my high school. 22 Subjects: including algebra 2, chemistry, calculus, physics ...So that’s how it works!” moments as algebra becomes more familiar and understandable. Algebra 2 builds on the foundation of algebra 1, especially in the ongoing application of the basic concepts of variables, solving equations, and manipulations such as factoring. My approach in working with yo... 20 Subjects: including algebra 2, writing, algebra 1, logic I was born in Taiwan. I graduated from No.1 university in Taiwan, majored in Economics and came to the USA to pursue an MBA at Lamar University in 1988. I am a loving and patient Christian mom of three children. 12 Subjects: including algebra 2, reading, geometry, algebra 1 For the last ten years, I have been working as an adjunct instructor in the area of developmental mathematics at Houston Community College System (HCCS) and Lone Star College (Fairbanks Center). I teach all levels of the developmental math classes. I enjoy working in this environment, because the ... 8 Subjects: including algebra 2, algebra 1, GED, SAT math ...Becoming proficient in math and science requires directed practice and problem solving on the part of students and should be actively enjoyed by both the student and the tutor. There is nothing in science and math which cannot be explained to an open mind and these subjects are actually aimed at... 13 Subjects: including algebra 2, chemistry, geometry, ASVAB Related Jersey Vlg, TX Tutors Jersey Vlg, TX Accounting Tutors Jersey Vlg, TX ACT Tutors Jersey Vlg, TX Algebra Tutors Jersey Vlg, TX Algebra 2 Tutors Jersey Vlg, TX Calculus Tutors Jersey Vlg, TX Geometry Tutors Jersey Vlg, TX Math Tutors Jersey Vlg, TX Prealgebra Tutors Jersey Vlg, TX Precalculus Tutors Jersey Vlg, TX SAT Tutors Jersey Vlg, TX SAT Math Tutors Jersey Vlg, TX Science Tutors Jersey Vlg, TX Statistics Tutors Jersey Vlg, TX Trigonometry Tutors
{"url":"http://www.purplemath.com/jersey_vlg_tx_algebra_2_tutors.php","timestamp":"2014-04-16T10:12:10Z","content_type":null,"content_length":"24195","record_id":"<urn:uuid:22d60058-068a-4036-9ca2-c7366c5738a5>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
Robust statistics. Wiley series in probability and mathematical statistics. Probability and mathematical statistics Results 1 - 10 of 14 - IEEE Trans. on Signal Processing , 1996 "... Source separation consists in recovering a set of independent signals when only mixtures with unknown coefficients are observed. This paper introduces a class of adaptive algorithms for source separation which implements an adaptive version of equivariant estimation and is henceforth called EASI (Eq ..." Cited by 381 (10 self) Add to MetaCart Source separation consists in recovering a set of independent signals when only mixtures with unknown coefficients are observed. This paper introduces a class of adaptive algorithms for source separation which implements an adaptive version of equivariant estimation and is henceforth called EASI (Equivariant Adaptive Separation via Independence) . The EASI algorithms are based on the idea of serial updating: this specific form of matrix updates systematically yields algorithms with a simple, parallelizable structure, for both real and complex mixtures. Most importantly, the performance of an EASI algorithm does not depend on the mixing matrix. In particular, convergence rates, stability conditions and interference rejection levels depend only on the (normalized) distributions of the source signals. Close form expressions of these quantities are given via an asymptotic performance analysis. This is completed by some numerical experiments illustrating the effectiveness of the proposed ap... , 2000 "... Quantitative analysis of MR images is becoming increasingly important in clinical trials in multiple sclerosis (MS). This paper describes a fully automated atlas-based technique for segmenting MS lesions from large data sets of multi-channel MR images. The method simultaneously estimates the paramet ..." Cited by 49 (6 self) Add to MetaCart Quantitative analysis of MR images is becoming increasingly important in clinical trials in multiple sclerosis (MS). This paper describes a fully automated atlas-based technique for segmenting MS lesions from large data sets of multi-channel MR images. The method simultaneously estimates the parameters of a stochastic model for normal brain MR images, and detects MS lesions as voxels that are not well explained by the model. It corrects for MR field inhomogeneities, estimates tissuespecific intensity models from the data itself, and incorporates contextual information in the MS lesion segmentation using a Markov random field. The results of the automated method were compared with lesions delineated by human experts, showing a high total lesion load correlation. When the degree of spatial correspondence between segmentations was taken into account, considerable disagreement was revealed, both between the expert manual segmentations, and between expert and automatic , 2001 "... This paper presents a novel approach for model-based realtime tracking of highly articulated structures such as humans. This approach is based on an algorithm which efficiently propagates statistics of probability distributions through a kinematic chain to obtain maximum a posteriori estimates of th ..." Cited by 39 (2 self) Add to MetaCart This paper presents a novel approach for model-based realtime tracking of highly articulated structures such as humans. This approach is based on an algorithm which efficiently propagates statistics of probability distributions through a kinematic chain to obtain maximum a posteriori estimates of the motion of the entire structure. This algorithm yields the least squares solution in linear time (in the number of components of the model) and can also be applied to non-Gaussian statistics using a simple but powerful trick. The resulting implementation runs in real-time on standard hardware without any pre-processing of the video data and can thus operate on live video. Results from experiments performed using this system are presented and discussed. - in British Machine Vision Conference , 2000 "... This paper presents a method for segmenting multiple motions using edges. Recent work in this field has been constrained to the case of two motions, and this paper demonstrates that the approach can be extended to more than two motions. The image is first segmented into regions, and then the framewo ..." Cited by 5 (0 self) Add to MetaCart This paper presents a method for segmenting multiple motions using edges. Recent work in this field has been constrained to the case of two motions, and this paper demonstrates that the approach can be extended to more than two motions. The image is first segmented into regions, and then the framework determines the motions present and labels the edges in the image. Initialisation is particularly difficult, and a novel scheme is proposed which recursively splits motions to provide the Expectation-Maximisation algorithm with a reasonable guess, and a Minimum Description Length approach is used to determine the best number of models to use. The edge labels are then used to determine the the region labelling. A global optimisation is introduced to refine the motions and provide the most likely region labelling. 1 - IN PROCEEDINGS OF IPMU '96 (SIXTH INTERNATIONAL CONFERENCE ON INFORMATION PROCESSING AND MANAGEMENT OF UNCERTAINTY IN KNOWLEDGE-BASED SYSTEMS , 1996 "... We study the relation between possibility measures and the theory of imprecise probabilities. It is shown that a possibility measure is a coherent upper probability iff it is normal. We also prove that a possibility measure is the restriction to events of the natural extension of a special kind of u ..." Cited by 5 (4 self) Add to MetaCart We study the relation between possibility measures and the theory of imprecise probabilities. It is shown that a possibility measure is a coherent upper probability iff it is normal. We also prove that a possibility measure is the restriction to events of the natural extension of a special kind of upper probability, defined on a class of nested sets. Next, we go from upper probabilities to upper previsions. We show that if a coherent upper prevision defined on the convex cone of all positive gambles is supremum preserving, then it must take the form of a Shilkret integral associated with a possibility measure. But at the same time, we show that a supremum preserving upper prevision is not necessarily coherent! This makes us look for alternative extensions of possibility measures that are not necessarily supremum preserving, through natural extension. - DMV Nachrichten , 2005 "... imaging Mathematical Subject Classification: 93E14, 62G08, 68T45, 49M20, 90C31 This essay deals with ‘discontinuous phenomena ’ in time-series. It is an introduction to, and a brief survey of aspects concerning the concepts of segmentation into ‘smooth ’ pieces on the one hand, and the complementary ..." Cited by 4 (4 self) Add to MetaCart imaging Mathematical Subject Classification: 93E14, 62G08, 68T45, 49M20, 90C31 This essay deals with ‘discontinuous phenomena ’ in time-series. It is an introduction to, and a brief survey of aspects concerning the concepts of segmentation into ‘smooth ’ pieces on the one hand, and the complementary notion of the identification of jumps, on the other hand. We restrict ourselves to variational approaches, both in discrete, and in continuous time. They will define ‘filters’, with data as ‘inputs ’ and minimizers of functionals as ‘outputs’. The main example is a particularly simple model, which, for historical reasons, we decided to call the Potts functional. We will argue that it is an appropriate tool for the extraction of the simplest and most basic morphological features from data. This is an attempt to interpret data from a well-defined point of view. It is in contrast to restoration of a true signal- perhaps distorted and degraded by noise- which is not in the main focus of this paper. , 2011 "... We study the related problems of denoising images corrupted by impulsive noise and blind inpainting (i.e., inpainting when the deteriorated region is unknown). Our basic approach is to model the set of patches of pixels in an image as a union of low dimensional subspaces, corrupted by sparse but pe ..." Cited by 4 (1 self) Add to MetaCart We study the related problems of denoising images corrupted by impulsive noise and blind inpainting (i.e., inpainting when the deteriorated region is unknown). Our basic approach is to model the set of patches of pixels in an image as a union of low dimensional subspaces, corrupted by sparse but perhaps large magnitude noise. For this purpose, we develop a robust and iterative RANSAC like method for single subspace modeling and extend it to an iterative algorithm for modeling multiple subspaces. We prove convergence for both algorithms and carefully compare our methods with other recent ideas for such robust modeling. We demonstrate state of the art performance of our method for both imaging problems. "... While Boltzmann Machines have been successful at unsupervised learning and density modeling of images and speech data, they can be very sensitive to noise in the data. In this paper, we introduce a novel model, the Robust Boltzmann Machine (RoBM), which allows Boltzmann Machines to be robust to corr ..." Cited by 3 (0 self) Add to MetaCart While Boltzmann Machines have been successful at unsupervised learning and density modeling of images and speech data, they can be very sensitive to noise in the data. In this paper, we introduce a novel model, the Robust Boltzmann Machine (RoBM), which allows Boltzmann Machines to be robust to corruptions. In the domain of visual recognition, the RoBM is able to accurately deal with occlusions and noise by using multiplicative gating to induce a scale mixture of Gaussians over pixels. Image denoising and inpainting correspond to posterior inference in the RoBM. Our model is trained in an unsupervised fashion with unlabeled noisy data and can learn the spatial structure of the occluders. Compared to standard algorithms, the RoBM is significantly better at recognition and denoising on several face databases. 1. , 2005 "... A simple variational approach to the estimation of timeseries is studied in detail and mathematical rigor. The functional in question is a complexity penalized sum of squares. The results include existence, uniqueness, continuous dependence on parameters, and stability, in dependence of parameters a ..." Cited by 1 (1 self) Add to MetaCart A simple variational approach to the estimation of timeseries is studied in detail and mathematical rigor. The functional in question is a complexity penalized sum of squares. The results include existence, uniqueness, continuous dependence on parameters, and stability, in dependence of parameters and data, of the statistical estimate. "... Abstract. The adaptive Metropolis (AM) algorithm of Haario, Saksman and Tamminen [Bernoulli 7 (2001) 223-242] uses the estimated covariance of the target distribution in the proposal distribution. This paper introduces a new robust adaptive Metropolis algorithm estimating the shape of the target dis ..." Cited by 1 (0 self) Add to MetaCart Abstract. The adaptive Metropolis (AM) algorithm of Haario, Saksman and Tamminen [Bernoulli 7 (2001) 223-242] uses the estimated covariance of the target distribution in the proposal distribution. This paper introduces a new robust adaptive Metropolis algorithm estimating the shape of the target distribution and simultaneously coercing the acceptance rate. The adaptation rule is computationally simple adding no extra cost compared with the AM algorithm. The adaptation strategy can be seen as a multidimensional extension of the previously proposed method adapting the scale of the proposal distribution in orderto attain agiven acceptancerate. The empiricalresults showpromising behaviour of the new algorithm in an example with Student target distribution having no finite second moment, where the AM covariance estimate is unstable. Furthermore, in the examples with finite second moments, the performance of the new approach seems to be competitive with the AM algorithm combined with scale adaptation. 1.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=517415","timestamp":"2014-04-16T22:59:07Z","content_type":null,"content_length":"38820","record_id":"<urn:uuid:8b25bdbc-c82f-4c89-9ddf-6d0537e1a07c>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
The concept of probability is fairly easy to grasp. Probability is an indicator or a numerical measure of the LIKELIHOOD that an event will occur. Thus, probabilities could be used as measures of the degree of uncertainty associated with events associated with the game of blackjack for which, as a player, you need to make certain decisions that would make you win, such as: • What are the “chances” of getting a 10 or an Ace for my “double down”? • What is the “likelihood” of getting a favorable card when hitting my hard 15? • How “likely” is it that the dealer’s hole card is a 10 so that buying insurance would be a good option? • What are the “odds” in favor of raising my bet to the maximum during the next round? Probability values are always assigned on a scale from 0 to 1. If the event cannot possibly happen, we say that the probability is equal to zero. If the event is sure or certain to happen, then the probability is equal to 1. A probability near 0 indicates that the event is UNLIKELY to occur; a probability near 1 indicates that an event is VERY LIKELY or ALMOST CERTAIN to occur. Other probabilities between 0 and 1 indicates varying degrees of likelihood that an event will occur. The following figure illustrates this view of probability: Probability as a Numerical Measure of Likelihood of an Event Probability can be expressed as a fraction, a concept most everybody is familiar with: Probability (Event) = P (Event) = N/D where the numerator, N, represents the ways the event can occur SUCCESSFULLY, and the denominator, D, represents the total number of ways the event can occur both SUCCESSFULLY and UNSUCCESSFULLY. As a fraction, probability can also be expressed as a decimal. Thus, in Tossing or flipping a coin, we will observe one of two possible results: get a HEAD or TAIL. With a fair coin, getting a HEAD or getting a TAIL are equally likely. Hence, the probability of the event “get a HEAD”, is given by P(HEAD) = 1/2 = 0.50 Experiments and the Sample Space In discussing probability, we first define a (PROBABILITY) EXPERIMENT as any process that results in well-defined outcomes. When the experiment is repeated, one and only one of the possible experimental outcomes can occur. Several examples of such experiments in the gaming world are found in the following table: In analyzing a particular experiment, it is necessary to carefully identify the experimental outcomes. The set of all possible experimental outcomes is usually referred to as the SAMPLE SPACE for the experiment; any one of the experimental outcomes is called a SAMPLE POINT and is an element of the sample space. It should be mentioned that each outcome of a probability experiment occurs at random. This means that you cannot predict with certainty which outcome will occur when the experiment is conducted. In addition, each outcome of the experiment is equally likely to occur. This also means that each outcome has the same probability of occurring. It is also important to point out that the notion of a probability experiment is somewhat different from the “experiments” conducted in science laboratories. In the laboratory, the researcher assumes that each time an experiment is repeated in exactly the same way, the same outcome will occur. When we speak of probability experiments, the outcome is DETERMINED BY CHANCE, such that even though the experiment might be repeated in exactly the same way, a different outcome may occur. For this reason, probability experiments may also be referred to as RANDOM EXPERIMENTS. In discussing probabilities, it is usual to consider several outcomes of the experiment. In our earlier example of rolling a single die, we may want to consider getting an odd number - a 1, 3 or 5. We refer to this as the event of getting an odd number from the experiment of rolling a single die. Thus, an event would consist of one or more outcomes of the sample space. An event with only one outcome is called a simple event. An event with two or more events is called a compound event. In the experiment of drawing a card from a standard single-deck, we can list the outcomes of several events, as shown in the following table: Assigning Probabilities to Experimental Outcomes In assigning probabilities to experimental outcomes, two basic requirements / axioms must be satisfied: 1. The probability values assigned to each experimental outcome (or sample point) must be between 0 and 1. Denoting by E[i] the ith experimental outcome and P(E[i]) its corresponding probability, we must have 0 ≤ P(E[i]) ≤ 1 for all i 2. The sum of all of the probabilities must be equal to one. That is, P(E[1]) + P(E[2]) + . . . P(E[k]) = 1 Any method that satisfies the above requirements and results in reasonable numerical measures of the likelihood of the outcome is acceptable. In practice, the classical or objective method, the relative frequency method, or the subjective method are often used. This will be discussed in our next post.
{"url":"http://how-to-win-at-blackjack.org/","timestamp":"2014-04-21T00:39:47Z","content_type":null,"content_length":"34704","record_id":"<urn:uuid:f1637114-bc2f-4726-ad8e-d65f1a7e6956>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
Power in electric circuits In addition to voltage and current, there is another measure of free electron activity in a circuit: power. First, we need to understand just what power is before we analyze it in any circuits. Power is a measure of how much work can be performed in a given amount of time. Work is generally defined in terms of the lifting of a weight against the pull of gravity. The heavier the weight and/ or the higher it is lifted, the more work has been done. Power is a measure of how rapidly a standard amount of work is done. For American automobiles, engine power is rated in a unit called "horsepower," invented initially as a way for steam engine manufacturers to quantify the working ability of their machines in terms of the most common power source of their day: horses. One horsepower is defined in British units as 550 ft-lbs of work per second of time. The power of a car's engine won't indicate how tall of a hill it can climb or how much weight it can tow, but it will indicate how fast it can climb a specific hill or tow a specific weight. The power of a mechanical engine is a function of both the engine's speed and its torque provided at the output shaft. Speed of an engine's output shaft is measured in revolutions per minute, or RPM. Torque is the amount of twisting force produced by the engine, and it is usually measured in pound-feet, or lb-ft (not to be confused with foot-pounds or ft-lbs, which is the unit for work). Neither speed nor torque alone is a measure of an engine's power. A 100 horsepower diesel tractor engine will turn relatively slowly, but provide great amounts of torque. A 100 horsepower motorcycle engine will turn very fast, but provide relatively little torque. Both will produce 100 horsepower, but at different speeds and different torques. The equation for shaft horsepower is simple: Notice how there are only two variable terms on the right-hand side of the equation, S and T. All the other terms on that side are constant: 2, pi, and 33,000 are all constants (they do not change in value). The horsepower varies only with changes in speed and torque, nothing else. We can re-write the equation to show this relationship: Because the unit of the "horsepower" doesn't coincide exactly with speed in revolutions per minute multiplied by torque in pound-feet, we can't say that horsepower equals ST. However, they are proportional to one another. As the mathematical product of ST changes, the value for horsepower will change by the same proportion. In electric circuits, power is a function of both voltage and current. Not surprisingly, this relationship bears striking resemblance to the "proportional" horsepower formula above: In this case, however, power (P) is exactly equal to current (I) multiplied by voltage (E), rather than merely being proportional to IE. When using this formula, the unit of measurement for power is the watt, abbreviated with the letter "W." It must be understood that neither voltage nor current by themselves constitute power. Rather, power is the combination of both voltage and current in a circuit. Remember that voltage is the specific work (or potential energy) per unit charge, while current is the rate at which electric charges move through a conductor. Voltage (specific work) is analogous to the work done in lifting a weight against the pull of gravity. Current (rate) is analogous to the speed at which that weight is lifted. Together as a product (multiplication), voltage (work) and current (rate) constitute power. Just as in the case of the diesel tractor engine and the motorcycle engine, a circuit with high voltage and low current may be dissipating the same amount of power as a circuit with low voltage and high current. Neither the amount of voltage alone nor the amount of current alone indicates the amount of power in an electric circuit. In an open circuit, where voltage is present between the terminals of the source and there is zero current, there is zero power dissipated, no matter how great that voltage may be. Since P=IE and I=0 and anything multiplied by zero is zero, the power dissipated in any open circuit must be zero. Likewise, if we were to have a short circuit constructed of a loop of superconducting wire (absolutely zero resistance), we could have a condition of current in the loop with zero voltage, and likewise no power would be dissipated. Since P=IE and E=0 and anything multiplied by zero is zero, the power dissipated in a superconducting loop must be zero. (We'll be exploring the topic of superconductivity in a later chapter). Whether we measure power in the unit of "horsepower" or the unit of "watt," we're still talking about the same thing: how much work can be done in a given amount of time. The two units are not numerically equal, but they express the same kind of thing. In fact, European automobile manufacturers typically advertise their engine power in terms of kilowatts (kW), or thousands of watts, instead of horsepower! These two units of power are related to each other by a simple conversion formula: So, our 100 horsepower diesel and motorcycle engines could also be rated as "74570 watt" engines, or more properly, as "74.57 kilowatt" engines. In European engineering specifications, this rating would be the norm rather than the exception. • REVIEW: • Power is the measure of how much work can be done in a given amount of time. • Mechanical power is commonly measured (in America) in "horsepower." • Electrical power is almost always measured in "watts," and it can be calculated by the formula P = IE. • Electrical power is a product of both voltage and current, not either one separately. • Horsepower and watts are merely two different units for describing the same kind of physical measurement, with 1 horsepower equaling 745.7 watts. Related Links
{"url":"http://www.allaboutcircuits.com/vol_1/chpt_2/3.html","timestamp":"2014-04-17T01:10:29Z","content_type":null,"content_length":"17413","record_id":"<urn:uuid:71f372ad-8011-4e56-afdd-11d5bc795544>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
Autoduality and Fourier-Mukai for compactified Jacobians Margarida Melo, October 22nd, 2013 Among Abelian varieties, Jacobians of smooth curves C have the important property of being autodual, i.e., they are canonically isomorphic to their dual abelian variety. This is equivalent to the existence of a Poincare line bundle P on J(C)xJ(C) which is universal as a family of algebraically trivial line bundles on J(C). Another instance of this fact was discovered by S. Mukai who proved that the Fourier-Mukai transform with kernel P is an auto-equivalence of the bounded derived category of J(C). I will talk on joint work with Filippo Viviani and Antonio Rapagnetta, where we try to generalize both the autoduality result and Mukai's equivalence result for singular reducible curves X with locally planar singularities. Our results generalize previous results of Arinkin, Esteves, Gagne and Kleiman and can be seen as an instance of the geometric Langlands duality for the Hitchin fibration.
{"url":"http://cims.nyu.edu/~zakharov/f13melo.html","timestamp":"2014-04-21T02:32:34Z","content_type":null,"content_length":"1353","record_id":"<urn:uuid:916772e8-45cf-43c6-afe5-4a881698eab4>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Physics Twistors- What are they? - A set of notes introducing spinors and twistors. Dimensional Analysis - A simple review of the powerful technique of dimensional analysis. Mathematical Methods I - This site contains the complete lecture notes and homework sets for PHYCS498MMA, a course of mathematical methods for physics given to entering graduate students, and senior undergraduates, at the University of Illinois at Urbana-Champaign. Hyperreal World - Nonstandard analysis and its applications to quantum physics, by H.Yamashita. Mixed English/Japanese. Discrete Self-trapping Equation - A bibliography in BibTeX format for those interested in discrete nonlinear Schrödinger type equations. This Week's Finds in Mathematical Physics - This is a column written about modern topics in mathematical physics. Holomorphic Methods in Mathematical Physics - This set of lecture notes by Brian C. Hall gives an introduction to holomorphic function spaces as used in mathematical physics. The emphasis is on the Segal-Bargmann space and the canonical commutation relations. Lectures on Orientifolds and Duality - Notes by Atish Dabholkar on orientifolds emphasizing applications to duality. Homological Methods in Mathematical Physics - These lecture notes by Joseph Krasil'shchik and Alexander Verbovetsky are a systematic and self-contained exposition of the cohomological theories naturally related to partial differential equations. Recent Developments in Skyrme Models - An introduction by T. Gisiger and M.B. Paranjape to recent, more mathematical developments in the Skyrme model. The aim is to render these advances accessible to mainstream nuclear and particle physicists. An Introduction to Noncommutative Geometry - A set of lecture notes by Joseph C. Varilly on noncommutative geometry and its applications in physics. Euclidean Geometric Transforms for Physics - A new method of correlating physics formulas to derive one formula from a related formula using Euclidean geometry to represent the inter-relationship of physics formulas. Open Problems in Mathematics and Physics - Links to open problems in mathematics, physics and other subjects. Non Commutative Geometry - Preprints of Alejandro Rivero about Connes's NCG and the Standard Model. Also some historical articles on related topics. Symplectic Geometries in Quantum Physics and Optics - A comparison of symplectic geometry with Euclidean or unitary geometries in quantum physics and optics Topology and Physics - An essay by C. Nash on the historical connection between topology and physics. Period and energy in one degree of freedom systems - An article by Jorge Rezende, University of Lisbon (PDF). Radial Symmetric Fourier Transforms - Fourier transforms of radially-symmetric functions can be performed efficiently using the Hankel transform of order zero. Illustrations of the method are presented, and of the Gibbs' phenomenon. Five Lectures on Soliton Equations - A self-contained review by Edward Frenkel of a new approach to soliton equations of KdV type. Complex Geometry of Nature and General Relativity - A paper by Giampiero Esposito attempting to give a self-contained introduction to holomorphic ideas in general relativity. The main topics are complex manifolds, spinor and twistor methods, heaven spaces. Local Quantum Physics Crossroads - An international forum for information exchange among scientists working on mathematical, conceptual, and constructive problems in local relativistic quantum physics (LQP). Solitons - An overview of the classical and quantum theory related to solitons The Dirac Delta Function - A brief introduction to the properties and uses of the Dirac delta function. Solitons - Resources at Heriot-Watt University. Meetings, local and other links. Doing Physics with Quaternions - A research effort to see how much of standard physics can be done using only quaternions, a 4-dimensional division algebra. Geometry and Duality - Lecture notes from the ITP miniprogram on Geometry and Duality Journal on Applied Clifford Algebra - Journal devoted to the development of Geometric Analysis in particular through the use of Clifford Algebras, Quaternions, Hypercomplex Analysis and Multivector Techniques. Main emphasis en the applications to Physics. Differential Equations and Oscillations - Many problems in physics are described by differential equations. As a complete discussion of differential equations is beyond the scope of this chapter we will deal only with linear first and second order ordinary differential equations. Klaus Brauer's Soliton Page - Presents a history of J.S.Russell's discovery of solitary waves, and animations of one-, two- and three-soliton solutions to the Korteweg-de Vries equation. Includes an article in PDF format on finding exact solutions to the KdV equation using the method of Backlund transform with the help of Mathematica. Inexplicable Secrets of Creation - Relationships between number theory and physics. Intrinsic Localized Modes - Dynamics of defect-free periodic lattices in terms of plane wave phonons. Web text by Albert J. Sievers, Cornell. On the Origins of Twister Theory - A new approach pioneered by Roger Penrose, starting with conformally-invariant concepts, to the synthesis of quantum theory and relativity.
{"url":"http://www.bazsites.com/Science/Physics/MathematicalPhysics/","timestamp":"2014-04-18T13:08:38Z","content_type":null,"content_length":"12103","record_id":"<urn:uuid:0378f9da-4136-4cb7-9942-fc21a80bc302>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
inverse normal table Best Results From Wikipedia Yahoo Answers Youtube From Wikipedia Contingency table In statistics, a contingency table (also referred to as cross tabulationor cross tab) is often used to record and analyze the relation between two or morecategorical variables. It displays the (multivariate) frequency distribution of the variables in a matrix format. The term contingency table was first used by Karl Pearson in "On the Theory of Contingency and Its Relation to Association and Normal Correlation", part of the Drapers' Company Research Memoirs Biometric Series I published in 1904. Suppose that we have two variables, sex (male or female) and handedness (right- or left-handed). Further suppose that 100 individuals are randomly sampled from a very large population as part of a study of sex differences in handedness. A contingency table can be created to display the numbers of individuals who are male and right-handed, male and left-handed, female and right-handed, and female and left-handed. Such a contingency table is shown below. The numbers of the males, females, and right- and left-handed individuals are called marginal totals. The grand total, i.e., the total number of individuals represented in the contingency table, is the number in the bottom right corner. The table allows us to see at a glance that the proportion of men who are right-handed is about the same as the proportion of women who are right-handed although the proportions are not identical. The significance of the difference between the two proportions can be assessed with a variety of statistical tests including Pearson's chi-square test, the G-test, Fisher's exact test, and Barnard's test, provided the entries in the table represent individuals randomly sampled from the population about which we want to draw a conclusion. If the proportions of individuals in the different columns vary significantly between rows (or vice versa), we say that there is a contingency between the two variables. In other words, the two variables are not independent. If there is no contingency, we say that the two variables are independent. The example above is the simplest kind of contingency table, a table in which each variable has only two levels; this is called a 2 x 2 contingency table. In principle, any number of rows and columns may be used. There may also be more than two variables, but higher order contingency tables are difficult to represent on paper. The relation between ordinal variables, or between ordinal and categorical variables, may also be represented in contingency tables, although such a practice is rare. Measures of association The degree of association between the two variables can be assessed by a number of coefficients: the simplest is the phi coefficient defined by where χ^2 is derived from Pearson's chi-square test, and N is the grand total of observations. φ varies from 0 (corresponding to no association between the variables) to 1 or -1 (complete association or complete inverse association). This coefficient can only be calculated for frequency data represented in 2 x 2 tables. φ can reach a minimum value -1.00 and a maximum value of 1.00 only when every marginal proportion is equal to .50 (and two diagonal cells are empty). Otherwise, the phi coefficient cannot reach those minimal and maximal values. Alternatives include the tetrachoric correlation coefficient (also only applicable to 2&nbsp;&times;&nbsp;2 tables), the contingency coefficientC, and Cramér'sV. C suffers from the disadvantage that it does not reach a maximum of 1 or the minimum of -1; the highest it can reach in a 2 x 2 table is .707; the maximum it can reach in a 4 x 4 table is 0.870. It can reach values closer to 1 in contingency tables with more categories. It should, therefore, not be used to compare associations among tables with different numbers of categories. Moreover, it does not apply to asymmetrical tables (those where the numbers of row and columns are not equal). The formulae for the C and V coefficients are: C=\sqrt{\frac{\chi^2}{N+\chi^2}} and k being the number of rows or the number of columns, whichever is less. C can be adjusted so it reaches a maximum of 1 when there is complete association in a table of any number of rows and columns by dividing C by \sqrt{\frac{k-1}{k}} (recall that C only applies to tables in which the number of rows is equal to the number of columns and therefore equal to k). The tetrachoric correlation coefficient assumes that the variable underlying each dichotomous measure is normally distributed. The tetrachoric correlation coefficient provides "a convenient measure of [the Pearson product-moment] correlation when graduated measurements have been reduced to two categories." The tetrachoric correlation should not be confused with the Pearson product-moment correlation coefficient computed by assigning, say, values 0 and 1 to represent the two levels of each variable (which is mathematically equivalent to the phi coefficient). An extension of the tetrachoric correlation to tables involving variables with more than two levels is the polychoric correlation coefficient. The Lambda coefficientis a measure the strength of association of the cross tabulations when the variables are measured at thenominal level. Values range from 0 (no association) to 1 (the theoretical maximum possible association). Asymmetric lambdameasures the percentage improvement in predicting the dependent variable. Database normalization - Wikipedia, the free encyclopedia Each table has a "highest normal form" (HNF): by definition, a table always .... required which extends the relational algebra of the higher normal forms. ... Inverse trigonometric functions In mathematics, the inverse trigonometric functions or cyclometric functions are the inverse functions of the trigonometric functions, though they do not meet the official definition for inverse functions as their ranges are subsets of the domains of the original functions. Since none of the six trigonometric functions are one-to-one (by failing the horizontal line test), they must be restricted in order to have inverse functions. For example, just as the square root function y = \sqrt{x} is defined such that y^2 = x, the function y = arcsin(x) is defined so that sin(y) = x. There are multiple numbers y such that sin(y) = x; for example, sin(0) = 0, but also sin(π) = 0, sin(2π) = 0, etc. It follows that the arcsine function is multivalued: arcsin(0) = 0, but also arcsin(0) = π, arcsin(0) = 2π, etc. When only one value is desired, the function may be restricted to its principal branch. With this restriction, for each x in the domain the expression arcsin(x) will evaluate only to a single value, called its principal value. These properties apply to all the inverse trigonometric functions. The principal inverses are listed in the following table. If x is allowed to be a complex number, then the range of y applies only to its real part. The notations sin^&minus;1, cos^&minus;1, etc. are often used for arcsin, arccos, etc., but this convention logically conflicts with the common semantics for expressions like sin^2(x), which refer to numeric power rather than function composition, and therefore may result in confusion between multiplicative inverse and compositional inverse. In computer programming languages the functions arcsin, arccos, arctan, are usually called asin, acos, atan. Many programming languages also provide the two-argument atan2 function, which computes the arctangent of y / x given y and x, but with a range of (&minus;π,&nbsp;π]. Relationships among the inverse trigonometric functions Complementary angles: \arccos x = \frac{\pi}{2} - \arcsin x \arccot x = \frac{\pi}{2} - \arctan x \arccsc x = \frac{\pi}{2} - \arcsec x Negative arguments: \arcsin (-x) = - \arcsin x \! \arccos (-x) = \pi - \arccos x \! \arctan (-x) = - \arctan x \! \arccot (-x) = \pi - \arccot x \! \arcsec (-x) = \pi - \arcsec x \! \arccsc (-x) = - \arccsc x \! Reciprocal arguments: \arccos x^{-1} \,= \arcsec x \, \arcsin x^{-1} \,= \arccsc x \, \arctan x^{-1} = \tfrac{1}{2}\pi - \arctan x =\arccot x,\text{ if }x > 0 \, \arctan x^{-1} = -\tfrac{1}{2}\pi - \arctan x = -\pi + \arccot x,\text{ if }x < 0 \, \arccot x^{-1} = \tfrac{1}{2}\pi - \arccot x =\arctan x,\text{ if }x > 0 \, \arccot x^{-1} = \tfrac{3}{2}\pi - \arccot x = \pi + \arctan x\text{ if }x < 0 \, \arcsec x^{-1} = \arccos x \, \arccsc x^{-1} = \arcsin x \, If you only have a fragment of a sine table: \arccos x = \arcsin \sqrt{1-x^2},\text{ if }0 \leq x \leq 1 \arctan x = \arcsin \frac{x}{\sqrt{x^2+1}} Whenever the square root of a complex number is used here, we choose the root with the positive real part (or positive imaginary part if the square was negative real). From the half-angle formula \tan \frac{\theta}{2} = \frac{\sin \theta}{1+\cos \theta} , we get: \arcsin x = 2 \arctan \frac{x}{1+\sqrt{1-x^2}} \arccos x = 2 \arctan \frac{\sqrt{1-x^2}}{1+x},\text{ if }-1 < x \leq +1 \arctan x = 2 \arctan \frac{x}{1+\sqrt{1+x^2}} Relationships between trigonometric functions and inverse trigonometric functions \sin (\arccos x) = \cos(\arcsin x) = \sqrt{1-x^2} \sin (\arctan x) = \frac{x}{\sqrt{1+x^2}} \cos (\arctan x) = \frac{1}{\sqrt{1+x^2}} \tan (\arcsin x) = \frac{x}{\sqrt{1-x^2}} \tan (\arccos x) = \frac{\sqrt{1-x^2}}{x} General solutions Each of the trigonometric functions is periodic in the real part of its argument, running through all its values twice in each interval of 2π. Sine and cosecant begin their period at 2πk− π/2 (where k is an integer), finish it at 2πk + π/2, and then reverse themselves over 2πk + π/2 to 2πk + 3π/2. Cosine and secant begin their period at 2πk, finish it at 2πk + π, and then reverse themselves over 2πk + π to 2πk + 2π. Tangent begins its period at 2πk− π/2, finishes it at 2πk + π/2, and then repeats it (forward) over 2πk + π/2 to 2πk + 3π/2. Cotangent begins its period at 2πk, finishes it at 2πk + π, and then repeats it (forward) over 2πk + π to 2πk + 2π. This periodicity is reflected in the general inverses where k is some integer: \sin(y) = x \ \Leftrightarrow\ y = \arcsin(x) + 2k\pi \text{ or } y = \pi - \arcsin(x) + 2k\pi \cos(y) = x \ \Leftrightarrow\ y = \arccos(x) + 2k\pi \text{ or } y = 2\pi - \arccos(x) + 2k\pi \tan(y) = x \ \Leftrightarrow\ y = \arctan(x) + k\pi \cot(y) = x \ \Leftrightarrow\ y = \arccot(x) + k\pi \sec(y) = x \ \Leftrightarrow\ y = \arcsec(x) + 2k\pi \text{ or } y = 2\pi - \arcsec (x) + 2k\pi \csc(y) = x \ \Leftrightarrow\ y = \arccsc(x) + 2k\pi \text{ or } y = \pi - \arccsc(x) + 2k\pi Derivatives of inverse trigonometric functions Simple derivatives for real and complex values of x are as follows: \frac{d}{dx} \arcsin x & {}= \frac{1}{\sqrt{1-x^2}}\\ \frac{d}{dx} \arccos x & {}= \frac{-1}{\sqrt{1-x^2}}\\ \frac{d}{dx} \arctan x & {}= \frac{1}{1+x^2}\\ \frac{d}{dx} \arccot x & {}= \frac{-1}{1+x^ 2}\\ \frac{d}{dx} \arcsec x & {}= \frac{1}{x\,\sqrt{x^2-1}}\\ \frac{d}{dx} \arccsc x & {}= \frac{-1}{x\,\sqrt{x^2-1}} Only for real values of x: From Yahoo Answers Question:I'm 14 years old and I'm in 8th grade. I am currently taking 9th grade level math, but the thing is I don't understand what these things are: square roots, GCF, LCF, perfect square binomial, perfect square trinomial, ect. And I don't know how to do these: FOIL, subtracting and adding fractions, solving proportions, factoring, pythagorean theorem, and I can't memorize most of the basic multiplication tables. Ever since 6th grade I've failed every single test and quiz in math. But the thing is I'm doing excellently in all my other classes and electives while my classmates struggle in them. I've gotten nearly perfect scores on my reading, writing and science FCATs, but I just barely passed the math FCAT. No matter how hard I study, no matter how many tutoring sessions I go to and no matter who I ask for help, I can't keep my grades up. My test and quiz grades have been 50% and below. My most recent ones were 36, 8, and 0%. Is there anything wrong with me? Answers:It's normal. I'm the same way, I just don't get math. Question:I am looking for the equation to generate the normal distribution table in order to calculate probability based on the critical value (z) in the right tail of the curve. I know that the equation for P(x) is 1/(SIGMA * SQRT(2 * PI)) * EXP(-(x - MU)^2/(2 * SIGMA^2)) , but I am looking for the representation using critical value (z). I am writing an application that requires the equation instead of the distribution table. I know that there has to be an algorithm that is commonly used. I just need to find the equation. Answers:The standard normal distribution, which is what I think you suggest by z, has a mean of 0 and a standard deviation of 1. Therefore just put these figures (mu = 0, sigma = 1) into the general normal distribution formula that you have above and you get the formula you want. Question:the mean life of a certain kind of light bulb is 900 hours with a standard deviation of 30 hours. Assuming the lives of the light bulbs are normally distributed, find the percent of the light bulbs that will last for the given interval. use the standard normal table if necessary. 1. more than 984 hours like i know the normal curve and all but 984 is not on the normal curve. how do you find the answer using the standard normal table or if you don't need it, please explain im so confused. Answers:Hi, On a TI-83, press [2nd][VARS] to get to DISTR. Choose #2 normalcdf(. It needs normalcdf(lower bound, upper bound,mean,standard deviation). If you enter normalcdf(984,99999,900,30), it equals .002555 which is .2555% of the bulbs will last more than 984 hours. (99999 is just a very large number to approximate infinity.) I hope that helps!! :-) Question:Let x be a continuous random variable that follows a normal distribution with a mean of 550 and a standard deviation of 75. a) Find the value of x so that the area under the normal curve to the left of x is .0250 b) Find the value of x so that the area under the normal curve to the right of x is .9345 c) Find the value of x so that the area under the normal curve to the right of x is approximately .0275 Thanks! Answers:a) ANSWER: x = -1.96 Why??? NORMAL DISTRIBUTION, STANDARDIZED VARIABLE z, PROBABILITY "LOOK-UP" P = 0.025 (2.5%) probability to the left of 0.250 inches Table "LOOK-UP" Inverse Cumulative Distribution Function Normal with mean = 0 and standard deviation = 1 P( X <= x ) x 0.025 -1.96 b) ANSWER: x = 1.51 Inverse Cumulative Distribution Function Normal with mean = 0 and standard deviation = 1 P( X <= x ) x 0.9345 1.51 c) ANSWER: x = -1.92 Inverse Cumulative Distribution Function Normal with mean = 0 and standard deviation = 1 P( X <= x ) x 0.0275 -1.92 From Youtube Do Inversion Tables Work :www.doinversiontableswork.com When you're in the business of physical therapy, it is important that you have the right types of tables. It does not suffice to have just one type of table. That is because your patients come in different types. They come in different shapes, sizes, and conditions. If you have just one type of table, you're not going to be able to serve everyone who walks through your door. That can do a lot of harm to your bottom line and there are going to be a lot of people wanting you to help them, but you can't. Finding the right table Two of the most common types of physical therapy tables are the Classic Clinician style tables that you see in spas and various physical therapy offices. You will also see the heavy duty treatment table for bariatric patients. These are just two of the physical therapy supplies that are going to make your business as successful as possible. Most importantly, you want to make sure that your physical therapy equipment can treat anyone in a variety of different situations. Take the bariatric patient, for example. Normal Distribution Probability Calculation (Using Table) :Learn how to calculate probability for a normal distribution using a table. Learn more about online education at www.studyatapu.com
{"url":"http://www.edurite.com/kbase/inverse-normal-table","timestamp":"2014-04-16T07:20:31Z","content_type":null,"content_length":"87362","record_id":"<urn:uuid:892a543d-9efa-4c06-8684-7a5c403533bb>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00486-ip-10-147-4-33.ec2.internal.warc.gz"}
The attempt to load metrics for this article has failed. The attempt to plot a graph for these metrics has failed. Axisymmetric bubble collapse in a quiescent liquid pool. I. Theory and numerical simulations (a) Numerical shapes of an axisymmetric bubble formed from a vertical underwater nozzle for . Note that, at a length scale of the order of , the shape of the bubble apparently forms two cones with a semiangle of close to the pinch-off point. (b) Detail of the bubble profiles—displaced in so that the minimum is located at —close to the pinch-off region. Note that the profiles become more symmetric with respect to the plane as pinch-off is approached. The values of in curves 1, 2, and 3 are, respectively, , , and . (c) Numerical bubble shapes represented in (b) as a function of the dimensionless stretched coordinates and . Note that the bubble shapes become more and more locally slender as pinch-off is approached. (d) In solid lines, full potential flow numerical simulations take into account the nozzle-bubble interaction and were previously represented in Figs. 1(b) and 1(c). In dashed lines, simplified potential flow numerical simulations in which an isolated bubble breaks symmetrically around the plane for a value of the Bond number, . Observe that there are no appreciable differences between the two types of simulations in the region near the minimum radius. Velocity at the minimum radius for a bubble in water, and two different gases (air, and , ). The details of the inviscid potential flow numerical simulations, represented in solid lines, will be provided in Sec. III. Note that, in agreement with Ref. 17 the radial velocity for is . Equation (6) is represented for different initial values of with dashed lines and differ from the numerics no matter how small the initial value of is. Values of the local Weber number and of the local gas and liquid Reynolds numbers ( and , respectively) for the inviscid potential flow numerical simulations depicted in Fig. 2. Note that the evolution of the local Weber number is very similar to the experimental one reported in Fig. 46 of Ref. 17. (a) Dimensionless axial gas velocity profiles for different values of the local gas Reynolds number, ; (b) continuous line: dimensionless gas pressure gradient defined in Eq. (18) and given by Eq. (22); dotted line: , which proves to be a good approximation to the real solution. Sketch of the geometry used for the symmetric type of simulations. In solid lines, the velocity at the minimum radius in the collapse of bubbles of two different gases in water [(a) air, and (b) , ]. The numerical computations have been performed using the inviscid symmetric code and the Bernoulli equation (35). Dashed lines: the theoretical result obtained integrating the two-dimensional Rayleigh equations (25) and (26) in the limits Re, with initial conditions for , , and the values of the numerical simulations when (a) , (b) . In (a) and (b) . The results of integrating Eqs. (25) and (26) in the limits Re, , and have also been included in each figure. While no appreciable differences between and are observed in the case of air, gas density plays a key role in the good agreement between theory and numerics in the case of . Velocity at the minimum radius of an air bubble that collapses within liquids of different viscosities. The computations are performed using the symmetric code and the Bernoulli equation (40). The experimental time evolution of the bubble minimum radius is represented using dashed lines for two different values of the liquid viscosity (adapted from the experiments in Ref. 17, where it is reported that the exponents of the power law for the cases of the liquids of 4.2 and 21 cp are, respectively, 0.6 and 0.67). Solid lines: the resulting time evolution calculated through the integration of the two-dimensional Rayleigh-like equations (25) and (26). (a) Comparison between the numerical computations depicted in Figs. 6 and 7 (solid lines) and the theoretical results obtained integrating the two-dimensional Rayleigh equations (25) and (26) using as initial conditions for , , and the values of the numerical simulations when oscillates between 120 and . Similarly, the comparison between numerics and theory in the case of is given in (b). In this case, the initial conditions for , , and are the values of the numerical simulation when . The instant at which , which is the condition for satellite formation, is indicated for each viscosity using a vertical arrow: (a) , (b) . (a) Satellite formation process in the case of air bubbles collapsing in a liquid with a viscosity of . In (b), the gas is and the material properties of the liquid are those of water. (a) Sketch of the gas flow during the initial stages of bubble collapse, where it is expected that flow separation does not occur. (b) During the latest instants of bubble pinch-off, gas flow may separate and the stagnant gas pressure is not recovered from the exit of the tube to the main bubble. Comparison between the results in Fig. 6 (, solid line) and those obtained integrating systems (B3) and (B4) for the same values of the initial conditions (dashed lines) and . Physical properties of the different gases and liquid (water) considered. Scitation: Axisymmetric bubble collapse in a quiescent liquid pool. I. Theory and numerical simulations
{"url":"http://scitation.aip.org/content/aip/journal/pof2/20/11/10.1063/1.3009297","timestamp":"2014-04-19T10:27:41Z","content_type":null,"content_length":"114088","record_id":"<urn:uuid:2885ede7-f4c4-440f-8988-a63c2f76fc9b>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
Surjectivity of a homomorphism between Picard groups up vote 3 down vote favorite Let $X$ be a one-dimensional Noetherian scheme over an algebraically closed field $k$. Suppose $X$ is reduced and let $X=\bigcup X_i$ be the composition of $X$ into irreducible components. Then, is the following homomorphism surjective? $\mathrm{Pic} X\to \bigoplus \mathrm{Pic} X_i$. 1 Yes. Use the description of $Pic(X)$ as the cohomology group $H^1(X,O_X^*)$. You may want to see the thread mathoverflow.net/questions/57127/…. – J.C. Ottem Mar 4 '11 at 22:22 2 This is problem III.5.8 in Hartshorne by the way. – J.C. Ottem Mar 4 '11 at 22:25 add comment 1 Answer active oldest votes Yes! This can be shown using the isomorphism $H^1(X,\mathcal{O}_X) \cong \mathrm{Pic} X$. First, look at the short exact sequence: $ 0 \to \mathcal{O}_X^* \to \bigoplus \mathcal{O}_{X_i}^* \to \mathcal{C} \to 0$ up vote 4 down From the long exact sequence of cohomological groups associated to the short exact sequence, it suffices to show that $H^1(X,\mathcal{C})\cong 0$. However, this is clear since from the vote accepted short exact sequence above, we can see that the support of $\mathcal{C}$ is a finite number of points (points that belong to more than just 1 irreducible component) and hence, of dimension 0. Now, use Grothendieck's vanishing theorem and we are done. Yes, although the Grothendieck vanishing theorem is perhaps a bit of a sledgehammer here.. – J.C. Ottem Mar 4 '11 at 22:48 True. But using this, we get a strong result: it seems to work for Noetherian locally ringed space of dimension 1 in general. – Brian Mar 4 '11 at 22:52 add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/57405/surjectivity-of-a-homomorphism-between-picard-groups?sort=newest","timestamp":"2014-04-18T06:04:11Z","content_type":null,"content_length":"54642","record_id":"<urn:uuid:802bcf0e-5ccc-4503-a155-4709f1416f04>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
Sparse coding for the jvm? Are there any efficient implementations of sparse coding (that is, minimizing (1/2)||Ax - b||^2 + lambda ||x||_1) for the JVM? asked Jan 26 '12 at 09:53 Alexandre Passos ♦ After failing to find a public one I wrote my own and put it in a gist. answered Jan 26 '12 at 10:57 Alexandre Passos ♦
{"url":"http://metaoptimize.com/qa/questions/8846/sparse-coding-for-the-jvm?sort=oldest","timestamp":"2014-04-17T12:41:15Z","content_type":null,"content_length":"16452","record_id":"<urn:uuid:8696b2f6-1fb2-45e3-a160-335b95c778b1>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00069-ip-10-147-4-33.ec2.internal.warc.gz"}
moduli space and modularity up vote 3 down vote favorite I recently realized some kind of analogy when considering modularity results (such as the modularity of elliptic curves over Q). The analogy comes from algebraic groups. Take one point (say, the origin) of an algebraic group, then something over the one point (for example, a tangent vector) can be extended to the whole algebraic group by group translations (so we get a translation invariant vector field). Now, modular curves are moduli spaces for elliptic curves. One elliptic curve is like one point on the moduli space, so probably things over that one point (for example, the first etale cohomology, which is the same as the Tate module Galois representation) could be extended to the whole modular curve, and indeed, that is the modular form, which also lives in some cohomology of the modular curve. The modularity results have always been very mysterious and surprising for me, since it links two very "natural" and "intuitive" but "seems-far-away" objects together (the Tate module, and a modular form). But the above point of view might be a good reason for such things to be true. This leads me to think more about the general theory of moduli spaces, which I don't know very much. It seems that there is a quite well developped theory of moduli spaces of curves with fixed genre, and also there is some special kind of moduli spaces such as Hilbert moduli space. There is also a very important moduli space in number theory,namely, moduli space of p-divisible groups, which recently has an important work by Mark Kisin. But it seems to be that the techniques and ideas used in Kisin's work (of course as well as in Breuil and other people's work) look quite different from that of traditional geometrical moduli spaces. So my question is, can someone give some motivation about studying p-divisible groups and their moduli spaces? Also, is the study of such objects analogus to that of the traditional moduli spaces? Breuil ''Groupes p-divisibles, groupes finis et modules filtrés'', Annals of Math. 151, 2000, 489-549. http://www.ihes.fr/~breuil/PUBLICATIONS/p-divisibles.pdf Kisin, Moduli of finite flat group schemes and modularity -- Annals of Math. 170(3) (2009), 1085-1180. http://www.math.harvard.edu/~kisin/dvifiles/bt.dvi nt.number-theory moduli-spaces modular-forms soft-question I like the questions at the end (and my ability to answer is limited to "Serre-Tate theorem" and "yes"), but I'm having trouble understanding the mathematics in the first paragraph. Aren't there essential things like level structures, Jacobians, and Hecke operators? – S. Carnahan♦ Jan 20 '10 at 6:06 yes, I was only being (too) brief. – natura Jan 20 '10 at 7:25 add comment 1 Answer active oldest votes Kisin's work is fairly technical, and is devoted to studying deformations of Galois representations which arise by taking $\overline{K}$-valued points of a finite flat group over $\mathcal O_K$ (where $K$ is a finite extension of $\mathbb Q_p$). The subtlety of this concept is that when $K$ is ramified over $\mathbb Q_p$ (more precisely, when $e \geq p-1$, where $e$ is the ramification degree of $K$ over $\mathbb Q_p$), there can be more than one finite flat group scheme modelling a given Galois represenation. E.g. if $p = 2$ and $K = {\mathbb Q}\_2$ (so that $e = 1 = 2 - 1$), the trivial character with values in the finite field $\mathbb F_2$ has two finite flat models over $\mathbb Z_2$; the constant etale group scheme $\mathbb Z/2 \mathbb Z$, and the group scheme $\mu_2$ of 2nd roots of unity. In general, as $e$ increases, there are more and more possible models. Kisin's work shows that they are in fact classified by a certain moduli space (the "moduli of finite flat group schemes" of the title). He is able to get some control over these moduli spaces, and hence prove new modularity lifting theorems; in particular, with this (and several other fantastic ideas) he is able to extend the Taylor--Wiles modularity lifting theorem to the context of arbitrary ramification at $p$, provided one restricts to a finite flat deformation problem. This result plays a key role in the proof of Serre's conjecture by Khare, Wintenberger, and Kisin. The detailed geometry of the moduli spaces is controlled by some Grassmanian--type structures that are very similar to ones arising in the study of local models of Shimura varieties. However, there is not an immediately direct connection between the two situations. up vote 5 down EDIT: It might be worth remarking that, in the study of modularity of elliptic curves, the fact that the modular forms classifying elliptic curves over $\mathbb Q$ are themselves functions vote on the moduli space of elliptic curves is something of a coincidence. One can already see this from the fact that lots of the other objects over $\mathbb Q$ that are not elliptic curves are also classified by modular forms, e.g. any abelian variety of When one studies more general instances of the Langlands correspondence, it becomes increasingly clear that these two roles of elliptic curves (providing the moduli space, and then being classified by modular forms which are functions on the moduli space) are independent of one another. Of course, historically, it helped a lot that the same theory that was developed to study the Diophantine properties of elliptic curves was also available to study the Diophantine properties of the moduli spaces (which again turn out to be curves, though typically not elliptic curves) and their Jacobians (which are abelian varieties, and so can be studied by suitable generalizations of many of the tools developed in the study of elliptic curves). But this is a historical relationship between the two roles that elliptic curves play, not a mathematical Thank you! but what do you mean by the two roles that elliptic curves play? As for higher dimensional abelian varieties, the corresponding automorphic representation living in the cohomology of the Shimura variety (the moduli space of abelian varieties) is not a function on the moduli. Is that what you mean by "coincidence" (since it does not hold for higher-dimension case)? – natura Jan 20 '10 at 7:34 I am pretty sure he means the following: consider the modular curve X_0(N). This plays not one but two roles in the theory of elliptic curves. One is that its non-cuspidal points 1 parametrise (in some sense that can be made rigorous) elliptic curves with a cyclic subgroup of order N. That statement is geometric---for example it works over the complexes, over finite fields and so on. The other role X_0(N) plays is that if E is an elliptic curve over the rational numbers then there is a non-constant map X_0(N)-->E for some N (the conductor of the curve). That is an arithmetic statement. – Kevin Buzzard Jan 20 '10 at 11:04 1 ...and it's just a coincidence that both statements about elliptic curves. They have a very different flavour though: for example if E is an elliptic curve over the complexes, then it still gives a point on some X_0(N) if you choose a cyclic subgroup of order N, but you don't expect it to be the image of some X_0(N) under some holomorphic map (for example if the j-invariant is transcendental). – Kevin Buzzard Jan 20 '10 at 11:05 Thanks Kevin; that's what I meant. – Emerton Jan 20 '10 at 13:58 add comment Not the answer you're looking for? Browse other questions tagged nt.number-theory moduli-spaces modular-forms soft-question or ask your own question.
{"url":"http://mathoverflow.net/questions/12380/moduli-space-and-modularity?sort=votes","timestamp":"2014-04-21T13:07:10Z","content_type":null,"content_length":"62666","record_id":"<urn:uuid:27cb013f-2bdb-41a2-9d89-aefc420a0dd6>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
Convergence Craziness May 19th 2009, 01:15 PM #1 Senior Member Apr 2009 Atlanta, GA Convergence Craziness Take some infinite subset of natural numbers $A \subset \mathbb{N}$ . Define $\sigma_A= \sum_{n\in A} \frac{1}{n}$ . For some sets A, $\sigma$ diverges, like the primes $P=\{2,3,5,7,11,13...\}$ and for other sets, $\sigma$ converges, like the squares $S=\{1,4,9,16,25,36,49,64...\}$ . One could say the more "dense" a set A is, the more likely it is to diverge. One could also compare the "density" of two subsets of $\mathbb{N}$ by looking at the asymptotes of their counting functions, i.e. the primes are "denser" than the square numbers because there are more primes under an arbitrarily large x than there are square numbers. But is there such a thing as a "least dense" divergent series? Question: Let A be a sequence of natural numbers, such that $\sigma_A$ diverges. Does there exist such an A where no matter how we partition it into two mutually exclusive subsets ( $A_1 \cup A_2 =A$ but $A_1 \cap A_2$ =null) either $\sigma_{A_1}$ or $\sigma_{A_2}$ must converge? Find such a set A or prove none exists. Conjecture: None exists. For any divergent series A, you will be able to find two mutually exclusive, divergent subsets $A_1$ and $A_2$ . Last edited by Media_Man; May 19th 2009 at 01:28 PM. Take some infinite subset of natural numbers $A \subset \mathbb{N}$ . Define $\sigma_A= \sum_{n\in A} \frac{1}{n}$ . For some sets A, $\sigma$ diverges, like the primes $P=\{2,3,5,7,11,13...\}$ and for other sets, $\sigma$ converges, like the squares $S=\{1,4,9,16,25,36,49,64...\}$ . One could say the more "dense" a set A is, the more likely it is to diverge. One could also compare the "density" of two subsets of $\mathbb{N}$ by looking at the asymptotes of their counting functions, i.e. the primes are "denser" than the square numbers because there are more primes under an arbitrarily large x than there are square numbers. But is there such a thing as a "least dense" divergent series? Question: Let A be a sequence of natural numbers, such that $\sigma_A$ diverges. Does there exist such an A where no matter how we partition it into two mutually exclusive subsets ( $A_1 \cup A_2 =A$ but $A_1 \cap A_2$ =null) either $\sigma_{A_1}$ or $\sigma_{A_2}$ must converge? Find such a set A or prove none exists. Conjecture: None exists. For any divergent series A, you will be able to find two mutually exclusive, divergent subsets $A_1$ and $A_2$ . Hi Media_Man. This is my idea. Pick any number $n_1$ from $A$ and let $k$ be the largest integer such that $2^k\le n_1$. Thus $n_1<2^{k+1}$ but since $A$ is infinite we can pick $n_2\in A\setminus\{n_1\}$ such that $2^{k+1}\le n_2.$ Then pick $n_3\in A\setminus\{n_1,n_2\}$ such that $2^{k+2}\le n_3,$ then $n_4\in A\setminus\{n_1,n_2,n_3\}$ such that $2^{k+3}\le n_4,$ and so on ad infinitum. Thus $\frac1{n_1}+\frac1{n_2}+\frac1{n_3}+\cdots\le\frac 1{2^k}+\frac1{2^{k+1}}+\frac1{2^{k+2}}+\cdots$ and since the RHS converges, so must the LHS. I think I understand what you are saying. You are suggesting that given an arbitrary set A, define $A_1=\{n_1,n_2,n_3,...\}$ as per your algorithm and $A_1$ is guaranteed to converge. This is true. But what I am asking is to find a divergent set A that cannot be partitioned into two divergent subsets. Lemma: If $\sigma_A$ diverges, and $A_1, A_2$ partition A, then we have one of three cases: (i) $\sigma_{A_1}$ diverges but $\sigma_{A_2}$ converges (ii) $\sigma_{A_1}$ converges but $\sigma_{A_2}$ diverges (iii) Both diverge. What I am looking for is a set A for which no subsets $A_1,A_2$ exist satisfying (iii). Let A be the naturals $\{1,2,3,4,5,6,7,...\}$, which diverge. Letting $A_1$ be evens, and $A_2$ be odds, it is easily shown that case (iii) follows. More clever... Let P denote the primes $\{2,3,5,7,11,13,...\}$ . Euler and Goldbach independently showed that $\sigma_P$ diverges. Now let $A_1$ be primes of the form $4k+1$ and $A_2$ be primes of the form $4k+3$ . Since $A_1$ and $A_2$ are equinumerous (Modular Prime Counting Function -- from Wolfram MathWorld), and they cannot both be convergent (as $\sigma_A=\sigma_{A_1}+\sigma_{A_2}$), they must both be divergent. Therefore, can you think of a divergent set that cannot be split into two smaller divergent sets like this? *It seems to me that none would exist because just as in the last example, any divergent set can be partitioned perfectly in half, but can this be shown rigorously? If $\sum_{n=1}^\infty \frac{1}{a_n}$ diverges, does that imply that $\sum_{n=1}^\infty \frac{1}{a_{2n}}$ also diverges, for $1\leq a_1 < a_2 <a_3 < ...$ ??? Yes; rewording the question to a strictly (eventually) decreasing sequence of positive numbers. $\sum_{n=1}^{\infty} a_{2n-1} \geq \sum_{n=1}^{\infty} a_{2n} \geq \sum_{n=1}^{\infty} a_{2n+1}$ Suppose $\sum_{n=1}^\infty a_{2n}$ converges, then $\sum_{n=1}^\infty a_{2n+1}$ converges, and hence $\sum_{n=1}^\infty a_{2n-1}$ converges (adding an constant) Therefore $\sum_{n=1}^\infty a_{2n-1} + \sum_{n=1}^\infty a_{2n} = \sum_{n=1}^{\infty} a_n$ converges. Contradiction Okay, this would imply that given any sequence of natural numbers $1 \leq a_1 < a_2 < a_3 < ...$ for which $\sigma_A$ diverges, it is possible to construct a subset of A, called B, for which the partial sums as L gets large, $\sum_{n=1}^L \frac{1}{b_n} \approx \frac{1}{2}\sum_{n=1}^L \frac{1}{a_n}$ . Likewise we can find a divergent subset of B that grows approximately $\frac{1}{4}$ as quickly as A and so on. Iterating, given any arbitrarily large integer k, there exists a subset of the natural numbers, $A_k$ whose asymptotic growth is approximately $\sum_{n=1}^L \frac{1}{a_n} \approx \frac{1}{2^k}\sum_{n=1}^L \frac{1}{n}$ Therefore there is no such thing as a "least dense" divergent series. A sequence of natural numbers can always be found for which $\sigma=\sum_{n=1}^\infty \frac{1}{a_n}$ diverges arbitrarily The reason I am interested is that it seems to me that the power set of $\mathbb{N}$ is well-ordered. Though uncountable, you can always take two elements of $\mathbb{P}(\mathbb{N})$, say A and B, and say that the partial sums for $\sigma$ when L is greater than some arbitrarily large number are: (i) $\sigma_A > \sigma_B$ (A is less dense) (ii) $\sigma_A < \sigma_B$ (B is less dense) (iii) $\sigma_A \approx \sigma_B$ (A and B are equinumerous) The same can be said for converging elements of $\mathbb{P}(\mathbb{N})$ . Now here is the big question: Picking an element at random, what is the probability that it converges? What portion of sequences of natural numbers are "large sets" ( $\sigma$ diverges) vs. "small sets" ( $\sigma$ converges)? And can we get any kind of handle of a line that can be drawn between them? Now here is the big question: Picking an element at random, what is the probability that it converges? What portion of sequences of natural numbers are "large sets" ( $\sigma$ diverges) vs. "small sets" ( $\sigma$ converges)? And can we get any kind of handle of a line that can be drawn between them? Here's my answer about the last part. There are natural probability measures on $\mathcal{P}(\mathbb{N})=\{0,1\}^{\mathbb{N}}$ (with product $\sigma$-algebra), namely the distributions that consist in picking each number independently with some probability $p\in(0,1)$. This gives random infinite subsets of $\mathbb{N}$ (that have asymptotic density $p$). Moreover, every natural number plays the same role (the measure is shift-invariant), so that these measures appear to be analogs of a "uniform measure". Choosing $p=1/2$ would additionally give a symmetry between the random subset and its complement (they would be distributed alike), but I don't need this assumption in the following. In other words, let $p\in(0,1)$ and let $(X_n)_{n\geq 0}$ be independent random variables distributed according to $P(X_n=1)=p$ and $P(X_n=0)=1-p$. The random subset would be $A=\{n\in\mathbb{N}\ \mid\ X_n=1\}$. And the question is: What is the probability that the sum $S=\sum_{n=1}^\infty \frac{X_n}{n}$ is finite? The answer is..., wait for it,... zero. Disappointingly. But not surprisingly if you know Kolmogorov's 0-1 law, which tells that the answer has to be either 0 or 1. I've come up with various proofs but no fully elementary one. One possibility is to use Paley-Zygmund inequality (this is an easy one, don't get scared by the name!) for partial sums to get that, taking a limit, $P(S\geq\frac{1}{2}\mathbb{E}(S))>0$ which, given Kolmogorov's 0-1 law, leads to the conclusion since $\mathbb{E}[S]=\sum_{n=1}^\infty \frac{p}{n}=\infty$. Another possibility (without K's 0-1 law) is to write $Y_k=\sum_{n=k^2+1}^{(k+1)^2}X_n$. We notice that almost-surely $Y_k\sim_{k\to\infty} (2k+1)p$ because of the law of large numbers (Not quite: in fact there is a subtelty, but nevermind, it can be made to work). Then $S_{k^2}=\sum_{i=0}^{k-1}\sum_{n=i^2+1}^{(i+1)^2}\frac{X_n}{n}$$\geq\sum_{i=0}^{k-1}\sum_{n=i^2+1}^{(i+1)^2}\frac{X_n}{(i+1)^2}=\sum _{i=0}^{k-1}\frac{Y_i}{(i+1)^2}$, and almost-surely $\frac{Y_i}{(i+1)^2}\sim_{i\to\infty} \frac{(2i+1)p}{(i+1)^2}\sim_{i\to\infty} \frac{2p}{i}$, so that the right-hand side sum diverges almost-surely as $k\to\infty$. As a conclusion: for almost-every subset $A$ of $\mathbb{N}$ (according to the previous measures), $\sigma_A=\infty$. The second proof (almost) only uses the fact that a random set has a positive asymptotic density. About this "frontier" thing, here is something you might like (but maybe you know it already): for any $k\geq 1$, the series $\sum_n\frac{1}{n\log n \log\log n\cdots \log\log\cdots\log n}$ (with successively $1, 2, 3,\ldots, k$ nested logs) diverges, while for any $\varepsilon>0$, for any $k\geq 1$, the series $\sum_n\frac{1}{n\log n\log\log n\cdots (\log\log\cdots\log n)^{1+\varepsilon}}$ (only the last function is raised to the power $1+\varepsilon$) converges. For a given $\varepsilon>0$, the higher $k$ is, the slower the converging series decays. This gives an intuition (not a theorem) of how fast a decreasing sequence should at least decay for making a convergent series. Awesome, Laurent. I had a feeling such. I pictured flipping a weighted coin that came up heads if a number was in the set, and tails otherwise. Regardless of how unfair the coin was, the set would still diverge. If you got heads 1% of the time, you'd get a set containing approximately one of every 100 natural numbers, for example. This would be equinumerous to the set $k\mathbb{N}$, which diverges, as $\ sum_{n=1}^\infty \frac{1}{kn}=k\sum_{n=1}^\infty \frac{1}{n}=\infty$ . I was just having trouble showing this rigorously. This is bordering topology more than my better understood subject of number theory. Consider this: For some subset of the naturals $A \in \mathcal{P}(\mathbb{N})$ , define $\alpha(A)=\sum_{a\in A} \frac{1}{2^a}$ . I believe this is a well-defined homomorphism from $\mathcal{P}(\ mathbb{N}) \rightarrow [0,1]$ . Defining $C \subset \mathcal{P}(\mathbb{N})$ as the set of converging elements of $\mathcal{P}(\mathbb{N})$(with the restriction that elements of C be infinite) , we can actually plot C on [0,1] and express the set visually. I believe C is uncountably infinite (proof?) but if the probability that a random element of $\mathcal{P}(\mathbb{N})$ is in C is zero, then it is a totally disconnected set of infinite points with no "length" . This results in some sort of fractal structure with Hausdorff dimension $\in (0,1)$ , doesn't it? This also amounts to seeing the series of flipped coins as the expansion of a real number is base 2 (1 for heads, 0 for tails). A little care should be taken on "unproper" expansions like 0.0111111... which equals 0.10000..., but this is only countably many points. Defining $C \subset \mathcal{P}(\mathbb{N})$ as the set of converging elements of $\mathcal{P}(\mathbb{N})$(with the restriction that elements of C be infinite) , we can actually plot C on [0,1] and express the set visually. I believe C is uncountably infinite (proof?) One simple reason why: take an element $A$ of $C$, then every subset of $A$ is in $C$ as well! And the set of subsets of $A$ is in bijection with $\mathcal{P}(\mathbb{N})$. but if the probability that a random element of $\mathcal{P}(\mathbb{N})$ is in C is zero, then it is a totally disconnected set of infinite points with no "length" . This results in some sort of fractal structure with Hausdorff dimension $\in (0,1)$ , doesn't it? This set should indeed have a complex topological structure. If a number $x=0.a_1a_2\cdots$ is in $C$, then all translates $0.\ast\ast\ast a_1a_2\cdots$ are in $C$. In particular, there are two important subsets of $C$ that can be seen as copies of $C$: the one made of sequences not containing 1 (i.e. of the form $0.0a_2a_3\cdots$) and the one made of sequences containing 1 (i.e. of the form $0.1a_2a_3\cdots$). Heuristically, these copies have half the (euclidian) size of $C$ and there are two of them. I think (not even 10% sure) this should imply that the Hausdorff dimension is either 0 or 1, again... I can't remember enough of fractal theory to conclude... This also amounts to seeing the series of flipped coins as the expansion of a real number is base 2 (1 for heads, 0 for tails). A little care should be taken on "unproper" expansions like 0.0111111... which equals 0.10000..., but this is only countably many points. Yes, I neglected to mention that until later in my post. The set $\{1,2,3\}$ is most certainly not equal to $\{1,2,4,5,6,7,8,...\}$, so this homomorphism really only maps $\mathcal{P}(\mathbb{N}) - F \rightarrow (0,1]$ , where F represents finite elements of $\mathcal{P}(\mathbb{N})$ , or terminating decimals. And yes, as F is countable, it does not pose much of a problem, but I do not know enough about topology to correct this flaw. I think (not even 10% sure) this should imply that the Hausdorff dimension is either 0 or 1, again... I'll have to look into it. It would be impossible to actually "see" this fractal as prescribed here because $\alpha$ converges so quickly, but I believe any function of the form $\alpha_\epsilon (A)=\sum_{a\in A} \frac{1}{(1+\epsilon)^a}$ would work equally well, slowing the convergence enough to be able to distinguish two points who only differ at, say, the 100th or 1000th digit. Also, I know that if a set is countable, it has a Hausdorff dimension of zero - is the converse true? (No answer yet...) It would be impossible to actually "see" this fractal as prescribed here because $\alpha$ converges so quickly, but I believe any function of the form $\alpha_\epsilon(A)=\sum_{a\in A} \frac{1} {(1+\epsilon)^a}$ would work equally well, slowing the convergence enough to be able to distinguish two points who only differ at, say, the 100th or 1000th digit. You won't see anything in any way: the fact that $x$ belongs to $C$ is an asymptotic property, you can't tell it given ANY finite number of digits. This is exactly like you would want to picture the set of rational numbers... I know that if a set is countable, it has a Hausdorff dimension of zero - is the converse true? No, it isn't. May 19th 2009, 02:10 PM #2 May 19th 2009, 07:45 PM #3 Senior Member Apr 2009 Atlanta, GA May 20th 2009, 05:08 AM #4 Junior Member Oct 2008 May 20th 2009, 10:16 AM #5 Senior Member Apr 2009 Atlanta, GA May 21st 2009, 05:27 AM #6 MHF Contributor Aug 2008 Paris, France May 21st 2009, 06:20 AM #7 Senior Member Apr 2009 Atlanta, GA May 21st 2009, 06:49 AM #8 MHF Contributor Aug 2008 Paris, France May 21st 2009, 07:13 AM #9 Senior Member Apr 2009 Atlanta, GA May 21st 2009, 08:12 AM #10 MHF Contributor Aug 2008 Paris, France
{"url":"http://mathhelpforum.com/number-theory/89693-convergence-craziness.html","timestamp":"2014-04-19T00:21:09Z","content_type":null,"content_length":"101805","record_id":"<urn:uuid:bb49a4fb-e9b7-41b3-b063-f91d9cbe8af4>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Statistical/Stata question From Richard Williams <Richard.A.Williams.5@nd.edu> To statalist@hsphsun2.harvard.edu Subject Re: st: Statistical/Stata question Date Tue, 17 Feb 2004 22:03:59 -0500 It isn't silly at all. In effect, you are forcing the first two categories (1 and 2) to have exactly the same odds (probability) for the event. In other words, you are constraining the odds ratio for category2 vs. category1 to be exactly equal to 1. As a consequence, the comparison of category3 to category1 has the same odds ratio as the comparison of category3 to category2 (and similarly for category4). For the purposes of the model, there are only 3 categories: a combined 1/2 category, category3 and category4. © Copyright 1996–2014 StataCorp LP | Terms of use | Privacy | Contact us | What's new | Site index
{"url":"http://www.stata.com/statalist/archive/2004-02/msg00472.html","timestamp":"2014-04-21T02:10:53Z","content_type":null,"content_length":"8576","record_id":"<urn:uuid:9d8c55fd-3cdf-462b-b954-c5faa1746168>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
Difference between interaction and interference of QM systems Suppose I have n identical quantum mechanical systems [latex]\mathcal{H}[/latex] isolated from each other. It is a postulate of quantum mechanics that the states of this composite system are described by rays in the tensor product space [latex]\mathcal{H}^{\otimes n}[/latex]. If the states are not allowed to interfere with each other then the state will be a product state of the form [latex]\otimes_{i=1}^n|\alpha_i \rangle[/latex]. Interference between the wavefunctions opens the possibility of entangled states which cannot be factorized into the form above. In this case, the outcomes of measurements on each of the systems can affect the probabilities of the outcomes of measurements on the remaining systems. An example would be the [latex]2^2[/latex] dimensional Hilbert space associated with an electron/positron pair created in the decay of a neutral pion. Is it correct to say that entanglement is a consequence of wavefunction intereference rather than interaction? In reality the electron/positron pair in the last exam interact via pairwise Coulomb Can interactions other than wavefunction interference affect the entanglement between quantum mechanical systems?
{"url":"http://www.physicsforums.com/showthread.php?t=224839","timestamp":"2014-04-21T14:46:18Z","content_type":null,"content_length":"22736","record_id":"<urn:uuid:e64cf6c4-f2eb-4c33-bc75-c50eb349a9b2>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
Evaluating a New Equipment Purchase 1. 341884 Evaluating a New Equipment Purchase Please use Excel for this question so that the solution can be displayed in a clear and organized manner. Your company is evaluating new equipment that will cost $1,000,000. The equipment is in the MACRS 3-year class and will be sold after 3 years for $100,000. Use of the equipment will increase net working capital by 100,000. The equipment will save $450,000 per year in operating costs. The company's tax rate is 30 percent and its cost of capital is 10%. MACRS Depreciation Percentages for three-year class life assets: 33% 45% 15% 7% Please complete the following: Part a. Calculate the cash flow in Year 0. Part b. Calculate the incremental operational cash flows. Part c. Calculate the terminal year cash flow. Part d. Calculate the project's payback period. Part e. Calculate the project's NPV. Part f. Calculate the project's IRR. Part g. Calculate the project's MIRR. Part h. Make the investment decision: Should the project be accepted or rejected? Why or why not? This solution explains how to evaluate the new equipment purchase by calculating cash flow, payback, NPV, IRR and MIRR. This solution is enclosed within an attached Excel document.
{"url":"https://brainmass.com/business/net-present-value/341884","timestamp":"2014-04-21T04:33:19Z","content_type":null,"content_length":"26690","record_id":"<urn:uuid:b87c15bf-4fa6-41be-9fce-390363bc67f2>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
1. Introduction2. General Description of the Study Area3. Data Treatments and Methodology3.1. Data Processing3.1.1. The Generalization of the Topology and Terrains3.1.2. Generalization from Basic Information on Urbanization Process3.1.3. Generalization of the Water Discharge Systems3.2. Methodology3.2.1. Construction of the Mathematical Models of Urban Rainstorm Waterlogging3.2.2. Extraction of the Directions of Waterlogging on Urban Streets3.2.3. Generation of Irregular Grid Cells3.2.4. Establishing the Topology Relationship for Grids and Passages3.3. Experimental Test of Trip Difficulty Thresholds for Acrossing Waterlogged Area3.3.1. Experimental Methodology3.3.2. Experimental Results4. Results and Discussion4.1. Simulation and Visualization of Urban Waterlogging Scenarios4.2. The Relationship between Trip Difficulty and the Depth and Flow Velocity of Waterlogging4.3. Assessment of the Resident Trip Difficulty and the Visualization Results4.4. DiscussionAcknowledgementsReferences Urban areas face potentially significant threats from both natural and human-made catastrophes. Cities growing rapidly at the current speed become very vulnerable to environmental problems, particularly the rainstorm waterlogging problem. As soon as rainwater starts to accumulate on roadways in urban areas, it will become a hazard for the residents because walking across 20 cm deep water is very difficult. Bikers and motorcycles would have problem passing through 30 cm deep water on roads. The most problematic spots are those under bridges as waterlogging can block traffic flow In the majority of cities, rainstorm waterlogging normally occurs within a very short period of time after heavy rainfalls, but rainstorms are not the only cause of waterlogging problems. As a result of the urbanization process, the topology, terrains, and water convergence conditions in urban areas are altered, leading to the decimation of vegetation coverage and river and pond areas. Large proportions of land have been changed into impervious areas, resulting in reduced ground water holding capacity, shortened water stagnation periods and low permeability, and fast surface runoff. On the other hand, the underground water discharge pipe network facilities are typically not upgraded concurrently with city development, so there is a lack of sufficient capacity for discharging such a large volume of water. These two factors together are responsible for causing urban rainstorm waterlogging disasters. The potential damage from waterlogging in urban area increases as the resident population continues to grow, and buildings are more and more compacted with highly centered economic activities. When a waterlogging disaster happens, the normal routines of daily life and the network of society activities are disrupted. When the commuting and transportation systems are interrupted as roads become impassable to residents and motor vehicles, and life support facilities residents will be inaccessible to residents, resulting in big threats to human life and property. Therefore, to accurately assessing the level of residents’ trip difficulty under rainstorm waterlogging conditions is very important for developing evacuation and emergency management plans during a disaster. When assessing the trip difficulty of urban residents, the best way is to use data recorded in previous urban rainstorm waterlogging events to construct mathematical models which are used to simulate the road conditions under various waterlogging scenarios. In developed countries where urbanization started very early, comprehensive strategies to manage urban waterlogging disasters have been developed and some of their experiences are very valuable [2,3]. From the late 19th to the mid-20th centuries, several very important and hydraulics-mathematical models (also including some hydraulic models) have been proposed consecutively, which are the theoretical basis for urban hydrology studies. The mathematical models include the de Saint-Venant system of equations (1871), Manning Formula (1889), the Thiessen polygon method to interpolate area average precipitation (1911), isochrones unit hydrograph method (1922), Pearson-III type curve method to fit the frequency distribution curve (1924), unit hydrograph method (1932), Muskingum method (1935), Synthesis Unit Hydrograph Method (1938), the Los Angeles hydrograph method (1944), the torrential rain intensity formula described using the exponential function (1950), the instant unit hydrograph of Clark method (1957), water ingress flood hydrograph method (1958), the Chicago method (1960), TRLL calculation program (1963), Muskingum-Cunge method (1969) and others. Those urban hydrological and hydraulic mathematical models are the foundation for simulations of trip difficulty for urban residents under rainstorm waterlogging conditions. Forecasts for urban rainstorm waterlogging early-warning systems play a key role in quick response to a disaster. Estimation of the trip difficulty for residents in the waterlogged area is a critical component for any disaster management plan. Japan is a country experiencing very frequent natural disasters. The country had to build a strong capacity to cope with urban waterlogging disasters. Presently in Japan, most of the urban rainstorm waterlogging studies use hydraulic tests to determine the safety thresholds for residents to walk across waterlogged areas, which is used to select the evacuation routes [4]. In a study by Daichi’s group, the authors developed a method to analyze the waterlogging data and to estimate the possibility for the residents to evacuate on foot during a disaster [5]. A comprehensive strategy was developed by also taking consideration human factors during the evacuation process. They also studied an evacuation process when basements were flooded by applying the hydraulic principles [6]; the pressure force exerted from water logging on doors per unit area was estimated, which, in combination with the flow rate and depth of the rainwater, was used to assess the trip difficulty for the residents to evacuate on foot. Then using a large scale in-door hydraulic test [7], the threshold values were determined for residents to walk across the waterlogged area, the length of time required to escape a car, and the relationship between water depth and the difficulty level for humans to escape a waterlogging disaster. In a study by Suga et al. [8,9] five factors, including the flood velocity, water depth, distance from disaster shelter, walking speed, and the awareness of risk in residents were compared, and the threshold values for residents to walk across waterlogged areas were estimated. In summary, several studies have been performed to assess residents’ trip difficulty level to evacuate on foot during an urban rainstorm waterlogging disaster. In most of the reported case studies, the disaster-affected objects and the surrounding environments were selected as the influential factors during the assessment procedure, whereas the dynamics of waterlogging process and simulation under different scenarios was not been fully investigated. In the present study, the process of development of an urban rainstorm waterlogging case was used initially to construct mathematical models which were then used to simulate the dynamics of urban rainstorm waterlogging formation. Factors such as water depth, flow directions, and water flow velocity were all embedded in the models, furthermore, irregular grids were used as skeletons for modeling which made the simulation more accurate. Additionally, scenario simulation methods were used to plot road waterlogging situations under different schemes of rainstorms, then locations of roads under waterlogging condition, the expanse of waterlogged areas, water depth, water flow velocity and directions were simulated dynamically. All of those factors were considered when assessing the trip difficulty for residents in waterlogged areas. The simulation results are very instructive for protecting the safety of residents and their propery, for directing the traffic, and for enhancing forecasting and early warning systems, thus taking appropriate actions to mitigate the risk of urban rainstorm waterlogging disasters. Ha-Erbin is the capital city of HeiLongjiang province in China, it is located at 125°42′–130°10′ east longitude and 44°04′–46°40′ north latitude (Figure 1). The city is the center of politics, economy, culture and transportation in Northeastern China, it has the largest metropolitan area in the region and is also the second largest provincial capital city in terms of expanse and population and one of the top 10 largest cities in China. The city has an area over 53.1 thousand square km, consisting of eight districts and 10 counties (suburbs). The total population is 10.635 million people, and downtown residents are over 5.879 million people. The average annual rainfall precipitation is 569.1 mm, summer is hot and humid with frequent rains. Major precipitation occurs in June–August, which receive over 60–70% of total annual rainfall. Because of the rainfall seasonality, rainstorms occur periodically during the summer. Diagram of the study area. In recent years, the city of Ha-Erbin has been growing rapidly. Neither the new nor the old towns have efficient drainage pipelines, and the cross sectional area of the drainage system is too small. According to the records, in 2011 the water discharge pipeline in the city was 993 km long. Drainage pipes only covered 66% of the area, leaving the remaining 34% of the urban area with no drainage outlets. What made it worse was that over 30% of the pipelines were outdated, and there were 27 km of pipelines that have been in use for over 70 years and the pipes were seriously aged. According to current national regulations, urban water discharge pipelines should have a discharge capacity of 185 m^3/s to handle a medium-rainfall (25 mm/h precipitation), and the standard for urban rain water sewer line density is 11 km per square km, for rain water and sewer convergence discharge pipeline density it is 8 km per square km. In Ha-Erbin, the water discharge capacity was only 117 m^3/s, which is 68 m^3 lower than the medium-rain standard. As a consequence, when there was a medium-rainfall, rain water would accumulate on road surface at a rate of 68 m^3/s, and within about half an hour, some roads started to have paddling-depth water. In the city, rain water and sewage are discharged through the same pipelines. The discharge pipeline density was 5.36 km/square km, which is 30% below the national standard. In the old town, as population continued to increase, areas of water convergence also expanded. However, improvement of the water discharge pipelines fell far behind, and as a consequence some road sections would have water logging due to a lack of drainage system. As more household garbage was produced and continuously disposed of into the drainage pipe lines, some of the pipes became clogged, and the roadways were heavily flooded when it rained, affecting residents’ safety. Urban waterlogging scenarios were simulated for road conditions under various rainstorm conditions. After fitting the historical data of urban rainstorm waterlogging, models were constructed for three scenarios which would occur once every 10, 20 or 50 years. Then data for the waterlogged locations, and the depth and flowing velocity and directions of water, were generated to construct the GIS topic maps for the city (Figure 6, Figure 7, Figure 8). The velocity of waterlogging under three scenarios of reoccurrence of rainstorm: from right to left: every 10 years, 20 years, 50 years. The flowing direction of waterlogging after rainstorm in urban area. Options of accessible-roads were used as major criteria for assessing the trip difficulty in urban rainstorm waterlogging, because waterlogged roads would be impassable to residents, thus threatening the mobility and safety of residents. The major factors affecting road accessibility are the location of the waterlogged streets, the size of the area under water, and the depth, flow velocity and direction of water. After combining information from historical record data and results from the experimental tests conducted in this study as well as referring to published papers [23,24,25], it was concluded that when water on the road is 0.5 m deep and flowing at 1.0 m/s, walking across water becomes very difficult. When water reaches above 1 m deep and moving at 1.5 m/s, the roads would be inaccessible to residents [26,27]. The difficulty levels of walking across areas with different depths and flowing rate of waterlogging were assessed (Table 3). ijerph-09-02057-t003_Table 3 Relationship between trip difficulty for residents and the depth of waterlogging. h < 0.5 m 0.5 m < h < 1 m h > 1 m v < 0.5 m/s Passable Passable Difficult 0.5 m/s < v < 1.5 m/s Passable Difficult Impassable v > 1.5 m/s Difficult Impassable Impassable Using the simulated results under the three unban rainstorm waterlogging scenarios described above, subject maps were generated for the waterlogging depth and flow velocity using ARCGIS. Then the interpolation and buffer wizards in the conversion tools were activated to import the simulated data for water depth and flow rate to obtain the trip difficulty threshold value for each of the grids. Based on the experimental results, the historical records and the situation in downtown Ha-Erbin, the trip difficulty in urban rainstorm waterlogging conditions was divided into three levels: passable roads (water depth 0 m-0.5 m, and flow velocity at 0 m/s-0.5 m/s), difficult but passable roads (water depth at 0.5 m-1 m and flow velocity at 0.5 m/s-1.5 m/s), and impassable roads (water deeper than 1m and flowing velocity above 1.5 m/s). From Figure 9, it can be seen that in the urban rainstorm waterlogging scenario which reoccurs once every 10 years, only three grids were impassable for residents, while other areas remained open. Under this scenario, the safety of residents travelling on foot and traffic flow should be normal. In the once-every-20-years-scenario, 13 grids were impassable, while other areas were open to the traffic. This situation would have some impact on the mobility of residents, traffic jams could occur on some waterlogged roads. In the once-every-50-years scenario, there were 86 grids that would become impassable to residents, while the rest of the city were still open. This situation would have caused traffic jams in expanded areas, roads could be closed for residents and vehicles. Emergency response measures should be taken by the respective agencies in charge to resolve the traffic problems. Evaluation of resident trip difficulty under the three scenarios. This study was performed in the downtown (Daoli) district in Ha-Erbin. The two-dimensional unsteady flow was used as the basic controlling equation, and irregular grids as the basic skeletons. A “three-layer model” was constructed to simulate various urban rainstorm waterlogging scenarios. The model was put into a test on 10 August 2011 in a strong thunderstorm rainfall caused by the north-bound hurricane “Muifa” in Ha-Erbin, and it was proven effective. Meanwhile, according to the rainstorm characteristics in Ha-Erbin, the model was used to simulate the resident trip difficulty in urban rainstorm waterlogging scenarios that reoccur once every 10 years, 20 years, or 50 years. Urban rainstorm waterlogging can be very treacherous in certain areas. Together with the fast moving speed and rapid-spreading properties of precipitation from rainstorm, it is hard to develop an efficient strategy to control the waterlogging situation. Deploying an inappropriate approach can actually exacerbate the resultant damage. After waterlogging spreads into large areas, it will become very difficult to evacuate residents or divert traffic, it also makes it harder for the rescue workers to reach the disaster-stricken areas. In this study, several small-scale urban rainstorm waterlogging scenarios were assessed for the consequential effects on trip difficulty for residents. The research can be expanded to the assessment of urban basic construction, transportation roads and systems, and hazardous disasters in the ecological systems. The simulation methods are applicable in the prediction, forecast and validation of different types of natural disasters on regional scales, the construction of the vulnerable curves in response to natural disasters in urban area, and the enhancement of rapid response to emergency waterlogging situation in urban areas. Application of the technology will benefit rapid assessment of potential disaster damages, improving the level of management and mitigation of risks against natural disasters.
{"url":"http://www.mdpi.com/1660-4601/9/6/2057/xml","timestamp":"2014-04-20T16:43:22Z","content_type":null,"content_length":"87099","record_id":"<urn:uuid:fce7a2fe-4c42-4348-b9ba-3056a2d6cb0a>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00009-ip-10-147-4-33.ec2.internal.warc.gz"}