content
stringlengths
86
994k
meta
stringlengths
288
619
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Best Response You've already chosen the best response. Hospital officials estimate that approximately N(p)=p^2+5p+900 people will seek treatment in an emergency room each year if the population of the community is thousand. The population is currently 20,000 and is growing at the rate of 1,200 per year. At what rate is the number of people seeking emergency room treatment increasing? Best Response You've already chosen the best response. Did you forget something between the words "is" and "thousand"? Best Response You've already chosen the best response. I think you're looking for \[\frac{dN}{dt}\] Best Response You've already chosen the best response. which can be found by looking at \[\frac{dN}{dt}=\frac{dN}{dp}\frac{dp}{dt}\] Best Response You've already chosen the best response. First, start with your function\[N(p)=p ^{2}+5p+900\] Let's take a derivative of both sides to unlock the rates of change that are related\[\frac{d}{dt}N(p)=\frac{d}{dt}(p ^{2}+5p+900)\] Then we can simplify to this:\[\frac{dN}{dt}=2p \frac{dp}{dt}+5\] From here, let's plug in what we know:\[1,200=2(20,000) \frac{dp}{dt}+5\] From here, you can solve for dp/dt Best Response You've already chosen the best response. so why did dp/dt only come up with the p^2 term and not the 5p term Best Response You've already chosen the best response. I'm wondering about that myself actually... Best Response You've already chosen the best response. and I'm actually looking for dN/dt according to the way this is making me input my answer Best Response You've already chosen the best response. Good gawd - then I really messed that one up... lol Best Response You've already chosen the best response. haha its all good I've been messing this one up for about an hour Best Response You've already chosen the best response. Let's try that one again... From the top - take two! Best Response You've already chosen the best response. Here is our function:\[N(p)=p^2+5p+900\] Let's first identify some things what we are given, and we'll identify what they're asking us for (whenever I skip that, I screw things up). Best Response You've already chosen the best response. Best Response You've already chosen the best response. that was right to the function not to you screwing up by the way haha Best Response You've already chosen the best response. The population is currently 20,000; so p = 20,000 It is growing at a rate of 1,200 per year; so dp/dt = 1,200 Best Response You've already chosen the best response. They're asking for the the rate at which the number of people seeking medical attention is increasing; so dN/dt = ? Best Response You've already chosen the best response. That's what we're trying to find. :) Best Response You've already chosen the best response. ok well what if we take what you had a second ago 2(20)(dp/dt) +5 and plug in 1200 for dp/dt and yes thats what we are trying to find hha Best Response You've already chosen the best response. well actually thats just going to give us a ridiculously huge number Best Response You've already chosen the best response. But, will the number make sense? Best Response You've already chosen the best response. What did you get? Best Response You've already chosen the best response. no it was like 48 million Best Response You've already chosen the best response. \[\frac{d}{dt}N(p)=\frac{d}{dt}(p^2+5p+900)\]Lets plug these things into the right places this time... \[\frac{d}{dt}N(p)=2p \frac{dp}{dt}+5\frac{dp}{dt}\] \[\frac{dN}{dt}=2(20,000)(1,200)+5 Best Response You've already chosen the best response. ya thats what i did Best Response You've already chosen the best response. Oh man... I got nothing... And nothing was left out of the question? Best Response You've already chosen the best response. ya i don't get it either man thanks anyway Best Response You've already chosen the best response. I'm going to take a look at this one on the calculator really quick, just to see if that will shine a little light on this one. Best Response You've already chosen the best response. Wait a sec... Best Response You've already chosen the best response. You originally wrote: "Hospital officials estimate that approximately N(p)=p^2+5p+900 people will seek treatment in an emergency room each year if the population of the community is thousand. The population is currently 20,000 and is growing at the rate of 1,200 per year. At what rate is the number of people seeking emergency room treatment increasing?" Best Response You've already chosen the best response. Did you instead mean: "Hospital officials estimate that approximately N(p)=p^2+5p+900 people will seek treatment in an emergency room each year if the population of the community IN thousands." ? Best Response You've already chosen the best response. Because if you did, then we should be plugging in 20 instead of 20,000. Actually, that may fix the problem. :) Best Response You've already chosen the best response. If the function is set up to inherently measure in thousands, then by typing in 20,000 - we're accidentally making the population 20,000,000. Best Response You've already chosen the best response. @TheFigure you already reached the answer: 48,006,000 people seek for emergency care per year! Best Response You've already chosen the best response. But he would be entering the answer in wrong if he had those extra zeros attached to the back of it. Best Response You've already chosen the best response. You've been right since 30 min before :) Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4f6e94efe4b0772daa08bd4b","timestamp":"2014-04-16T13:08:01Z","content_type":null,"content_length":"110231","record_id":"<urn:uuid:1cb141a2-aed5-40c8-8df2-729910df8ccc>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
Post a reply Oh thank you admin!!! thank you so much.. your reply has been so helpful..i really appreciate your help... thanks a lot man.. you really made my life easier.. thank you.....oh.. you do not have any idea how happy you made me... God bless you man... really, i mean it.. and can you tell me where & how you exactly got the answer from??? thanks a lot once again...
{"url":"http://www.mathisfunforum.com/post.php?tid=125&qid=655","timestamp":"2014-04-16T13:25:06Z","content_type":null,"content_length":"16332","record_id":"<urn:uuid:f65380f7-150a-435c-a415-253f0f689d10>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Henk's Characterization of Recursivity Csaba Henk tphhec01 at degas.ceu.hu Sat Jul 10 13:19:23 EDT 2004 On Thu, Jul 01, 2004 at 04:45:41PM -0400, Ali Enayat wrote: > Let's begin with the classical characterization of recursivity in terms of > TM's (Turing Machines): > 1. A set X is recursive if there is a (deterministic) TM which can "decide" > all questions of the form "is q in X?". In other words, X is recursive if > there a TM with the following properties: > (a) T outputs a single symbol iff T reaches a halting state. > (b) T either outputs s=0 or s=1 if T reaches the halting state. > (c) For every input q, there exists a natural number t such that T halts > after t steps. > (d) T outputs 1 on input q iff q is in X. > 2. Some definitions: > (a) Let us call a TM with properties (a) and (b) a DECISIVE TM. Note that > (c) and (d) are not included in this definition. > (b) Suppose T is a decisive TM. Let us say "T accepts q" (T rejects q) if T > outputs 1 (T outputs 0), when q is the input of T. Also, let us say "T > decides q" if either T accepts q, or T rejects q. > 5. Suppose t,e, s, and q are natural numbers. The statement "After t steps, > the e-th decisive TM outputs s, when provided with q as input" can be > expressed as a bounded formula. This is definitely classical, and in my > judgement has been known **at least** since the discovery of Tranhtenbrot's > theorem. For making back-reference easy, let's call your bounded formula psi(t,e,s,q), or if the TM (so its index e) is fixed, psi_e(t,s,q). > 6. We can now explain the "hard" direction of Henk's characterization: > Suppose X is a recursive set. Then there is some decisive TM T that > witnesses the recursivity of X (as in (1) above). > Consider the formula phi(x) = "the e-th decisive TM accepts x", where T is > the e-th decisive TM. What is phi? What I can imagine is that you think of "there exists t such that psi_e(t,1,x)", it's a sigma_1 formula. > Let V be the set of hereditarily finite sets. By (1) through (5), for every > q, there is some initial segment F(q) of V such that F(q) satisfies "T > decides q". Therefore the following are equivalent: This "T decides q" is a bit ambiguous. If we are in V, it's just equivalent with "True", so then the requirement wrt. F(q) is void. However, I suppose you'd formalize "T decides q" as follows: "there exists t such that psi_e(t,0,x) or psi_e(t,1,x)". > (i) V satisfies phi(q) > (ii) F(q) satisfies phi(q) > (iii) Every end extension of F(q) satisfies phi(q). Now I think you've made mistake here (given that you'd formalize phi in the way suggested above): (ii) and (iii) are *not* equivalent. That is, consider the case when q is not in X, ie., T halts with 0 on q, ie., "there exists t such that psi_e(t,0,x)". In this case, F(q) won't satisfy phi(q), of course. But it's quite possible that you can end extend F(q) such that phi(q) will be valid in the extension -- so, in this pathological extension "there exists t, t' such that psi_e(t,0,x) and psi_e(t',1,x)" will hold. The difficulty lies in re-formulating the above phi in a safe way, which can cope with those "pathological" extensions. It is possible -- it's neither trivial nor too hard, doing this doesn't require a genius, one just have to build the formula carefully, the whole procedure resembles the way how Rosser extended the scope of Godel's noncompleteness result from omega-consistent Peano models to arbitrary ones. Anyway, in my honest opinion the necessity of this re-formulation is enough for raising my statement above the "folklore" status. * * * One more comment: I re-read your proof for the "easy" direction, thanks for that, it's really an elegant solution, much sorter than my original Csaba Henk "There's more to life, than books, you know but not much more..." [The Smiths] More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2004-July/008313.html","timestamp":"2014-04-16T06:02:35Z","content_type":null,"content_length":"6884","record_id":"<urn:uuid:a7ecda67-67f1-4053-8186-ada98211affe>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
Can you separate the integral of product functions? December 30th 2008, 12:40 PM Can you separate the integral of product functions? Hi everyone, This may be a very simple 'No', but it's been going on 15 years since I've had any regular calculus quality time, so I can't remember. if I have the integral of A(x) * B(x) dx, is there a way to separate that into 2 independent integrals? And, second question, is there a simple integral of (where A is constant) or is that an elliptic integral? December 30th 2008, 12:49 PM No. In general $\int f\cdot g~dxe \int f~dx\cdot\int g~dx$ For quick counterexample to the opposite of above is $f=x,g=e^x$. And, second question, is there a simple integral of (where A is constant) or is that an elliptic integral? Try the substitution $x=2\arctan(z)$ (Hi) If you have any problems stop back! December 30th 2008, 12:52 PM Hi everyone, This may be a very simple 'No', but it's been going on 15 years since I've had any regular calculus quality time, so I can't remember. if I have the integral of A(x) * B(x) dx, Parts is often a choice for something like this. And, second question, is there a simple integral of (where A is constant) or is that an elliptic integral? This has a general form. $\displaystyle\frac{ 2tanh^{-1}\left(\displaystyle\frac{(A-1)tan(\frac{x}{2})}{\sqrt{(a-1)(A+1)}}\right)}{\sqrt{(A+1)(A-1)}}$ December 31st 2008, 07:30 AM Thanks. Actually, the post by galactus reminded me to look into Integration by parts, and the Wiki page mentions this: $\int f\cdot g~dx = f \int g~dx - \int ( f' \int g~dx ) ~dx$ So, I'm going to try the substitution you mentioned below, and see what I can find. I have a feeling that I am in reality trying to invent a very old and well-worn wheel, but, as I'm just dabbling in my spare time, I don't mind. THank you though for your suggestions! Do you mean, given the integral I mentioned in my OP, change it to integrate WRT z instead of x, such that $z = tan(x/2)$? I'll see what I can figure out. December 31st 2008, 07:33 AM So, 3 things. First, I love the joke in your sig. I almost spit milk out my nose, and I haven't even drunk any milk this week. Second, thanks for the help on the integral, as well as the reference to parts. Third - I saw that $tanh$, and immediately shuddered. I never did understand the hyperbolic functions back in high school, and never went back to them later.... December 31st 2008, 11:13 AM Now that I am looking at this in more detail, I am not sure how you got from A to B. Is there someplace you could point me that might explain the process you took? What my original post boiled down to was that I am trying to figure out : $\int\displaystyle\frac{1 + cos \theta}{1 + A*cos \theta} ~d\theta$ over the interval from $0 - 2 \pi$, where A is a constant. I don't know how to do it, so I was trying to tease out possible strategies without blatantly asking people to do it for me. By trying integration by parts, given $\int f(x)*g(x) = f(x) * \int g(x)~dx - \int [f'(x) \int g(x)~dx]~dx$ I set $g(x) = 1+cos \theta$ $f(x) = 1+a*cos \theta$ such that I could substitute the first half of the parts equation in as follows: $f(x) * \int g(x)~dx = \displaystyle\frac{\theta + sin \theta}{1 + A*cos \theta}$ and the second part would be: $\int(f'(x) \int g(x)~dx)~dx = \int [f'(x)*(\theta + sin \theta)]~d\theta$ It's calculating $f'(x)$ that I am tripping myself. my first impulse is to think that since the exponent is -1, just multiply f(x) by negative 1 and decrease the exponent to negative 2, but there is a huge nagging image of my Calc teacher yelling something about throwing in the derivative of cos(theta) as well. But then I think, well, simplifying the second terms results in the integral of the derivative of f(x), and isn't that just f(x)? Or did I go wrong long before all that? December 31st 2008, 11:24 AM If you use the sub that mathstud suggested(it's called the Weierstrauss substitution) it works out OK. $x=2tan^{-1}(u), \;\ dx=\frac{2}{u^{2}+1}du, \;\ u=tan(\frac{x}{2})$ Then, upon making the subs, it whittles down to: Integrate and resub. The solution may be different than the one I posted, but equivalent. December 31st 2008, 01:30 PM Here you go my friend: Let $x\longmapsto 2\arctan(z)$ so we get $dx=\frac{2}{1+z^2}dz$. So subbing gives $\int\frac{dx}{a+b\cos(x)}\stackrel{x=2\arctan{z}}{ \longmapsto}2\int\frac{1}{a+b\cos\left(2\arctan(z) \right)}\cdot\frac{dz}{1+z^2}$ Now remember three things $\cos(\arctan(\phi))=\frac{1}{\sqrt{1+\phi^2}}\quad (2)$ $\sin(\arctan(\phi))=\frac{\phi}{\sqrt{1+\phi^2}}\q uad(3)$ So combining these three things we can see that \begin{aligned}\cos\left(2\arctan(z)\right)&=\cos^ 2\left(\arctan(z)\right)-\sin^2\left(\arctan(z)\right)\\<br /> &=\left(\frac{1}{\sqrt{1+z^2}}\right)^2-\left(\frac{z}{\sqrt{1+z^2}}\right)^2\\ <br /> &=\frac{1}{1+z^2}-\frac{z^2}{1+z^2}\end{aligned} \begin{aligned}2\int\frac{dz}{a+b\cos\left(2\arcta n(z)\right)}\cdot\frac{1}{1+z^2}&=2\int\frac{dz}{a +\frac{b}{1+z^2}-\frac{bz^2}{1+z^2}}\cdot\frac{1}{1+z^2}\\<br /> &=2\int\frac{dz}{a(1+z^2) +b-bz^2}\\<br /> &=2\int\frac{dz}{(a-b)z^2+a+b}\end{aligned} Now consider this. We can rewrite $(a-b)z^2$ as $\left(\sqrt{a-b}z\right)^2$ So now make a trig sub $\sqrt{a-b}z\longmapsto\sqrt{a+b}\tan(\vartheta)$ so $dz=\frac{\sqrt{a+b}}{\sqrt{a-b}}\sec^2(\vartheta)d\vartheta$ Subbing this gives \begin{aligned}2\int\frac{dz}{\left(\sqrt{a-b}z\right)^2+a+b}&\stackrel{\sqrt{a-b}z=\sqrt{a+b}\tan(\vartheta)}{\longmapsto}2\frac{ \sqrt{a+b}}{\sqrt{a-b}}\int\frac{\sec^2(\vartheta)}{\left(\sqrt {a+b}\t an(\vartheta)\right)^2+a+b}d\vartheta\\<br /> &=\frac{2}{\sqrt{a-b}\sqrt{a+b}}\int\frac{\sec^2(\vartheta)}{\tan^2(\ vartheta)+1}d\vartheta\end{aligned} Now remember that \begin{aligned}\tan^2(\phi)+1&=\frac{\sin^2(\phi)} {\cos^2(\phi)}+1\\<br /> &=\frac{\sin^2(\phi)+\cos^2(\phi)}{\cos^2(\phi) }\\<br /> &=\frac{1}{\cos^2(\phi)}\\<br /> &=\sec^2(\phi)\end{aligned} \begin{aligned}\frac{2}{\sqrt{a-b}\sqrt{a+b}}\int\frac{\sec^2(\vartheta)}{\tan^2(\ vartheta)+1}d\vartheta&=\frac{2}{\sqrt{a-b}\sqrt{a+b}}\int~d\vartheta\\<br /> &=\frac{2\vartheta}{\sqrt{a-b}\ So we must make our tedious backsubs $\sqrt{a-b}z=\sqrt{a+b}\tan(\vartheta)\implies \arctan\left(\frac{\sqrt{a-b}z}{\sqrt{a+b}}\right)=\vartheta$ But now remember that $x=2\arctan(z)\implies \tan\left(\frac{x}{2}\right)=z$ So we finally get $\int\frac{dx}{a+b\cos(x)}=\frac{2\arctan\left(\fra c{\sqrt{a-b}\tan\left(\frac{x}{2}\right)}{\sqrt{a+b}}\right) }{\sqrt{a^2-b^2}}+C$ I hope that helps (Hi) December 31st 2008, 01:35 PM Go Mathstud(Clapping) January 2nd 2009, 08:46 AM Here you go my friend: Let $x\longmapsto 2\arctan(z)$ so we get $dx=\frac{2}{1+z^2}dz$. So subbing gives $\int\frac{dx}{a+b\cos(x)}\stackrel{x=2\arctan{z}}{ \longmapsto}2\int\frac{1}{a+b\cos\left(2\arctan(z) \right)}\cdot\frac{dz}{1+z^2}$ Now remember three things $\cos(\arctan(\phi))=\frac{1}{\sqrt{1+\phi^2}}\quad (2)$ $\sin(\arctan(\phi))=\frac{\phi}{\sqrt{1+\phi^2}}\q uad(3)$ So combining these three things we can see that \begin{aligned}\cos\left(2\arctan(z)\right)&=\cos^ 2\left(\arctan(z)\right)-\sin^2\left(\arctan(z)\right)\\<br /> &=\left(\frac{1}{\sqrt{1+z^2}}\right)^2-\left(\frac{z}{\sqrt{1+z^2}}\right)^2\\ <br /> &=\frac{1}{1+z^2}-\frac{z^2}{1+z^2}\end{aligned} \begin{aligned}2\int\frac{dz}{a+b\cos\left(2\arcta n(z)\right)}\cdot\frac{1}{1+z^2}&=2\int\frac{dz}{a +\frac{b}{1+z^2}-\frac{bz^2}{1+z^2}}\cdot\frac{1}{1+z^2}\\<br /> &=2\int\frac{dz}{a(1+z^2) +b-bz^2}\\<br /> &=2\int\frac{dz}{(a-b)z^2+a+b}\end{aligned} Now consider this. We can rewrite $(a-b)z^2$ as $\left(\sqrt{a-b}z\right)^2$ So now make a trig sub $\sqrt{a-b}z\longmapsto\sqrt{a+b}\tan(\vartheta)$ so $dz=\frac{\sqrt{a+b}}{\sqrt{a-b}}\sec^2(\vartheta)d\vartheta$ Subbing this gives \begin{aligned}2\int\frac{dz}{\left(\sqrt{a-b}z\right)^2+a+b}&\stackrel{\sqrt{a-b}z=\sqrt{a+b}\tan(\vartheta)}{\longmapsto}2\frac{ \sqrt{a+b}}{\sqrt{a-b}}\int\frac{\sec^2(\vartheta)}{\left(\sqrt {a+b}\t an(\vartheta)\right)^2+a+b}d\vartheta\\<br /> &=\frac{2}{\sqrt{a-b}\sqrt{a+b}}\int\frac{\sec^2(\vartheta)}{\tan^2(\ vartheta)+1}d\vartheta\end{aligned} Now remember that \begin{aligned}\tan^2(\phi)+1&=\frac{\sin^2(\phi)} {\cos^2(\phi)}+1\\<br /> &=\frac{\sin^2(\phi)+\cos^2(\phi)}{\cos^2(\phi) }\\<br /> &=\frac{1}{\cos^2(\phi)}\\<br /> &=\sec^2(\phi)\end{aligned} \begin{aligned}\frac{2}{\sqrt{a-b}\sqrt{a+b}}\int\frac{\sec^2(\vartheta)}{\tan^2(\ vartheta)+1}d\vartheta&=\frac{2}{\sqrt{a-b}\sqrt{a+b}}\int~d\vartheta\\<br /> &=\frac{2\vartheta}{\sqrt{a-b}\ So we must make our tedious backsubs $\sqrt{a-b}z=\sqrt{a+b}\tan(\vartheta)\implies \arctan\left(\frac{\sqrt{a-b}z}{\sqrt{a+b}}\right)=\vartheta$ But now remember that $x=2\arctan(z)\implies \tan\left(\frac{x}{2}\right)=z$ So we finally get $\int\frac{dx}{a+b\cos(x)}=\frac{2\arctan\left(\fra c{\sqrt{a-b}\tan\left(\frac{x}{2}\right)}{\sqrt{a+b}}\right) }{\sqrt{a^2-b^2}}+C$ I hope that helps (Hi) Yes, absolutely, it helps tremendously. Now, if I try to work through things on my own, given the equation I am truly interested in: $\int\frac{1+cos(\theta)}{1+b\cdot cos(\theta)}~d\theta$ if I start off with the same substitution you do, I need to take care of the cos term now in the numerator. So, $cos(\theta) = cos(2arctan(z))$ $cos(2x) = cos^2 (x) - sin^2 (x)$ $cos(2arctan(z)) = cos^2 (arctan(z)) - sin^2 (arctan(z))$ $cos(2arctan(z)) = \frac{1}{1+ z^2 } - \frac{z^2 }{1+ z^2 }$ putting this into the larger equation, we get $2\int\frac{1 + \frac{1 - z^2}{1+ z^2}}{a+b\cos\left(2\arctan(z)\right)}\cdot\frac{d z}{1+z^2}$ simplifying the numerator then gives: $4\int\frac{\frac{1}{1+ z^2}}{a+b\cos\left(2\arctan(z)\right)}\cdot\frac{d z}{1+z^2}$ Am I right so far? Now, should I continue from here with the same substitution of Or should I choose a different function than tan? January 2nd 2009, 06:39 PM Eh, you can do it that way...but try seperating the integrals and considering $\int\frac{\cos(x)}{a+b\cos(x)}dx=\int\frac{\cos(x) }{1+a\sqrt{1-\sin^2(x)}}dx\stackrel{\varphi=\sin(x)}{\longmapst o}\int\frac{d\varphi}{a+b\sqrt{1-\varphi^2}}$ January 5th 2009, 07:51 AM Thanks! Now, thinking back, and looking around, I know that $\int\frac{d\varphi}{\sqrt{1-\varphi^2}} = \arcsin(\varphi)$ and if I try $\int\frac{d\varphi}{b\cdot\sqrt{1-\varphi^2}} = \frac{\arcsin(\varphi)}{b}$ But, throwing that 'a' term in there is confusing the dickens out of me. Is there a way to work around that? January 5th 2009, 08:20 AM Thanks! Now, thinking back, and looking around, I know that $\int\frac{d\varphi}{\sqrt{1-\varphi^2}} = \arcsin(\varphi)$ and if I try $\int\frac{d\varphi}{b\cdot\sqrt{1-\varphi^2}} = \frac{\arcsin(\varphi)}{b}$ But, throwing that 'a' term in there is confusing the dickens out of me. Is there a way to work around that? Wow. I just tried an online integral calculator (no idea how accurate, it's at Online Integral Calculator ), and the result it spat out, given the form above was that it cold not be integrated. But, when I removed the $\varphi = \sin(x)$ substitution, and let a=1, was: $\int\frac{\cos(x)}{1+b\cdot\cos(x)}dx =$$-\frac{2\cdot\sqrt{1-b}\cdot\sqrt{1+b}\cdot\arctan(\frac{\sin(x)}{\cos( x)+1}) - 2\cdot\arctan(\frac{\sqrt{1+b}\cdot(\sin(x)}{\sqrt {1-b}\cdot(\cos(x) based on whether $(b-1)\cdot(b+1)$ was positive or negative. But, it gave no suggestion as to how that was all computed. I am not one to look a gift horse in the mouth, but since I started trying to figure these integrals out solely for the excuse to play with math, I'd rather not include a 'wave magic wand *here*' step. Any ideas how this came about? Of course, this is all moot if I am integrating from 0 to 2 $\pi$, as all the arctan terms become 0. January 5th 2009, 12:15 PM Wow. I just tried an online integral calculator (no idea how accurate, it's at Online Integral Calculator ), and the result it spat out, given the form above was that it cold not be integrated. But, when I removed the $\varphi = \sin(x)$ substitution, and let a=1, was: $\int\frac{\cos(x)}{1+b\cdot\cos(x)}dx =$$-\frac{2\cdot\sqrt{1-b}\cdot\sqrt{1+b}\cdot\arctan(\frac{\sin(x)}{\cos( x)+1}) - 2\cdot\arctan(\frac{\sqrt{1+b}\cdot(\sin(x)}{\sqrt {1-b}\cdot(\cos(x) based on whether $(b-1)\cdot(b+1)$ was positive or negative. But, it gave no suggestion as to how that was all computed. I am not one to look a gift horse in the mouth, but since I started trying to figure these integrals out solely for the excuse to play with math, I'd rather not include a 'wave magic wand *here*' step. Any ideas how this came about? Of course, this is all moot if I am integrating from 0 to 2 $\pi$, as all the arctan terms become 0. Yes, this is most likely the correct answer. It is not exactly the same result as mine, but I am sure with an elementary manipulation this reduces to my antiderivative. To handle the integral that I left you with try multiplying the by denominator's conjugate, i.e. $\frac{a-b\sqrt{1-\varphi^2}}{a-b\sqrt{1-\varphi^2}}$. Alternatively, before ever subbing $\varphi =\sin(x)$ you can make the sub $x=2\arctan(\vartheta)$ similar to the first integral. Note that we can "simplify" this slightly \begin{aligned}\int\frac{\cos(x)}{a+b\cos(x)}~dx&= \frac{1}{b}\int\frac{a+b\cos(x)-a}{a+b\cos(x)}~dx\\<br /> &=\frac{1}{b}\int\left\{1-\frac{a}{a+b\cos(x)}\right\}~dx\end{aligned} From there you can just use the original integral. A little easier than my first suggestion :D...but only because we previously calculated that integral.
{"url":"http://mathhelpforum.com/calculus/66370-can-you-separate-integral-product-functions-print.html","timestamp":"2014-04-16T14:27:55Z","content_type":null,"content_length":"48815","record_id":"<urn:uuid:865ee2ca-b5b2-4efb-b8c4-2b4b80df0eed>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00143-ip-10-147-4-33.ec2.internal.warc.gz"}
The Derivative as a Function We know that if `f` is a function, then for an `x`-value `c`: • `f'(c)` is the derivative of `f` at `x = c`. • `f'(c)` is slope of the line tangent to the `f`-graph at `x = c`. • `f'(c)` is the instantaneous rate of change of `f` at `x = c`. In this applet we move from thinking about the derivative of `f` at a point , to thinking about the derivative function
{"url":"http://webspace.ship.edu/msrenault/GeoGebraCalculus/derivative_as_a_function.html","timestamp":"2014-04-18T20:54:28Z","content_type":null,"content_length":"7304","record_id":"<urn:uuid:ac2c3fdf-9f95-48c7-8980-66cb1f34d29c>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
Aronszajn tree From Encyclopedia of Mathematics An Aronszajn tree (specifically, an tree with no uncountable levels and no uncountable branches. The study of such trees is of central concern in infinitary combinatorics and also has applications in general topology (cf. also Combinatorial analysis; Topology, general). The existence of Aronszajn trees is a theorem of ZFC set theory; however, many questions concerning their properties are known to be undecidable in ZFC. For example, an Aronszajn tree with no uncountable anti-chain is called a Suslin tree; the existence of a Suslin tree is equivalent to failure of the Suslin hypothesis and is thus undecidable. The formulation of Suslin's hypothesis in terms of trees facilitated its study using set-theoretic methods, particularly forcing (cf. Forcing method). Another example of an undecidable question is whether every Aronszajn tree, when viewed as a topological space, is normal (cf. Normal space). A special Aronszajn tree is one which admits an order-preserving mapping into Suslin hypothesis) plus the negation of the continuum hypothesis, every Aronszajn tree is special, and consequently Suslin's hypothesis is true. Under the proper forcing axiom, one has the stronger property that any two Aronszajn trees are essentially isomorphic in the sense that there is a closed unbounded set of Weaker notions of "being special" have been studied, for example by restricting the domain of the order-preserving mapping to a subset of the tree, leading to yet more undecidability results via advanced forcing methods. Also notable is the fact that, consistently, Suslin's hypothesis does not imply that every Aronszajn tree is special. By generalizing to larger cardinals (cf. Cardinal number), one can obtain more undecidable statements, and the methods of inner models come into play. In contrast with the case of Aronszajn trees are examples of a larger class of trees called axiom of choice). A Kurepa tree is an A comprehensive reference is [a7]. For various notions of "being special" , see [a1], [a2] and [a6]. Aronszajn trees are treated in the standard texts [a3] and [a4]. For [a5] and [a6]. [a1] J. Baumgartner, "Iterated forcing" A.R.D. Mathias (ed.) , Surveys in Set Theory , Cambridge Univ. Press (1979) [a2] J. Baumgartner, J. Malitz, W. Reinhardt, "Embedding trees in the rationals" Proc. Nat. Acad. Sci. USA , 67 (1970) pp. 1748–1753 [a3] T. Jech, "Set theory" , Acad. Press (1978) [a4] K. Kunen, "Set theory: an introduction to independence proofs" , North-Holland (1980) [a5] W. Mitchell, "Aronszajn trees and the independence of the transfer property" Ann. Math. Logic , 5 (1972) pp. 21–46 [a6] S. Shelah, "Proper forcing" , Springer (1982) [a7] S. Todorcevic, "Trees and linearly ordered sets" K. Kunen (ed.) J.E. Vaughan (ed.) , Handbook of Set Theoretic Topology , North-Holland (1984) How to Cite This Entry: Aronszajn tree. Ch. Schlindwein (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Aronszajn_tree&oldid=13462 This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
{"url":"http://www.encyclopediaofmath.org/index.php/Aronszajn_tree","timestamp":"2014-04-20T05:48:43Z","content_type":null,"content_length":"22792","record_id":"<urn:uuid:0ee5e054-1c85-4a46-a5f5-7d1f82d2663c>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
Tangent Lines Passing Through Origin January 17th 2010, 08:59 AM #1 Sep 2009 Tangent Lines Passing Through Origin At how many points on the curve y=4x^5-3x^4+15x^2+6 will the line tangent to the curve pass through the origin. The only way I know how to go about this problem is to find the derivative and thats it. Please Help! It depends on how many different roots $x_0 \in \mathbb{R}$ the equation $f(x) = 0$ has. Observe that between any 2 different roots $x_1,x_2$ that $f(x)$ attains a maximum/minimum in the interval $(x_1,x_2)$. And there exists a $a_0$ with $x_1<a_0<x_2$ where $f'(a_0)(x-a_0) +f(a_0)$ the tangent at $(a_0,f(a_0))$ passes through the origin. But we might just as well calculate how many maxima/minima f(x) attains: Thus if you find all different "real " roots of $f'(x) = 20x^4-12x^3+30x = x(20x^3-12x^2+30) = 0$ you're done. Edit: You don't even have to find the roots of $f'(x)= 0$ explicitly. You can use the intermediate value theorem to decide how many roots in $\mathbb{R}$ this equation has. Last edited by Dinkydoe; January 17th 2010 at 10:10 AM. If you continue by using $\frac{f(x)}{x}$ then you can find the minimum value(s) of this, that will give you the slope of the tangent(s) that passes through the origin, allowing for x being positive or negative. In blue is f(x) and in pink is $\frac{f(x)}{x}$ the tangent slope gives the min value for $\frac{f(x)}{x}$ since if another x is chosen to the right of the origin, the ratio will be greater. January 17th 2010, 09:24 AM #2 January 17th 2010, 09:28 AM #3 MHF Contributor Dec 2009 January 17th 2010, 09:49 AM #4 MHF Contributor Dec 2009
{"url":"http://mathhelpforum.com/calculus/124139-tangent-lines-passing-through-origin.html","timestamp":"2014-04-16T08:16:21Z","content_type":null,"content_length":"42144","record_id":"<urn:uuid:12c04e79-5187-4b74-a7f1-6e1aa085bc39>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00080-ip-10-147-4-33.ec2.internal.warc.gz"}
#include <math.h> double pow(double x, double y) float powf(float x, float y) The pow() function computes x raised to the power y. If x is negative, y must be an integer value. To check for errors, set errno to 0 before calling pow(). If errno is non-zero on return, or the return value is NaN, an error has occurred. The powf() function is a single-precision version of pow(). The pow() function returns the value of x**y. If x is 0.0 and y is 0.0, 1.0 is returned unless in SVID mode, in which case 0.0 is returned and matherr() is called. If x is 0.0 and y is negative, then: If x is NaN and y is zero, then: If y is NaN or y is non-zero and x is NaN, NaN is returned. If y is 0.0 and x is NaN, then: If x is negative and y is a non-integer, then: If it overflows, then: If it underflows, 0.0 is returned and: pow(): ANSI/ISO 9899-1990 powf(): PTC MKS Toolkit UNIX APIs extension PTC MKS Toolkit for Professional Developers PTC MKS Toolkit for Enterprise Developers PTC MKS Toolkit for Enterprise Developers 64-Bit Edition exp(), log(), log10(), math(), sqrt() PTC MKS Toolkit 9.6 Documentation Build 9.
{"url":"http://www.mkssoftware.com/docs/man3/pow.3.asp","timestamp":"2014-04-17T01:11:43Z","content_type":null,"content_length":"10007","record_id":"<urn:uuid:03573d77-3751-49eb-99e0-0aa99bb55805>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
Kids.Net.Au - Encyclopedia > Fundamental Theorem of Algebra The fundamental theorem of algebra (now considered something of a misnomer by many mathematicians) states that every complex polynomial of degree n has exactly n zeroes, counted with multiplicity. More formally, if (where the coefficients , ..., can be numbers), then there exist (not necessarily distinct) complex numbers , ..., such that This shows that the field of complex numbers, unlike the field of real numbers, is algebraically closed. An easy consequence is that the product of all the roots equals (-1)^n a[0] and the sum of all the roots equals -a[n-1]. The theorem had been conjectured in the 17th century but could not be proved since the complex numbers had not yet been firmly grounded. The first rigorous proof was given by Carl Friedrich Gauss in the early 19th century. (An almost complete proof had been given earlier by d'Alembert.) Gauss produced several different proofs throughout his lifetime. It is possible to prove the theorem by using only algebraic methods, but nowadays the proof based on complex analysis seems most natural. The difficult step in the proof is to show that every non-constant polynomial has at least one zero. This can be done by employing Liouville's theorem which states that a bounded function which is holomorphic in the entire complex plane must be constant. By starting with a polynomial p without any zeros, one can pass to the holomorphic function 1/p and Liouville's theorem then yields that 1/p and therefore also p are constant. All Wikipedia text is available under the terms of the GNU Free Documentation License
{"url":"http://encyclopedia.kids.net.au/page/fu/Fundamental_Theorem_of_Algebra","timestamp":"2014-04-20T23:28:45Z","content_type":null,"content_length":"14615","record_id":"<urn:uuid:d9072f33-f75b-4a9e-ad10-480f61f60fa7>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
Random updates Random updates June 14, 2010 Posted by Akhil Mathew in Uncategorized. It’s been another two months already since anyone last posted here, hasn’t it? So, first of all, Damien Jiang, Anirudha Balasubramanian, and I have each uploaded the papers resulting from our RSI projects to arXiv. I’ve been discussing the story of my project on representation theory and the mathematics around it on my personal blog (see in particular here and here). There are others from the program who have placed their papers on arXiv as well (but are not involved in this blog). I’d like to congratulate my friend and fellow Rickoid Yale Fan for winning the Young Scientist award at the International Science and Engineering Fair for his project on quantum computation (which deservedly earned him the title “rock star”). I also congratulate his classmate and fellow rock star Kevin Ellis (who did not do RSI, but whom I know from STS) for winning the (again fully deserved) award for his work on parallel computation. There is a press release here. RSI 2010 is starting in just a few more days. I’m not going to have any involvement in the program myself (other than potentially proofreading drafts of kids’ papers from several hundred miles away), nor do I know much about what kinds of projects (mathematical or otherwise) will be happening there. I think I’d be interested in being a mentor someday—maybe in six years time. I’m going to be doing a project probably on something geometric this summer, but it remains to be seen on what. I don’t really know what’s going to become of this blog as we all now finish high school and enter college. It looks like most of us will be in Cambridge, MA next year; this is hardly surprising given the RSI program’s location there. Also, just to annoy Yale, I’m going to further spread the word that he is going to Harvard. If anyone from RSI 2010 wants to join/revive this blog, feel free to send an email to deltaepsilons [at] gmail [dot] com. No comments yet — be the first. Recent Comments sethsnap.com on Integrality, invariant theory… http://www.spunka.se… on Representations of the symmetr… Home Page on Integrality, invariant theory… Felix on Riemann integration in abstrac… http://customlogopin… on Bourbaki 2.0: Or, is massively…
{"url":"http://deltaepsilons.wordpress.com/2010/06/14/random-updates/","timestamp":"2014-04-21T02:41:18Z","content_type":null,"content_length":"68721","record_id":"<urn:uuid:4bb87d18-399f-4d7d-a169-fbc831c24191>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
Experiment 7 The Mass of an Electron The objective of this experiment is to measure the mass of an electron by using electric and magnetic fields. Tuning-eye vacuum tube, high-voltage dc source, two 12 V dc power supplies, two multi-meters, a few cylindrical objects of circular cross-section (of small diameters such as pencils or plastic rods), a solenoid (with an inside diameter greater than the outside diameter of the tuning-eye tube), connecting wires, and a calculator It is possible to use an electric field perpendicular to a magnetic field in order to measure the mass of an electron knowing that the electronic charge is e = �1.6x10^-19 C. When a charge q moving at velocity v crosses a magnetic field B perpendicular to its field lines, the magnetic field exerts a force F[m] on the charge perpendicular to the plane of v and B. The magnitude of F[m] is given by F[m ]= | q v B | . The perpendicularity of F[m] and v guarantees a centripetal force that is indeed the magnetic force F[m] itself. The centripetal force F[c] forces the electron to travel in a circular path. Equating F[m] and F[c] , yields: Dividing both sides by v and solving for R, the radius of revolution, results in If an electron of mass M (to be determined), whose charge is the known value �e, moving at speed v, crosses a magnetic field of strength B, it will be given a radius of revolution R that can be calculated from the equation: ( 1 ) The difficult variable to measure in equation (1) is v, the (magnitude of) the velocity of the electron. Velocity v can be determined as described below. P.E. lost = K.E gained. eV = � M v^ 2. From this equation, v^ 2 may be calculated as ( 2 ) Solving (1) for v and then squaring it yields: ( 3 ) Equating (2) and (3), gives: Dividing both sides by (e/M), ( 4 ) This equation will be used to measure M, the mass of an electron. A tuning-eye is an electronic device widely used in older non-transistor radios. It was used as a visual indicator for best tuning on a desired station. When the tiny filament in a tuning eye is given a low voltage (V[1]), it warms up and glows red, as does an electric heater. In this heated state, the filament releases electrons. Another voltage (V) may be used to create an electric field in which the released electrons can be energized, accelerated, and brought into motion toward a positive dish. The negative filament, the positive dish, and the two voltage sources are shown below: The positive dish-like surface is coated with a metal oxide that glows green/blue as electrons hit it. A metal cap is placed over the element which is held by three thin legs. These legs cast a shadow on the dish, making straight dark lines on it when the tuning-eye tube is in use. Increasing V makes the dish glow brighter. An instruction comes with each tuning-eye tube that must be followed for proper use. The following steps should be taken: 1. Connect the filament wires as indicated in the instruction to an appropriate voltage V[1]. 2. Wait a few seconds for the filament to warm up and observe its reddish color. 3. Connect the other wires as indicated in the instruction to the second appropriate source and increase the voltage to the appropriate level V and observe that the dish attains a bluish or greenish color. Pay attention to the shadow cast on the dish by the cap�s legs and note that they are straight lines as viewed from the top of the tuning eye. 4. Connect a pre-selected solenoid (with known number of turns per meter) to an appropriate power supply that can provide a few amperes. An ammeter must be placed in the solenoid circuit for a more-accurate current measurement. Note that the ammeter wire must be put in its 10-Amp setting. 5. With a current of about 1A passing through the solenoid, place the solenoid over the tuning-eye tube so that the filament is positioned at the center of it, and observe how the legs� shadows (straight lines to begin with) bend as the magnet is lowered. The value you later calculate for B, magnetic field strength, actually occurs at the middle of the solenoid; therefore, make sure that the filament is precisely inside and at the center of the solenoid. 6. Adjust the dish voltage V, and the solenoid current I to get a nice curve for the shadow of each leg in the tuning eye. The radius of curvature R must be measured. Any cylindrical non-metallic object that goes into the solenoid may be used. While holding that object inside the solenoid and looking straight down on to it and the top of the tuning eye, try to match the curvature of the shadow with the curvature of the object by adjusting V and I. 7. Try three different round objects and obtain three different sets of V, I, and R. 8. For each set, calculate B = μ[o]n I , where n is the number of turns per meter of the solenoid. 9. Use B, V, and R along with e to find the value of M for each set by using equation (4). 10. Find a mean value for M using the three values calculated. This is the measured value. M[accepted ]= 9.11x10^-31 kg. n = number of turns per meter of the solenoid obtained from the manufacturer�s paper. μ[o] = 4 π x 10^-7 (T m/A) For each set: V, I, and R Follow the steps in Procedure. Comparison of the Results: The accepted and measured values of M may be used to obtain a percent error: Conclusion: To be explained by students Discussion: To be explained by students
{"url":"http://www.pstcc.edu/nbs/WebPhysics/Experim%2007.htm","timestamp":"2014-04-16T10:11:21Z","content_type":null,"content_length":"15962","record_id":"<urn:uuid:68c63823-cff8-4aa2-b9c9-a8c9505863b0>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Matrix Notation and Matrix Multiplication February 8th 2011, 07:05 PM #1 Matrix Notation and Matrix Multiplication If A and B are n by n matrices with all entries equal to 1, find $\ (ab)_{ij}$. Summation notation turns the product AB, and the law (AB)C=A(BC), into $\\ (ab)_{ij}= \displaystyle \sum_{k}\ a_{ik}b_{kj}\ \displaystyle \sum_{j}\left(\sum_{k}\ a_{ik}b_{kj}\right)\ c_{jl}\ =\displaystyle \sum_{k}\ a_{ik}\left(\sum_j\ b_{kj}c_{il}\right)$ Compute both sides if C is also n by n with every $\ c_{jl}=2$ I haven't even done any work on this yet, because first I don't even understand what the question wants, and second I don't even know where to begin. Last edited by Erich; February 8th 2011 at 07:18 PM. Reason: Trying to make my problem clearer Could you please insert some punctuation into your big equation there? It's impossible to parse as is. That's how it's written in my book. I am not really after a solution, more like am more interested in how I could start working this out. What I think that this question is basically asking, is for a proof of the associative property using the above formulas, but I don't know where to start. Last edited by Erich; February 8th 2011 at 09:05 PM. It looks like you're asked to assume that $A_{ij}=B_{ij}=1$ for all $i,j,$ and $C_{ij}=2$ for all $i,j$. Then you're asked to compute $(AB)_{ij},$ and show that $((AB)C)_{ij}=(A(BC))_{ij}.$ For the first, you have that $\displaystyle (AB)_{ij}=\sum_{k=1}^{n}A_{ik}B_{kj}=\sum_{k=1}^{n }(1\cdot 1)=n.$ For the second, plug into your formula: $\displaystyle((AB)C)_{il}=\sum_{j}\left(\sum_{k}\ a_{ik}b_{kj}\right)\ c_{jl}\ \overset{?}{=}\sum_{k}\ a_{ik}\left(\sum_j\ b_{kj}c_{jl}\right)=(A(BC))_{il},$ yielding the test $\displaystyle((AB)C)_{il}=\sum_{j}\left(\sum_{k}1\ right)2\overset{?}{=}\sum_{k}1\left(\sum_j 2\right)=(A(BC))_{il},$ which yields $\displaystyle((AB)C)_{il}=\sum_{j}2n\overset{?}{=} \sum_{k}1\cdot 2n=(A(BC))_{il}.$ Can you finish? If all I needed to do was compute both sides I believe you've already finished. I could probably even build a 2x2 that supports this. Thank you!! You're welcome. Have a good one! Thought I would let you know Ackbeet that I found the solution to be 2n^2, which made so much more sense after looking at your work, and learned something new about summations and matrices. I would like to thank you once again for giving me a new perspective between proofs, associative law, and computations with summations and matrices. You're welcome! February 8th 2011, 07:07 PM #2 February 8th 2011, 07:10 PM #3 February 9th 2011, 06:17 AM #4 February 9th 2011, 06:33 AM #5 February 9th 2011, 06:36 AM #6 February 10th 2011, 07:55 PM #7 February 11th 2011, 03:57 AM #8
{"url":"http://mathhelpforum.com/advanced-algebra/170621-matrix-notation-matrix-multiplication.html","timestamp":"2014-04-18T21:23:19Z","content_type":null,"content_length":"56333","record_id":"<urn:uuid:8d6e7623-d50b-4c2c-b58d-3431ad364e83>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
An Introduction to Latent Variable Models Results 1 - 10 of 105 , 1993 "... Probabilistic inference is an attractive approach to uncertain reasoning and empirical learning in artificial intelligence. Computational difculties arise, however, because probabilistic models with the necessary realism and flexibility lead to complex distributions over high-dimensional spaces. Rel ..." Cited by 567 (20 self) Add to MetaCart Probabilistic inference is an attractive approach to uncertain reasoning and empirical learning in artificial intelligence. Computational difculties arise, however, because probabilistic models with the necessary realism and flexibility lead to complex distributions over high-dimensional spaces. Related problems in other fields have been tackled using Monte Carlo methods based on sampling using Markov chains, providing a rich array of techniques that can be applied to problems in artificial intelligence. The "Metropolis algorithm" has been used to solve difficult problems in statistical physics for over forty years, and, in the last few years, the related method of "Gibbs sampling" has been applied to problems of statistical inference. Concurrently, an alternative method for solving problems in statistical physics by means of dynamical simulation has been developed as well, and has recently been unified with the Metropolis algorithm to produce the "hybrid Monte Carlo" method. In computer science, Markov chain sampling is the basis of the heuristic optimization technique of "simulated annealing", and has recently been used in randomized algorithms for approximate counting of large sets. In this review, I outline the role of probabilistic inference in artificial intelligence, and present the theory of Markov chains, and describe various Markov chain Monte Carlo algorithms, along with a number of supporting techniques. I try to present a comprehensive picture of the range of methods that have been developed, including techniques from the varied literature that have not yet seen wide application in artificial intelligence, but which appear relevant. As illustrative examples, I use the problems of probabilitic inference in expert systems, discovery of latent classes from data, and Bayesian learning for neural networks. , 1999 "... Factor analysis, principal component analysis, mixtures of gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observa ..." Cited by 260 (17 self) Add to MetaCart Factor analysis, principal component analysis, mixtures of gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models. , 1997 "... Factor analysis, a statistical method for modeling the covariance structure of high dimensional data using a small number of latent variables, can be extended by allowing different local factor models in different regions of the input space. This results in a model which concurrently performs cluste ..." Cited by 225 (18 self) Add to MetaCart Factor analysis, a statistical method for modeling the covariance structure of high dimensional data using a small number of latent variables, can be extended by allowing different local factor models in different regions of the input space. This results in a model which concurrently performs clustering and dimensionality reduction, and can be thought of as a reduced dimension mixture of Gaussians. We present an exact Expectation--Maximization algorithm for fitting the parameters of this mixture of factor analyzers. 1 Introduction Clustering and dimensionality reduction have long been considered two of the fundamental problems in unsupervised learning (Duda & Hart, 1973; Chapter 6). In clustering, the goal is to group data points by similarity between their features. Conversely, in dimensionality reduction, the goal is to group (or compress) features that are highly correlated. In this paper we present an EM learning algorithm for a method which combines one of the basic forms of dime... - Neural Computation , 1999 "... We introduce the independent factor analysis (IFA) method for recovering independent hidden sources from their observed mixtures. IFA generalizes and unifies ordinary factor analysis (FA), principal component analysis (PCA), and independent component analysis (ICA), and can handle not only square no ..." Cited by 219 (9 self) Add to MetaCart We introduce the independent factor analysis (IFA) method for recovering independent hidden sources from their observed mixtures. IFA generalizes and unifies ordinary factor analysis (FA), principal component analysis (PCA), and independent component analysis (ICA), and can handle not only square noiseless mixing, but also the general case where the number of mixtures differs from the number of sources and the data are noisy. IFA is a two-step procedure. In the first step, the source densities, mixing matrix and noise covariance are estimated from the observed data by maximum likelihood. For this purpose we present an expectation-maximization (EM) algorithm, which performs unsupervised learning of an associated probabilistic model of the mixing situation. Each source in our model is described by a mixture of Gaussians, thus all the probabilistic calculations can be performed analytically. In the second step, the sources are reconstructed from the observed data by an optimal non-linear ... , 1995 "... Discovering the structure inherent in a set of patterns is a fundamental aim of statistical inference or learning. One fruitful approach is to build a parameterized stochastic generative model, independent draws from which are likely to produce the patterns. For all but the simplest generative model ..." Cited by 194 (22 self) Add to MetaCart Discovering the structure inherent in a set of patterns is a fundamental aim of statistical inference or learning. One fruitful approach is to build a parameterized stochastic generative model, independent draws from which are likely to produce the patterns. For all but the simplest generative models, each pattern can be generated in exponentially many ways. It is thus intractable to adjust the parameters to maximize the probability of the observed patterns. We describe a way of finessing this combinatorial explosion by maximizing an easily computed lower bound on the probability of the observations. Our method can be viewed as a form of hierarchical self-supervised learning that may relate to the function of bottom-up and top-down cortical processing pathways. , 2001 "... In this paper, on the one hand, we aim to give a review on literature dealing with the problem of supervised learning aided by additional unlabeled data. On the other hand, being a part of the author's first year PhD report, the paper serves as a frame to bundle related work by the author as well as ..." Cited by 165 (3 self) Add to MetaCart In this paper, on the one hand, we aim to give a review on literature dealing with the problem of supervised learning aided by additional unlabeled data. On the other hand, being a part of the author's first year PhD report, the paper serves as a frame to bundle related work by the author as well as numerous suggestions for potential future work. Therefore, this work contains more speculative and partly subjective material than the reader might expect from a literature review. We give a rigorous definition of the problem and relate it to supervised and unsupervised learning. The crucial role of prior knowledge is put forward, and we discuss the important notion of input-dependent regularization. We postulate a number of baseline methods, being algorithms or algorithmic schemes which can more or less straightforwardly be applied to the problem, without the need for genuinely new concepts. However, some of them might serve as basis for a genuine method. In the literature revi... , 1996 "... Linear systems have been used extensively in engineering to model and control the behavior of dynamical systems. In this note, we present the Expectation Maximization (EM) algorithm for estimating the parameters of linear systems (Shumway and Sto er, 1982). We also point out the relationship between ..." Cited by 156 (7 self) Add to MetaCart Linear systems have been used extensively in engineering to model and control the behavior of dynamical systems. In this note, we present the Expectation Maximization (EM) algorithm for estimating the parameters of linear systems (Shumway and Sto er, 1982). We also point out the relationship between linear dynamical systems, factor analysis, and hidden Markov models. - IEEE Transactions on Neural Networks , 1997 "... description length, density estimation. ..." - Philosophical Transactions of the Royal Society B , 1997 "... We describe a hierarchical, generative model that can be viewed as a non-linear generalization of factor analysis and can be implemented in a neural network. The model uses bottom-up, top-down and lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has b ..." Cited by 120 (5 self) Add to MetaCart We describe a hierarchical, generative model that can be viewed as a non-linear generalization of factor analysis and can be implemented in a neural network. The model uses bottom-up, top-down and lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has been performed the connection strengths can be updated using a very simple learning rule that only requires locally available information. We demonstrate that the network learns to extract sparse, distributed, hierarchical representations. , 1999 "... We consider a logistic regression model with a Gaussian prior distribution over the parameters. We show that an accurate variational transformation can be used to obtain a closed form approximation to the posterior distribution of the parameters thereby yielding an approximate posterior predictiv ..." Cited by 106 (5 self) Add to MetaCart We consider a logistic regression model with a Gaussian prior distribution over the parameters. We show that an accurate variational transformation can be used to obtain a closed form approximation to the posterior distribution of the parameters thereby yielding an approximate posterior predictive model. This approach is readily extended to binary graphical model with complete observations. For graphical models with incomplete observations we utilize an additional variational transformation and again obtain a closed form approximation to the posterior. Finally, we show that the dual of the regression problem gives a latent variable density model, the variational formulation of which leads to exactly solvable EM updates.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=419391","timestamp":"2014-04-19T03:13:26Z","content_type":null,"content_length":"37497","record_id":"<urn:uuid:bd7a3bdf-9cbe-43c7-afdb-eec3fbfb44c2>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
IntroductionSignal Processing from Accelerometer SensorSR-Based Feature Extraction ApproachGraph EmbeddingSpectral Regression AlgorithmSR-Based Fault Feature ExtractionExperiments and AnalysisData AcquisitionSignal ProcessingFeature ExtractionMethod EvaluationConclusionsReferencesFigures and Tables Bearings are one of the most important components in rotating machinery [1]. Many of the faults of rotating machinery relate to the bearings, whose running conditions directly affect the precision, reliability and life of the machine [2]. Breakdowns caused by bearing performance degradation and inappropriate operation can not only lead to huge economic losses for enterprises, but also potentially serious casualties [3]. In recent years, therefore, bearings fault prognosis technology has received more and more attention, in particularly fault feature extraction (FFE) of bearing accelerometer sensor signals has become more and more important in order to avoid the occurrence of accidents. Bearing accelerometer sensor signal analysis-based techniques, which are the most suitable and effective ones for bearing, have been extensively used since in machine prognosis it is easy to obtain sensor signals containing abundant information. These techniques mainly include three categories, namely, time domain analysis, frequency domain analysis and time-frequency domain analysis. Time domain analysis calculates characteristic features of signals statistics such as root mean squares (RMS), kurtosis value, skewness value, peak-peak value, crest factor, impulse factor, margin factor, etc. Frequency domain analysis search for a train of ringing occurring at any of the characteristic defect frequencies, it is widely applied in fast Fourier transform (FFT), spectrum analysis, envelop analysis, etc. Time-frequency domain analysis investigates given signals in both the time and frequency domain, which is successfully developed for non-stationary signals, and some different technologies such as short-time Fourier transform (STFT), wavelet transform (WT), wavelet packet transform (WPT), Hilbert-Huang transform (HHT), etc. are described in the literature [3–6]. Among them, energy features of reconstructed vibration signals are commonly calculated for the purpose of signal analysis, for example, the wavelet energy can represent the characteristics of vibration signals. Consequently, a lot of original features can be generated from accelerometer sensor signals, therefore it is a necessity to deal with large scale feature dimensions. The biggest challenge is how to extract the most useful information that can reflect comprehensive performance degradation. Previous research has shown that different features are sensitive to different faults and degradation stages, for example, kurtosis value and crest factor are sensitive to impulse faults, especially in the incipient stage, but they will decrease to normal-like levels as the damage grows, which shows that the stability of these features is not satisfactory [7]. Feature extraction means transforming the existing features into a lower dimensional space which is useful for feature reduction to avoid the redundancy due to high-dimensional data [8]. Principal component analysis (PCA) might be one of feature extraction techniques which is often used for bearing fault detection or classification, PCA has the ability to discriminate directions with the largest variance in a data set, and extract several representative features by using data projection. Factor analysis (FA) is a statistical method used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors, FA has been demonstrated to be able to extract important knowledge from sensor data based on the inter-correlation of sensor data [9]. Locality preserving projections (LPP) is a linear projective map that arises by solving a variational problem that optimally preserves the intrinsic geometry structure of the dataset in a low-dimensional space. Liao and Lee in [10] used PCA to find the first two principal components (wavelet packet node energy) which contain more than 90 percent of the variation information. Widodo and Yang in [11] employed PCA to obtain one dimensional features of condition monitor histories from which the survival probability of the historical event data. Côme and Oukhellou in [12] applied the independent factor analysis for intelligent fault diagnosis of railway track circuits, and the diagnosis system aimed to recover the latent variables linked to track circuit defects using features extracted, significantly improving estimation accuracy and removing indeterminacy. Yu in [13] used LPP to extract the most representative features for representing the bearing performance, indicating that LPP could find more meaningful low-dimensional information hidden in the high dimensional feature set compared with PCA. PCA, FA and LPP play a manifest role in feature extraction, however, they have their limitations and don't contain a full exploitation of the multivariate nature of the data [14]. Spectral methods have recently emerged as a powerful tool for dimensionality reduction and manifold learning [15], these methods use information contained in the eigenvectors of a data affinity matrix to reveal low dimensional structure in high dimensional data. Spectral regression (SR) is a novel regression framework for efficient regularized subspace learning and feature extraction technology [16]. Different from other similar methods, SR combines the spectral graph analysis and regression to provide an efficient and effective approach for regularized subspace learning problem. It is shown that SR casts the problem of learning an embedding function into a regression framework, which avoids eigen-decomposition of dense matrices. Due to its superior properties, for example, the lower computation cost and the more structured information, it can be used in all unsupervised, semi-supervised or supervised problems. SR has been adopted for various applications such as location of the sensor nodes [17], human action recognition [18], facial image retrieval [19], EEG signals [20], etc. To the best of our knowledge, no research results have been published to data on the use of SR for bearing fault feature extraction and machine prognosis, therefore, this paper will be the first time SR was applied to feature extraction of bearing faults. The rest of the paper is organized as follows: Section 2 proposes the signal processing (including feature calculation) from accelerometer sensors according to the time domain, frequency domain and time-frequency domain. In Section 3, the graph embedding view and SR-based feature extraction approach are introduced. Section 4 gives a description of the experiments and analysis, bearing accelerometer sensor signals from bearings are employed to evaluate the effectiveness of the proposed method. Finally, concluding remarks and future work on this approach are given in Section 5. To diagnose the abnormality, it is important to record certain physical parameters which vary according to the variation in the operation of the machine [21]. Vibration signals are extensively used in signature matching for abnormality detection and diagnosis. Generally, these signals are generated by accelerometer sensors on bearings [22]. The essential aim of signal processing is to map a signal from the time domain into another space in which some important information of the signals can be revealed, and consequently, some dominant features of the signals can be extracted [23]. For this purpose, various original features that can be extracted from accelerometer sensor signals of bearings have been investigated. This section presents a brief discussion of feature generation from time-domain, frequency-domain, and time-frequency domain as they will be used throughout the paper. Time domain features often involve statistical features that are sensitive to impulse faults [13], especially in the incipient stage, so we calculated some dimensional features, such as RMS, square root of the amplitude (SRA), kurtosis value (KV), skewness value (SV) and peak-peak value (PPV), in addition, some dimensionless features, such as crest factor (CF), impulse factor (IF), margin factor (MF), shape factor (SF) and kurtosis factor (KF). These features are defined as follows: X rms = ( 1 N ∑ i = 1 N x i 2 ) 1 / 2 X sra = ( 1 N ∑ i = 1 N | x i | ) 2 X kv = 1 N ∑ i = 1 N ( x i − x ¯ σ ) 4 X sv = 1 N ∑ i = 1 N ( x i − x ¯ σ ) 3 X ppv = max ( x i ) − min ( x i ) X cf = max ( | x i | ) / ( 1 N ∑ i = 1 N x i 2 ) 1 / 2 X if = max ( | x i | ) / 1 N ∑ i = 1 N | x i | X mf = max ( | x i | ) / ( 1 N ∑ i = 1 N | x i | ) 2 X sf = ( 1 N ∑ i = 1 N x i 2 ) 1 / 2 / 1 N ∑ i = 1 N | x i | X kf = 1 N ∑ i = 1 N ( x i − x ¯ σ ) 4 / ( 1 N ∑ i = 1 N x i 2 ) 2 Frequency domain analysis is another description of a signal, that can reveal some information that cannot be found in the time domain [24]. Frequency domain features are calculated on the basis of FFT from time domain vibration signals, these features often involve statistical results of frequency, such as frequency center (FC), RMS frequency (RMSF) and root variance frequency (RVF), etc. These features are defined as follows: X fc = ∫ 0 + ∞ f s ( f ) d f / ∫ 0 + ∞ s ( f ) d f X rmsf = ( ∫ 0 + ∞ f 2 s ( f ) d f / ∫ 0 + ∞ s ( f ) d f ) 1 / 2 X rvf = ( ∫ 0 + ∞ ( f − X f c ) 2 s ( f ) d f / ∫ 0 + ∞ s ( f ) d f ) 1 / 2 Time-frequency domain methods are considered to be best way for analyzing non-stationary signals [25], due to the deficiency of the Fourier transform. Many time-frequency analysis technologies have been developed, including STFT, WT (or WPT), HHT, etc. In this study, we adopt WPT to present bearing accelerometer sensor signals in time-frequency distribution diagrams with multi-resolution. As we know, wavelet packet analysis (WPA) is an extension of WT which provides complete level-by-level decomposition. As shown in Figure 1, wavelet packets are particular linear combinations wavelets. The wavelet packets inherit properties such as orthogonality, smoothness and time-frequency localization from their corresponding wavelet functions. Let Ψ be a wavelet packet function with three integer indices i, j and k which are the modulation or oscillation parameter, the scale parameter, and the translation parameter, respectively: ψ = ψ j , k i ( t ) = 2 j / 2 ψ i ( 2 j t − k ) The wavelet packet coefficients of a signal s can be computed by taking the inner product of the signal and the wavelet packet function: c j , k i = 〈 s , ψ j , k i ( t ) 〉 = ∫ − ∞ + ∞ s ( t ) ψ j , k i ( t ) d t The wavelet packet node energy WPNE(j,k) can represent the characteristics of vibration signals, and it is defined as: WPNE ( j , k ) = ∑ k ( c j , k i ) 2 In this application, we use a specific wavelet function “DB4” from the Daubechies (DB) wavelets family as the mother wavelet and decompose the vibration signals into four levels. In general, the biggest challenge in wavelet analysis is the selection and determination of the mother wavelet function as well as the decomposition level of signals for the different real-world applications [21]. Different mother wavelet functions and corresponding orders have different effects on the feature extraction. Rafiee etc. in [26] presented a novel solution to find the best mother wavelet function for fault classification purposes as well as the best level of decomposing the vibration signals by wavelet analysis in machine condition monitoring; the experimental results demonstrated that a DB4 orthogonal wavelet discloses abnormal transients generated by the bearing damage from the vibration signals more effectively than other wavelets in the range of DB2 and DB20, and the optimized value of the decomposition level is 4. In addition, a large number of previous studies have demonstrated that DB4 has been widely implemented as it matches the transient components in vibration signals and showed effectiveness in defect detection and fault diagnosis of bearings, because it has the advantages of orthogonality and computational simplicity [27]. Subsequently, we calculate wavelet packet node energy in fourth level as the input features of bearing time-frequency domain: X wpne ( j , k ) = WPNE ( j , k ) In this section, after the graph embedding and SR method are presented, SR-based fault feature extraction approach is proposed to extract useful information from the calculated original features of vibration signals. The SR is fundamentally based on regression theory and spectral graph analysis, so it can be incorporated into other algorithms easily [28]. It can be used in all unsupervised, semi-supervised or supervised problems and integrated with different other suggested regularizers to make it more flexible [29]. In concrete applications, an affinity graph will be constructed first via the labeled and unlabeled samples, in order to reveal the intrinsic structured information and to learn the responses with the given data. Subsequently, with these obtained responses, the ordinary regression is applied for learning the embedding function. The SR aims at finding a low-dimensional subspace Z = [z[1], z[2], …, z[m]] (z[i]∈R^d), when given high-dimensional input data X = [x[1], x[2], …, x[m]] (x[i]∈R^n, d ≪ n), where m is the sample number, say x[i] can be represented with z[i]. Let x = [x[1], x[2], …, x[m]]^T be high-dimensional space and y = [y[1], y[2], …, y[m]]^T be the mapped low-dimensional space, a reasonable criterion for choosing a map is to minimize: ∑ i , j ‖ y i − y j ‖ 2 W i jwhere the matrix W[ij] with m × m entries contains the weight of the edge, these edges join points x[i] and x[j] in a nearest-neighbor graph G with m points. The objective function will be heavily penalized if neighboring points x[i] and x[j] are mapped far apart. Therefore, the purpose of minimizing is to ensure that if x[i] and x [j] are “close” then y[i] and y[j] are close as well. Following some algebraic steps, we have: 1 2 ∑ i , j ‖ y i − y j ‖ 2 W i j = y T ( D − W ) y = y T L ywhere D is a diagonal matrix, which contains column sums of W, D[ii] = Σ[j]W[ji], and L = D − W is the graph Laplacian matrix. And then, the minimization problem in Equation (18) reduces to finding: y ∗ = arg min y T D y = 1 y T L y = argmin y T L y y T D y In order to remove the arbitrary scaling factor in the embedding, a constraint y^TDy = 1 will be imposed. Obviously, it is because of L = D − W, Equation (20) is also equivalent to the maximization problem: y ∗ = arg max y T D y = 1 y T W y = argmax y T W y y T D y The optimal y's in Equation (21) can be obtained by solving the generalized eigenvalue problem: W y = λ D y For simply mapping for training samples and new testing samples, we choose a linear function here: y i = f ( x i ) = A T x i , A T = ( a 1 , ⋯where A is a n×d matrix, x[i] is mapped to y[i]. Substituting Equation (23) into Equation (21), we have: A ∗ = argmax y T W y y T D y = argmax A T X W X T A A T X D X T A The optimal A's in Equation (24) can be also obtained by solving the generalized eigenvalue problem: XWX T A = λ XDX T A This maximum eigen-problem formulation in some cases can provide a more numerically stable solution. In the remainder of this paper, we will develop the SR algorithm based on Equation (25). The SR has been used in various applications where it has demonstrated efficacy compared to PCA, FA, and some common manifold techniques in both feature quality and calculation efficiency [15]. Meanwhile, the SR algorithm uses the least squares method to get the best projection direction, rather than computing the density matrix of features, so it also has an advantage in speed. An affinity graph G of both labeled and unlabeled points is constructed to find the intrinsic discriminant structure and to learn the responses with the given data. Then, with these responses, the ordinary regression is applied for learning the embedding function [30]. Given a training set with l labeled samples x[1], x[2], …, x[l] and a testing set with (m − l) unlabeled samples x[l][+1], x[l][+2], …, x[m], where the sample x[i]∈R^n belongs to one of c classes, and let l[k] be the number of labeled samples in the k-th class (the sum of l[k] is equal to l). The SR is summarized as follows: Step1: Constructing the adjacency graph G: Let X be the training set and G denote a graph with m nodes, where the i-th node corresponds to the sample x[i]. In order to model the local structure as well as the label information, then the graph G will be constructed through the following three steps: If x[i] is among p nearest neighbors of x[j] or x[j] is among p nearest neighbors of x[i], then nodes i and j are connected by an edge; If x[i] and x[j] are in the same class (i.e., same label), then nodes i and j are also connected by an edge; Otherwise, if x[i] and x[j] are not in the same class, then the edge will be deleted between nodes i and j. Step2: Constructing the weight matrix W: Let W be the sparse symmetric m×m matrix, where W[ij] having the weight of the edge joining vertices i and j. If there is no any edge between nodes i and j, then W[ij] = 0; Otherwise, if both x[i] and x[j] belong to the k-th class, then W[ij] = 1/l[k], else W[ij] =[δ]. s(i, j), where [δ (]0 < [δ] ≤ 1) is a given parameter to adjust the weight between supervised and unsupervised neighbor information. Therein, s(i, j) is a similarity evaluation function between x[i] and x[j], there are three variations, the first one is Simple-minded function s(i, j) = 1, the second one is Heat kernel function: s ( i , j ) = exp ( − ‖ x i − x j ‖ 2 / 2 σ 2 )where σ∈R, the third one is Cosine weight: s ( i , j ) = x i T x j / ‖ x i ‖ ‖ x i ‖ Step3: Eigen-decomposing: Let D be the m × m diagonal matrix, whose the (i, i)-th element is the sum of the i-th column (or row) of W. Find y[0], y[1], …, y[c][−1], which are the largest c generalized eigenvectors of eigen-problem: W y = λ D ywhere the first eigenvector y[0] is a vector of all ones with eigenvalue 1. Step4: Regularized least squares: Calculate c-1vectors a[1], …, a[c][−1] with a[k]∈R^n (k = 1, …, c−1), therein a[k] is then a solution of regularized least square problem: a k = arg min a ( ∑ i = 1 m ( a T x i − y i k ) 2 + α ‖ a ‖ 2 )where y i k is the i-th element of y[k]. In order to obtain a[k], the following linear equations system can be used to solve through the classic Gaussian elimination method. ( X X T + α I ) a k = X y kwhere I is a n × n identity matrix. Step5: SR Embedding: Let A be an n × (c − 1) obtained transformation matrix through the previously mentioned processes, where A = [a[1], …, a[c][−1]]. The testing samples or new sample can be embedded into c − 1 dimensional subspace by: x → z = A T x Feature extraction, which is a mapping process from the measured signal space to the feature space, can be regarded as the most important step for intelligent fault diagnosis systems [14]. The effective feature extraction is important for the pattern recognition of bearing faults [31]. In this work, we propose an SR-based fault feature extraction scheme for bearing accelerometer sensor signals, The flow chart of the proposed scheme is shown in Figure 2, which includes three parts: i.e., signal processing (or named as feature calculation), feature extraction and method evaluation. Firstly, we calculate 10 features of the time domain directly from bearing vibration signals and three features of the frequency domain based on FFT. Subsequently, we decompose vibration signals into four scales using WPT with ‘DB4’, and then calculate wavelet packet nodes energy in the fourth level as 16 features of the time-frequency domain. So far, we have obtained 29 initial features from vibration signals (see Table 1), which have been enough to represent the bearing performance states and fault severity. As we know, because it is difficult to estimate which features are more sensitive to defect development and propagation in a machine system, as various factors affect the effectiveness of the features. In this case, we believe that it is more helpful to generate more and more various features. Secondly, we extract the most representative features from 29 initial features via the SR-based method. Obviously, very large initial features' dimension will result in decreasing performance of bearing prognosis and therefore also increasing computational costs. How to extract the really effective information of bearing fault is a challenging problem. In this paper, if we choose the first d eigenvectors from A = [a[1], …, a[c][−1]] in Equation (31), where d ≫ c − 1, then the new projection z is: z = A T x , A = ( a 1 , … , a d ) . Based on the new projection data set z using SR-based method, the high-dimensional data space is reduced to a low-dimensional data space, however, retaining the majority of local variation information in the projected data set. With the reduced dimensions and local variance information preservation, the extracted features z will be used as the new input features of pattern recognizers for bearing faults. Finally, we validate the SR-based method using K-means in the case of original features and extracted features. In this paper, we compare the SR-based method with PCA-based, FA-based ones, etc., and the experiment results show that SR-based method is the best for extracting the useful information to represent bearing performance conditions from the available original features as you can calculate from the vibration signals. Moreover, the validated result confirms that the features extracted by SR ensure effective fault recognition at higher accuracy than the 29 original features. Data acquisition is a process of collecting and storing useful data from targeted physical assets for the purpose of Condition-based Maintenance (CBM). This process is an essential step in implementing a CBM program for machinery fault diagnosis and prognosis. To evaluate the effectiveness of the signal processing and feature extraction methods for bearings, the vibration data related to the bearing and the system investigation in this paper were provided by the Bearing Data Center of the Case Western Reserve University (CWRU), and acquired by bearing accelerometer sensors under different operating loads and bearing conditions [32]. The bearing data of CWRU has been validated in many research works and become a standard dataset for bearing studies [2,13,14,21]. The test-rig shown in Figure 3 consists of a 2 HP motor (left), a torque transducer/encoder (center), a dynamometer (right), and control electronics (not shown). The test bearing type is a 6205-2RS JEM SKF, which is a deep groove ball bearing, the dynamometer is controlled so that desired torque load levels can be achieved. Accelerometer sensors were placed at the 12 o'clock position at the drive end of the motor housing. The experimental rotating frequency is about 30 Hz, the test bearings support the motor shaft and the load was 2 HP at the speed of 1,797 rpm, single point faults were introduced to the inner race, ball and outer race of the test bearings using electro-discharge machining (EDM) with fault diameters of 0.007, 0.014, 0.021 and 0.028 inches, and the fault depth is 0.011 inches. More detailed information about the test-rig can be found in [32]. The vibration signals were collected through accelerometers using a 16 channel digital audio tape (DAT) recorder at the sampling frequency 12 kHz. In order to evaluate the performance of the SR-based feature extraction approach proposed in this paper, we separate the experimental vibration data into four datasets, named as D_IRF, D_ORF, D_BF and D_MIX. Specifically, similar to the ORF and BF datasets, the IRF dataset includes five severity conditions, i.e., normal, and four types of fault bearings with faulty diameter: 0.007 (IRF07), 0.014 (IRF14), 0.021 (IRF21) and 0.028 (IRF28) inches in the inner race of the bearings, respectively. The D_MIX dataset, however, contains four different states which are normal, and three types of faults, i.e., inner race fault (IRF), ball fault (BF) and outer race fault (ORF) all with a fault diameter of 0.014 inches. The length of the signal data in every dataset is 1,024, that is, every example data includes 1,024 points. We extracted 100 examples for each severity condition, and thus the D_MIX and D_ORF dataset consists of 400 examples, simultaneously, the D_IRF and D_BF datasets contain 500 examples, respectively. The detailed description with respect to the experimental datasets is presented in Table 2, where “07”, “14”, “21” and “28” mean that fault diameter is 0.007, 0.014, 0.021 and 0.028 inches. For verifying the proposed scheme in this study, the overall datasets are split into two portions, i.e., training datasets (50%) and test datasets (50%). Figure 4 presents the vibration signal waveforms from four signal samples of the different fault types in the D_MIX dataset, note that there is a manifest difference in the overall vibration magnitude for the new health bearing when compared with other three types of fault bearings. Nevertheless, we still need to process the signal (calculate signal features) due to very high dimensions of the original vibration signals. For the obtained vibration signal data, we calculate original features following the time domain, frequency domain and time-frequency domain for the next feature extraction. Time domain features could be calculated directly from vibration signals using Equations (1)–(10). For validating the employed time domain features in this work, Table 3 lists the average value of the statistical time domain features in the D_MIX dataset. It can be seen from Figure 5, there are some differences in the various fault types of bearings in the D_MIX dataset, but some existed differences is still not easy to be distinguished, especially in the ball fault bearings. As mentioned earlier, furthermore, some statistical features of time domain are sensitive to inchoate faults, for instance, RMS and kurtosis values should be able to capture the mutual difference in the time domain signal for the fault and healthy bearings. Figure 6 shows this character of four statistical features of time domain in the D_IRF dataset, we note that the feature of RMS can recognize differences in four bearing conditions, however, kurtosis values, peak-peak value and impulse factor only can capture better infancy fault, where it shows poor ability to identify much more severe faults. The advantage of frequency domain analysis over time domain analysis is its ability to easily identify and isolate certain frequency components of interest. The most widely used conventional analysis is the spectrum analysis by means of fast Fourier transform (FFT), which is a well-established method because of its simplicity. Figure 7 shows the spectrum based on FFT for a normal sample and three different fault samples in the D_MIX dataset, and Figure 8 displays the corresponding spectrum for a normal bearing and three outer race fault bearings with faulty diameter: 0.007 (ORF07), 0.014 (ORF14) and 0.021 (ORF21) inches in the D_ORF dataset, respectively. The Fourier spectrum analysis provides a general method for examining the global energy-frequency distribution. The main idea of spectrum analysis is to either look at the whole spectrum or look closely at certain frequency components of interest and thus extract features from the obtained vibration signal data. On this basis, we calculate frequency domain features, such as frequency center, RMS frequency and root variance frequency using Equations (11)–(13). However, the features from the FFT analysis results tend to average out transient vibrations and don't provide a wholesome measure of bearing health states. Therefore, one manifest limitation of frequency domain analysis is its inability to handle non-stationary waveform signals, which are very common when machinery faults occur Time-frequency analysis, which investigates waveform signals in both time and frequency domain, has been developed for non-stationary waveform signals. Traditional time-frequency analysis uses time-frequency distributions, which represent the energy or power of waveform signals in two-dimensional functions of both time and frequency to better reveal fault patterns for more accurate diagnosis. In this study, we decompose vibration signals obtained from the test-rig into four scales using WPT with mother wavelet ‘DB4’, Figure 9 displays the original and decomposed signals from a normal bearing sample and a ball fault bearing sample in the D_BF dataset, therein, we list only eight decomposed signals for the purpose of simplifying indication. From Figure 9, we note that there is a relatively large difference between the normal bearing and the ball fault bearing, especially at the high frequency of the decomposed signals. For the purpose of comparison, we calculate the average value of the wavelet packet nodes energy from decomposed signals of the normal bearings and the ball fault bearings with faulty diameter: 0.007 (BF07), 0.014 (BF14), 0.021 (BF21) and 0.028 (BF28) inches in the D_BF dataset using Equation (16), respectively. The normalized wavelet packet energy was analyzed from the corresponding sixteen decomposed signal nodes, the results are shown in Figure 10, its distribution of energy are different mutually. In the technique presented in this paper, the total 29 features were calculated from 10 time domain features, three frequency domain features and 16 time-frequency domain features. In general, it is difficult to estimate which features are more sensitive to fault development and propagation in a machine system, furthermore, the effectiveness of these original features could change under different working conditions. In addition, this amount of original features is too many, thus it could be a burden and decrease the performance of the classifier or recognizer. Therefore, feature extraction and dimension reduction using some related technique are proposed in this study, so that more salient and low dimensional features are extracted for performing bearings fault diagnosis or At first, we take two experiments, each select randomly three features from the total 29 features in the D_MIX dataset, which are illustrated in Figure 11(a,b), respectively. Similarly, we also select randomly three features in the D_IRF dataset, the first and the second selected features are presented in Figure 12(a,b), respectively. It is shown that these features cannot separate well among the conditions of bearing fault because of high-dimensional data tends to redundancy, therefore, we cannot input them into the classifier directly. In order to validate the performance of SR-based method for feature extraction, SR is originally implemented in the D_MIX dataset, the first d eigenvectors corresponding to the large d eigenvalues are selected to implement data projection using Equation (32). Figure 13(a) shows the data projection result with the first two eigenvectors corresponding to the large two eigenvalues, where the first two projected column vectors are plotted. For the purpose of the comparison, the projected results using PCA, FA and LPP are also illustrated in Figure 13(b–d), respectively. In addition, Figure 14 presents corresponding comparison of the data projection result with the first three eigenvectors. We generally keep the first several eigenvectors corresponding to the large eigenvalues which can keep most variance information of the given data. However, high input data dimensions could decrease the recognition performance of the classifiers and result in more training time cost. Thus, the selection of the number of the eigenvectors should be based on the requirement of the real-world applications [1]. In this study, we select the first three eigenvectors of data projection result for inputting the classifiers, and we also display the first two eigenvectors of data projection result for visualization well. As shown in Figures 13 and 14, it is obvious that the data projection result with the first two or three eigenvectors using SR outperforms other methods in the D_MIX Similarly, we also perform these four feature extraction algorithms in the D_IRF dataset, the extracted first two and three features are compared in Figures 15 and 16, respectively. Severity recognition references to the identification of the differentiation of defective states of the bearings, e.g., normal, IRF07, IRF14, IRF21, IRF28 in the D_IRF dataset. From the corresponding compared results, we can observe that SR has better projection performance over other three methods, as it can obtain a more clear separation of the clustering on the map for the corresponding severity recognition. This is due to the fact that SR is capable of discovering local structured information of the data manifold. However, PCA aims to discover the global structure of the Euclidean space. For the D_IRF dataset, each of fault severity classes is a local structure, SR preserves the intrinsic geometry structure of the dataset in a low-dimensional space. This illustrates that the local information could be more meaningful than the global information from given dataset in some industrial situations. In addition, LPP shows better performance than PCA and FA, since LPP is also graph embedding method based on the local structure of the manifold. This result indicates that features extracted via spectral graph embedding analysis could be more effective than which extracted via global structure by PCA and FA, which illustrates that SR-based feature extraction is very effective to extract most sensitive features for fault classification and severity recognition tasks. As we know, the clearer the separation, the more robust a classifier is. Consequently, the extracted features by SR are able to improve the performance of the classifiers more effectively, which further proves that SR is capable of extracting the most effective features from original features without too much calculation cost. In this study, K-Means is adopted to evaluate the performance of SR, PCA, FA and LPP. The first three extracted features corresponding to the largest eigenvalues are employed as the input features of K-Means. K-Means was implemented to recognize the clusters of the different bearing fault types, the acquired training dataset and testing dataset are used for modeling K-Means and checking misclassification, respectively. For given dataset, the accuracy rates are presented in Table 4, the classification results based on the original 29 features (OF29) and the first three features extracted by SR, PCA, FA and LPP are also presented in Table 4 and Figure 17. It can be observed that PCA and FA don't improve the recognition performance of K-Means in comparison with using OF29, both LPP and SR improve the accuracy rate, respectively. The results of this experiment are consistent with the actual situation of the CWRU dataset, since the data quality of the artificially introduced faults on bearings is very good, thus the features in different fault conditions are pretty separable. The K-Means recognized relatively accurately all of the different severity classes through the use of the methods based on PCA, LPP and SR. In addition, it is shown that the SR all gives more satisfied results as compared to others in four datasets, this further demonstrates the effectiveness of SR for feature extraction or dimensionality reduction of the given input space, and also be confirmed to improve the performance of the classifier obviously. Therefore, we can safely make use of SR in order to extract the most effective features among the practical applications. In order to further evaluate the proposed SR-based method, we adopt other experimental data, in particular, the bearing fault data acquired from an accelerated bearing life tester (ABLT-1) at the Hangzhou Bearing Test and Research Center in China (detailed information is described in [1]). The differentiation of fault states of bearings include three classes: normal, slightly degradation, and severe degradation (failure). The fault conditions can be estimated by the magnitude of the representative features, which are produced by the effective feature extraction methods. For this case, we collect the data from the whole life of the bearing to implement fault classification, and randomly select 100 samples from each fault states, and thus 300 samples are collected for the test bearing, 50% of samples are used as the training set to construct K-means model, while the remaining 50% of samples are used as the testing set to test the classification accuracy rate of K-means using the first three extracted features corresponding to the largest eigenvalues. In this case, we not only compare with PCA, FA and LPP, but also compare with some other Graph Embedding based approaches, such as Laplacian Eigenmap (LE) and Linear Discriminant Analysis (LDA). The experimental results of K-means is shown in Table 5, the accuracy rate of classification by K-Means using the features extracted by SR is significantly better than that of K-Means using the features extracted by other methods, SR shows a similar performance with the supervised-based LDA. In addition, we can observe that there are some differences in the computational time of feature extraction consumed among the several methods. In Table 5, the computational time of LDA method is the highest, although it seeks the projective functions which are very perfect in the training set and testing set, so it is computationally expensive. The PCA method fails to show more improvement in the computational time; this is probably due to the fact that PCA does not encode discriminating information. The SR method achieves significantly better performance than other methods, which suggests that SR only needs to solve c-1 regularized least squares problems which are very efficient, This nice property makes it possible to apply SR to high dimensional large datasets in real-world applications. We also note that the classification accuracy rate of LPP-based and LE-based methods are also relatively higher, this is mostly due to the fact that the structured information in the experimental data is very important for feature extraction. Fortunately, SR is similar to them, and it is capable to discover local structured information in the data manifold. Specifically, this important property may enable SR to find more meaningful low-dimensional information hidden in the high-dimensional features compared with PCA and FA methods. Overall, this case also further demonstrates that the SR-based feature extraction method is very effective to improve the performance of It is noted that we tested the performance of the SR processing using the whole training and testing data for feature extraction in this experiment, which is not related to new test samples. In fact, handling data out samples (i.e., new inputs) problems presents a big challenge in the area of feature extraction. Due to space limitation, this problem is not discussed detail in the paper. In the real-world application, we should firstly transfer the training data into the project space under the weight matrix W, then using the same weight matrix W to treat the new testing data.
{"url":"http://www.mdpi.com/1424-8220/12/10/13694/xml","timestamp":"2014-04-18T18:32:54Z","content_type":null,"content_length":"120809","record_id":"<urn:uuid:4dd9ea96-a31a-4258-837a-88b0a65cdb48>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
Direct measurement of the spin gaps in a gated GaAs two-dimensional electron gas We have performed magnetotransport measurements on gated GaAs two-dimensional electron gases in which electrons are confined in a layer of the nanoscale. From the slopes of a pair of spin-split Landau levels (LLs) in the energy-magnetic field plane, we can perform direct measurements of the spin gap for different LLs. The measured g-factor g is greatly enhanced over its bulk value in GaAs (0.44) due to electron–electron (e-e) interactions. Our results suggest that both the spin gap and g determined from conventional activation energy studies can be very different from those obtained by direct measurements. Spin; g-factor; Disorder With the growing interest in spin-based quantum computation and spintronic applications [1], there is an increasing need to understand and accurately determine critical parameters of the electron spin degree of freedom. It is well established that when measuring an electron spin in an external magnetic field B, it can either align parallel to or antiparallel to B. The energy difference between these two discrete states, also known as the spin gap or Zeeman splitting, is given by gμ[B]B where g is the Lande g-factor and μ[B] is the Bohr magneton. It is worth mentioning that successful application of the wide range of possible spin-dependent phenomena requires effective techniques for the electrical injection of spin-polarized currents [2]. It has been demonstrated that a net spin current can be produced when where kT and Γ are the thermal and level broadening, respectively [3]. For practical applications, it is highly desirable that the generation of the spin currents can be accomplished without requiring the use of extremely high B. Therefore, an accurate measurement of the spin gap and g-factor would allow one to ensure that only a moderate B is required so that Equation1 holds. Moreover, the precise measurement of the g-factor [4] would shed light on the predicted divergence of spin susceptibility χ ∝ g m* and ferromagnetic ground state [5], where the system exhibits the unexpected metal-insulator transition [6]. Here m* represents the effective mass of electron (or hole). Given that the spin gap is the most important energy scale in any spin system and the g-factor is the central quantity characterizing the response of an electron or hole spin to an applied B, there have been many attempts to measure the spin gap in the literature. A standard method of obtaining the spin gap is to perform activation energy measurements at the minimum of the longitudinal resistivity , where Δ[s] is the spin gap [7]. However, such a measurement is rather restrictive as ρ[xx] must be very low and has to vary over at least an order of magnitude as a function of T. Moreover, Δ[s] has to be much greater than the thermal energy kT over the whole measurement range. Most importantly, activation energy measurements yield the ‘mobility gap’, the width of the localized states in the energy spectrum. This may be quite different from the real spin gap which corresponds to the energy difference between the two maxima densities of neighboring extended states [4,8]. In this paper, we report a method to directly measure the spin gaps in two-dimensional electron gases (2DEGs), in which the electrons are usually confined in layers of the nanoscale. We can change the applied gate voltage V[g] to vary the electron density n[2D] and hence the local Fermi energy E in our system. By studying the peak positions of ρ[xx] at various n[2D] and B, we can construct the Landau levels in the E-B diagram. As shown later, from the difference between the slopes of a pair of spin-split Landau levels in the E-B plane, we are able to measure the g-factors for different Landau level indices n in the zero disorder limit. We find that the measured g-factors (approximately 10) are greatly enhanced over their bulk value (0.44). Most importantly, our results provide direct experimental evidence that both the spin gap and g-factor determined from the direct measurements are very different from those obtained by the conventional activation energy studies. A possible reason is that our method is conducted in the zero disorder limit, whereas activation studies are performed under the influence of the disorder within the quantum Hall system. In the integer quantum Hall effect (IQHE), when the spin of the 2DEG is taken into consideration, in the zero disorder limit each Landau level splits into two with the corresponding energy given by where ω[C] is the cyclotron frequency, and n = 0, 1, 2, 3…, respectively. According to early experimental work [9], it was established that in 2D systems in a magnetic field the g-factor is greatly enhanced over its bulk value due to exchange interactions [10,11]. The precise measurement of the g-factor in 2D systems is a highly topical issue [4] since it has been predicted to be enhanced in strongly interacting 2D systems that exhibit the unexpected zero-field metal-insulator transition [6]. Experimental details Magnetoresistance measurements were performed on three gated Hall bars (samples A, B and C) made from modulation-doped GaAs/Al[0.33]Ga[0.67]As heterostructures. For sample A, the structure consists of a semi-insulating (SI) GaAs (001) substrate, followed by an undoped 20-nm GaAs quantum well, an 80-nm undoped Al[0.33]Ga[0.67]As spacer, a 210-nm Si-doped Al[0.33]Ga[0.67]As, and finally a 10-nm GaAs cap layer. For sample B, the structure consists of an SI GaAs (001) substrate, followed by an undoped 20-nm GaAs quantum well, a 77-nm undoped Al[0.33]Ga[0.67]As spacer, a 210-nm Si-doped Al [0.33]Ga[0.67]As, and finally a 10-nm GaAs cap layer. Sample C is a modulation-doped GaAs/AlGaAs heterostructure in which self-assembled InAs quantum dots are inserted into the center of the GaAs well [12]. The following sequence was grown on an SI GaAs (001) substrate: 40-nm undoped Al[0.33]Ga[0.67]As layer, 20-nm GaAs quantum well inserted with 2.15 monolayer of InAs quantum dots in the center, a 40-nm undoped Al[0.33]Ga[0.67]As spacer, a 20-nm Si-doped Al[0.33]Ga[0.67]As, and finally a 10-nm GaAs cap layer. Because of the lack of inversion symmetry and the presence of interface electric fields, zero-field spin splitting may be present in GaAs/AlGaAs heterostructures. However, it is expected that the energy splitting will be too small (0.01 K) to be important in our devices [13]. For sample A, at V[g] = 0 the carrier concentration of the 2DEG was 1.14 × 10^11 cm^-2 with a mobility of 1.5 × 10^6 cm^2/Vs in the dark. For sample B, at V[g] = 0 the carrier concentration of the 2DEG was 9.1 × 10^10 cm^-2 with a mobility of 2.0 × 10^6 cm^2/Vs in the dark. The self-assembled InAs dots act as scattering centers in the GaAs 2DEG [12,14]; thus, the 2DEG has a mobility much lower than those for samples A and B. For sample C, at V[g] = 0 the carrier concentration of the 2DEG was 1.48 × 10^11 cm^-2 with a mobility of 1.86 × 10^4 cm^2/Vs in the dark. Experiments were performed in a He3 cryostat and the four-terminal magnetoresistance was measured with standard phase-sensitive lock-in techniques. Results and discussion Figure1 shows the four-terminal magnetoresistance measurements R[xx] as a function of B at V[g] = -0.08 V for sample A. When the Fermi level is centered at a Landau level, there exists a peak in R[ xx] as shown in Figure1. By studying the evolution of the peaks in R[xx] at different gate voltages (and hence n[2D]), we are able to locate the position of the Landau levels in the n[2D]-B plane. Figure2a,b shows such results obtained from sample A and sample B, respectively. It is known that in the low disorder or high B limit, the filling factor of a resistivity (or conductivity) peak is given exactly by the average value of the filling factors of the two adjacent quantum Hall states [15]. This is equivalent to the situation when the Fermi energy coincides with a Landau level. It is worth pointing out that the peak position of magnetoresistance oscillations can be given by , where ν is the Landau level filling factor. At first glance, the peak position does not depend on either the g-factor or the effective mass of the 2D system. However, as shown later, in our case the energy of the Landau levels can be considered directly proportional to the density via the free electron expression [16], where m^* = 0.067 m[e] in GaAs and m[e] being the rest mass of a free electron. Then the effective mass should be considered when constructing the energy-magnetic field diagram. Here the oscillation of the Fermi energy is not considered. It may be possible that the effective mass of the 2DEGs will increase due to strong correlation effect [17]. In order to measure the effective mass of our 2DEG, we plot the logarithm of the resistivity oscillating amplitudes divided by temperature ln (Δρ[xx] / T) as a function of temperature at different magnetic fields in Figure 3. Following the procedure described by the work of Braña and co-workers [18], as shown in the inset to Figure3, the measured effective mass is very close to the expected value 0.067 m[e]. Therefore it is valid to use m^* = 0.067 m[e] in our case. We can see that the Landau levels show a linear dependence in B as expected. At low B and hence low n[2D], the slight deviation from the straight line fits can be ascribed to experimental uncertainties in measuring the positions of the spin-up and spin-down resistivity peaks. Figure 1. Magnetoresistance measurements R[xx ]( B ) at V[g ]= -0.08 V for sample A at T = 0.3 K. The maxima in R[xx ]occur when the Fermi energy lies in the nth spin-split Landau levels as indicated by n = 3↓ and n = 3↑, n = 2↓ and n = 2↑, and n = 1↓ and n = 1↑, respectively. Figure 2. The Local Fermi energy E and the corresponding 2D carrier density n[2D ]for different Landau levels. (a) Sample A and (b) sample B at T = 0.3 K. Circle, 3↓ and 1↓; square, 3↑ and 1↑; star, 2↓; triangle, 2↑. Figure 3. Logarithm of the amplitudes of the oscillations. The logarithm of the amplitudes of the oscillations divided by T ln(Δρ[xx ]/ T) as a function of temperature at different magnetic field for sample C at V[g ]= 0. The curves correspond to fits described by [18]. The inset shows the measured effective mass at different magnetic fields. In our system as the spin-split resistivity peaks are not observed at the same magnetic field, we need to describe the method of measuring the g-factors as follows. Equation2 can be rewritten as where we consider the effective Lande g-factor g^*. We can see that Equation3 corresponds to two straight line fits through the origin for a pair of spin-split Landau levels in the E-B plane as shown in Figure2a,b. Such an approach was applied to a GaN-based 2DEG in our previous work [19]. We note that our method does depend on the exact functional form of the Landau band since the peak positions of the Landau level is only related to the carrier density in our system. Let us now consider the region ν = 3 between the two linear fits corresponding to two spin-split Landau levels n = 1↓ and n = 1↑. According to Equation3, the difference between the slopes of the spin-split Landau levels is given by g^*Φ06Δ[B]B. Thus we are able to measure g^* for different Landau level indices (n = 1, 2, 3,…). In our system, the spin gap value is proportional to the magnetic field with good accuracy and corresponds to a constant g^* for a pair of given spin-split Landau levels. Figure4 shows the measured g^* as a function of Landau level index n for samples A and B. In all cases, the measured g^* is greatly enhanced over its bulk value in GaAs (0.44). We ascribe this enhancement to exchange interactions. We suggest that the determined g^* is in the zero disorder limit since the positions of the spin-split Landau levels are located using Equation2. Figure 4. The measured g^* as a function of Landau level index n. The measured g^* as a function of Landau level index n for samples A and B at T = 0.3 K. It is worth mentioning that conventional activation energy studies are not applicable to our data obtained on sample A, sample B as well as the GaN-based 2DEG in our previous work [19]. The reason for this is that the values of the R[xx] (and σ[xx]) minima are high; therefore, it is not appropriate to speak of electrons being thermally activated from the localized states to the extended states. In order to provide further understanding on the measurements of the spin gap, we have studied the slopes of the spin-split Landau levels in the E-B plane and have also performed conventional activation energy measurements on sample C over the same magnetic field range. Sample C is a more disordered device compared with samples A and B thus we can only perform measurements in the regime where the ρ[xx] corresponding to a spin-split ν = 3 state is resolved. Figure5 shows the evolution of the n = 1↓ and n = 1↑ resistivity peaks at different magnetic fields for sample C. From the difference between the two slopes of n = 1↓ and n = 1↑ spin-split Landau levels, the exchange-enhanced g-factor for the n = 1 Landau level is measured to be 11.65 ± 0.14, which is in close agreement with those obtained on a much higher mobility in samples A and B. We note that in a dilute GaAs 2DEG, the enhancement factor of g can decrease from about 6 to 3 as the density is reduced [20]. It may be possible that as our 2DEG density is considerably higher than those reported in the seminal work of Tutuc, Melinte, and Shayegan. Therefore we do not see such a trend in our system. Figure 5. Local Fermi energy E and the corresponding 2D carrier density n[2D]. The local Fermi energy E and the corresponding 2D carrier density n[2D ]for n = 1↓ and n = 1↑, Landau levels as a function of B for Sample C at T = 0.3 K. Let us now turn our attention to the activation energy measurements. Figure6 shows ln (ρ[xx]) as a function of 1/T for eight different carrier densities while maintaining the filling factor at ν = 3 for sample C. The resistivity shows activated behavior . Figure6 shows the activation energy Δ[s] determined from a least-square fit to the experimental data shown in Figure5. We can see that the spin gaps Δ[s] drops approximately linearly to zero at a critical magnetic field B[c] ~ 3.47 T. The spin gap is expected to have the form Δ[s] = g[0]μ[B]B + E[ex] = g^*μ[B]B[12], where E[ex] is the many-body exchange energy which lifts the g-factor from its bare value (0.44 in GaAs) to its enhanced value g^*. Figure7 shows that the measured Δ[s] is greatly enhanced over the single particle Zeeman energy (shown in the dotted line), yielding g^* = 4.64 ± 0.30. Moreover, the exchange energy shows a roughly linear B dependence. The disorder broadening Γ[s] can be estimated from the critical magnetic B[c] [12]. From this we obtain a quantum lifetime of Γ[s] = 0.71 ps, in qualitative agreement with the value 0.40 ps obtained from the Dingle plot. For the low-field regime where Δ [s] < Γ[s], the many-body interactions are destroyed by the disorder, and there is no spin-splitting for the magnetic field less than B[c]. As shown in Figure7, the ‘spin gap’ measured by the conventional activation energy studies is very different from that measured by the direct measurements (shown in the dashed line). This is consistent with the fact that activation energy studies yield a mobility gap which is smaller than the real spin gap in the spectrum. Moreover, the measured by studying the slopes of the n = 1 spin-split Landau levels is approximately 2.4 times larger than that determined from the activation energy studies. Our data shows that both the spin gaps and g^* measured by the activation energy studies are very different from those determined from direct measurements. A possible reason for this is that there exists disorder within 2D system which is indispensable to the observation of the IQHE. The direct measurements are performed in the zero disorder limit. On the other hand, in the activation energy studies, the disorder within the quantum Hall system must be considered. As shown in the inset of Figure7, the spin gap in the zero disorder limit is the energy difference between neighboring peaks in the density of states N(E) which is larger than the energy spacing between the edges of the localized states given the finite extended states. We suggest that further theoretical studies are required in order to obtain a full understanding of our results on the spin gaps and g^*. Figure 6. Logarithm of ρ[xx]( B )(ν=3) versus the inverse of temperature 1/ T . The logarithm of ρ[xx](B)(ν=3) versus the inverse of temperature 1/T at different gate voltages (and hence B) for sample C. From left to right: B = 5.72 (pentagon), 5.46 (star), 5.21 (hexagon), 4.97 (diamond), 4.70 (inverted triangle), 4.55 (triangle), 4.39 (heptagon) and 4.25 (square) T, respectively. The slopes of the straight line fits Δ[s ]are shown in Figure7. Figure 7. The experimentally determined Δ[s]/k[B ]at various B. The straight line fit is discussed in the text. The dotted line is the bare Zeeman energy assuming g[0 ]= 0.44. The dashed line corresponds to the spin gap using the measured g^* = 11.65 by the direct measurements. The inset corresponds to a schematic diagram (density of states N(E) versus E) showing the spin gap Δ[s] as a result of the activated behavior from the localized states (hatched areas) to the extended states (in blue). The spin gap in the zero disorder limit Δ[s] is the energy difference between the neighboring peaks in N(E). In conclusion, we have performed direct measurements of the spin gaps in gated GaAs 2DEGs by studying the slopes of spin-split Landau levels in the energy-magnetic field plane. The measured g-factor is greatly enhanced over its bulk value (0.44). Since disorder exists in any experimentally realized system, conventional activation energy studies always measure the mobility gap due to disorder which is different from the real spin gap as shown in our results. As the spin gap is one of the most important energy scales and governs the electron spin degree of freedom, our experimental results provide useful information in the field of spintronics, spin-related phenomena, and quantum computation applications. Authors’ contributions TYH and CTL performed the measurements. CTL, YFC, and GHK coordinated the projects. MYS and DAR grew the samples. TYH, YFC, and CTL drafted the paper. All authors read and approved the final TYH, CTL and YFC were supported by the NSC, Taiwan and National Taiwan University (grant no. 102R890932 and grant no. 102R7552-2). The work at Cambridge was supported by the EPSRC, UK. This research was supported by the World Class University program funded by the Ministry of Education, Science and Technology through the National Research Foundation of Korea (R32-10204). 1. Shen C, Trypiniotis T, Lee KY, Holmes SN, Mansell R, Husain M, Shah V, Li XV, Kurebayashi H, Farrer I, de Groot CH, Leadley DR, Bell G, Parker EHC, Whall T, Ritchie DA, Barnes CHW: Spin transport in germanium at room temperature. Appl Phys Lett 2010, 97:162104. Publisher Full Text 2. Watson SK, Potok RM, Marcus CM, Umansky V: Experimental realization of a quantum spin pump. Phys Rev Lett 2003, 91:258301. PubMed Abstract | Publisher Full Text 3. Khrapai S, Shashkin AA, Dolgopolov VT: Direct measurements of the spin and the cyclotron gaps in a 2D electron system in silicon. Phys Rev Lett 2003, 91:126404. PubMed Abstract | Publisher Full Text 4. Attaccalite C, Moroni S, Gori-Giorgi P, Bachelet GB: Correlation energy and spin polarization in the 2D electron gas. Phys Rev Lett 2002, 88:256601. PubMed Abstract | Publisher Full Text 5. Abrahams E, Kravchenko SV, Sarachik MP: Metallic behavior and related phenomena in two dimensions. Rev Mod Phys 2001, 73:251. Publisher Full Text 6. Nicholas RJ, Haug RJ, von Klitzing K, Weimann G: Exchange enhancement of the spin splitting in a GaAs-Ga[x]Al[1 - x]As heterojunction. Phys Rev B 1988, 37:1294. Publisher Full Text 7. Dolgopolov VT, Shashkin AA, Aristov AV, Schmerek D, Hansen W, Kotthaus JP, Holland M: Direct measurements of the spin gap in the two-dimensional electron gas of AlGaAs-GaAs heterojunctions. Phys Rev Lett 1997, 79:729. Publisher Full Text 8. Fang FF, Stiles PJ: Effects of a tilted magnetic field on a two-dimensional electron gas. Phys Rev 1968, 174:823. Publisher Full Text 9. Janak JF: g Factors for an interacting electron gas. Phys Rev 1969, 178:1416. Publisher Full Text 10. Nicholas RJ, Haug RJ, von Klitzing K, Weimann G: Exchange enhancement of the spin splitting in a GaAs-Ga[x]Al[1 - x]As heterojunction. 11. Kim GH, Nicholls JT, Khondaker SI, Farrer I, Ritchie DA: Tuning the insulator-quantum Hall liquid transitions in a two-dimensional electron gas using self-assembled InAs. Phys Rev B 2000, 61:10910. Publisher Full Text 12. Thomas KJ, Nicholls JT, Simmons MY, Pepper M, Mace DR, Ritchie DA: Possible spin polarization in a one-dimensional electron gas. Phys Rev Lett 1996, 77:135. PubMed Abstract | Publisher Full Text 13. Liang C-T, Lin L-H, Chen KY, Lo S-T, Wang Y-T, Lou D-S, Kim G-H, Chang YH, Ochiai Y, Aoki N, Chen JC, Lin Y, Huang CF, Lin S-D, Ritchie DA: On the direct insulator-quantum Hall transition in two-dimensional electron systems in the vicinity of nanoscaled scatterers. Nanoscale Res Lett 2011, 6:131. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text 14. Goldman VJ, Jain JK, Shayegan M: Nature of the extended states in the fractional quantum Hall effect. Phys Rev Lett 1990, 65:907. PubMed Abstract | Publisher Full Text 15. Glozman I, Johnson CE, Jiang HW: Fate of the delocalized states in a vanishing magnetic field. Phys Rev Lett 1995, 74:594. PubMed Abstract | Publisher Full Text 16. Nomura S, Yamaguchi M, Akazaki T, Tamura H, Maruyama T, Miyashita S, Hirayama Y: Enhancement of electron and hole effective masses in back-gated GaAs/Al[x]Ga[1 - x]As quantum wells. 17. Braña AF, Diaz-Paniagua C, Batallan F, Garrido JA, Muñoz E, Omnes F: Scattering times in AlGaN/GaN two-dimensional electron gas from magnetoresistance measurements. J Appl Phys 2000, 88:932. Publisher Full Text 18. Cho KS, Huang T-Y, Huang CP, Chiu YH, Liang C-T, Chen YF, Lo I: Exchange-enhanced g-factors in an Al0.25Ga0.75N/GaN two-dimensional electron system. J Appl Phys 2004, 96:7370. Publisher Full Text 19. Tutuc E, Melinte S, Shayegan M: Spin polarization and g factor of a dilute GaAs two-dimensional electron system. Phys Rev Lett 2002, 88:036805. PubMed Abstract | Publisher Full Text Sign up to receive new article alerts from Nanoscale Research Letters
{"url":"http://www.nanoscalereslett.com/content/8/1/138","timestamp":"2014-04-21T02:10:54Z","content_type":null,"content_length":"105331","record_id":"<urn:uuid:a04d490c-8a6e-4a01-b288-02246e606013>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
Stability of the viscous flow of a fluid through a flexible tube Kumaran, K (1995) Stability of the viscous flow of a fluid through a flexible tube. In: Journal of Fluid Mechanics, 294 . pp. 259-281. Stability_of_the_viscous.pdf - Published Version Restricted to Registered users only Download (1807Kb) | Request a copy The stability of Hagen-Poiseuille flow of a Newtonian fluid of viscosity eta in a tube of radius R surrounded by a viscoelastic medium of elasticity G and viscosity eta(s) occupying the annulus R < r < HR is determined using a linear stability analysis. The inertia of the fluid and the medium are neglected, and the mass and momentum conservation equations for the fluid and wall are linear. The only coupling between the mean flow and fluctuations enters via an additional term in the boundary condition for the tangential velocity at the interface, due to the discontinuity in the strain rate in the mean flow at the surface. This additional term is responsible for destabilizing the surface when the mean velocity increases beyond a transition value, and the physical mechanism driving the instability is the transfer of energy from the mean flow to the fluctuations due to the work done by the mean flow at the interface. The transition velocity Gamma(t) for the presence of surface instabilities depends on the wavenumber k and three dimensionless parameters: the ratio of the solid and fluid viscosities eta(r) = (eta(s)/eta), the capillary number Lambda = (T/GR) and the ratio of radii H, where T is the surface tension of the interface. For eta(r) = 0 and Lambda = 0, the transition velocity Gamma(t) diverges in the limits k much less than 1 and k much greater than 1, and has a minimum for finite k. The qualitative behaviour of the transition velocity is the same for Lambda > 0 and eta(r) = 0, though there is an increase in Gamma(t) in the limit k much greater than 1. When the viscosity of the surface is non-zero (eta(r) > 0), however, there is a qualitative change in the Gamma(t) vs. k curves. For eta(r) < 1, the transition velocity Gamma(t) is finite only when k is greater than a minimum value k(min), while perturbations with wavenumber k < k(min) are stable even for Gamma--> infinity. For eta(r) > 1, Gamma(t) is finite only for k(min) < k < k(max), while perturbations with wavenumber k < k(min) or k > k(max) are stable in the limit Gamma--> infinity. As H decreases or eta(r) increases, the difference k(max)- k(min) decreases. At minimum value H = H-min, which is a function of eta(r), the difference k(max)-k(min) = 0, and for H < H-min, perturbations of all wavenumbers are stable even in the limit Gamma--> infinity. The calculations indicate that H-min shows a strong divergence proportional to exp (0.0832 eta(r)(2)) for eta(r) much greater than 1. Actions (login required)
{"url":"http://eprints.iisc.ernet.in/37919/","timestamp":"2014-04-19T22:38:24Z","content_type":null,"content_length":"30429","record_id":"<urn:uuid:860c9d7b-091d-4f22-9692-65ff13c480ee>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
IB Physics/Mechanics From Wikibooks, open books for an open world Return to IB Physics Topic 2: Mechanics Kinematics (2.1)[edit] Kinematic concepts[edit] 2.1.1 Define displacement, velocity, speed and acceleration. Kinematic Units Symbol Definition SI Unit Vector or Scalar? Displacement s The distance moved in a particular direction m Vector Velocity v or su The rate of change of displacement. Velocity = change of displacement over time taken m s^-1 Vector Speed v or u The rate of change of distance. Speed = distance gone over time taken m s^-1 Scalar Acceleration a The rate of change of velocity. Acceleration = change of velocity over time taken m s^-2 Vector • Vector quantities always have a direction associated with them. 2.1.2 Define and explain the difference between instantaneous and average values of speed, velocity and acceleration. • Average value - over a period of time. • Instantaneous value - at one particular time. 2.1.3 Describe an object's motion from more than one frame of reference. Graphical representation of motion[edit] 2.1.4 Draw and analyse distance–time graphs, displacement–time graphs, velocity–time graphs and acceleration–time graphs. 2.1.5 Analyse and calculate the slopes of displacement–time graphs and velocity – time graphs, and the areas under velocity–time graphs and acceleration–time graphs. Relate these to the relevant kinematic quantity. Uniformly accelerated motion[edit] Determine the velocity and acceleration from simple timing situations Derive the equations for uniformly accelerated motion. Describe the vertical motion of an object in a uniform gravitational field. Describe the effects of air resistance on falling objects. Solve problems involving uniformly accelerated motion. Forces and Dynamics (2.2)[edit] Forces and free-body diagrams[edit] Newton’s first law[edit] Newton's First Law of Physics states that a body will remain at rest or moving with a constant velocity unless acted upon by an unbalanced force. Equilibrium is the condition of a system in which competing influences (such as forces) are balanced. Newton’s second law[edit] ΣF = ma Alternately: ΣF = Δp/Δt In words, the resultant force is all that matters in the second law. The direction of motion depends on the direction of the resultant force. Newton’s third law[edit] If body A exerts a force on body B, then body B exerts an equal and opposite force on body A. Inertial Mass, Gravitational Mass and Weight (2.3)[edit] An object's inertial mass is defined as the ratio of the applied force F, to its acceleration, a. $m_{Inertial}= \dfrac{F}{a}$ State Newton's first law of motion (2.2.4)[edit] In ancient times, Aristotle had maintained that a force is what is required to keep a body in motion. The higher the speed, the larger the force needed. Aristotle's idea of force is not unreasonable and is in fact in accordance with experience from everyday life: It does require a force to push a piece of furniture from one corner of a room to another. What Aristotle failed to appreciate is that everyday life is plagued by friction. An object in motion comes to rest because of friction and thus a force is required if it is to keep moving. This force is needed in order to cancel the force of friction that opposes the motion. In an idealized world with no friction, a body that is set into motion does not require a force to keep it moving. Galileo, 2000 years after Aristotle, was the first to realize that the state of no motion and the state of motion with constant speed in a straight line are indistinguishable from each other. Since no force is present in the case of no motion, no forces are required in the case of motion in a straight line with constant speed either. Force is related to changes in velocity (i.e. acceleration) Newton's first law (generalizing Galileo's statements) states the following: When no forces act on a body, that body will either remain at rest or continue to move along a straight line at constant speed. A body that moves with acceleration (i.e. changing speed or changing direction of motion) must have a force acting on it. An ice hockey puck slides on ice with practically no friction and will thus move with constant speed in a straight line. A spacecraft leaving the solar system with its engines off has no force acting on it and will continue to move in a straight line at constant speed (until it encounters another body that will attract or hit it). Using the first law, it is easy to see if a force is acting on a body. For example, the earth rotates around the sun and thus we know at once that a force must be acting on the Earth. Newton's first law is also called the law of Inertia Inertia is the reluctance of a body to change its state of motion. Inertia keeps the body in the same state of motion when no forces act on the body. When a car accelerates forward, the passengers are thrown back into their seats. If a car brakes abruptly, the passengers are thrown forward. This implies that a mass tends to stay in the state of motion it was in before the force acted on it. The reaction of a body to a change in its state of motion is inertia. A well-known example of inertia is that of a magician who very suddenly pulls the tablecloth off a table leaving all the plates, glasses, etc., behind on the table. The inertia of these objects make them 'want' to stay on the table where they are. Similarly, if you pull very suddenly on a roll of kitchen paper you will tear off a sheet. But if you pull gently you will only succeed in making the paper roll rotate. Work, Energy and Power (2.5)[edit] Work refers to an activity involving a force and movement along the direction of the force. It is a scalar quantity that is measured in Joules (Newton meters in SI units) which can be defined as: Work done= F×s×cosθ Where F is the force applied to the object, s is the displacement of the object and cosθ is the cosine of the angle between the force and the displacement. In a linear example (with the force being exerted in the same direction as the displacement), the cosθ is equal to 1 and the equation simplifies to $Work=F\times s$. Example calculation: If a force of 20 newtons pushes an object 5 meters in the same direction as the force what is the work done? F= 20 N s=5 m W=F×s=20×5= 100 J 100 Joules of work is done Examples (when is work done?): Force making an object move faster (accelerating) Lifting an object up (moving it to a higher position in the gravitational field) Compressing a spring When is work not done When there is no force Object moving at a constant speed Object not moving Some useful equations; If an object is being lifted vertically the work done to it can be calculated using the equation Work done= mgh Where m is the mass in kilograms, g is the earth's gravitational field strength (10N kg^-1), and h is the height in meters Work done in compressing or extending a spring Work done = ½ kx^2 Where k is Hooke's constant and x is the displacement Energy and Power[edit] Energy is the capacity for doing work. The amount of energy you transfer is equal to the work done. Energy is a measure of the amount of work done, this means that the units for energy and work must be the same- joules. Energy is like the "currency" for performing work. To do 100 joules of work, you must expend 100 joules of energy. Conservation of energy In any situation the change in energy must be accounted for. If it is 'lost' by one object it must be gained by another. This is the principle of conservation of energy which can be stated in several The total overall energy of a closed system must be constant Energy is neither created or destroyed, it just changes form. there is no change in the total energy of the universe Energy can be in many different types these include: Kinetic energy, Gravitational potential energy, Elastic potential energy, Electrostatic potential energy, Thermal energy, Electrical energy, Chemical energy, Nuclear energy, Internal energy, Radiant energy, Solar energy, and light energy. You will need equations for the first three Kinetic energy = ½ mv^2 where m is the mass in kg, v is the velocity (in ms^-1) Gravitational potential energy= mgh where m is the mass in kg, g is the gravitational field strength, and h is the change in height Elastic potential energy =½ kx^2 where k is the spring constant and x is the extension Power -measured in Watts (W) or Joules per second (Js-1)- is the rate of doing work or the rate at which energy is transferred. Power= energy transferred÷time taken= energy transferred÷time taken If something is moving at a constant velocity v against a constant frictional force f, the power P needed is P= fv If you do 100 joules of work in one second (using 100 joules of energy), the power is 100 watts. Efficiency is the ratio of useful energy to the total energy transferred . Work-Energy Principle[edit] The change in the kinetic energy of an object is equal to the net work done on the object. $W_{net} = \dfrac{1}{2}mv_{final}^2 - \dfrac{1}{2}mv_{initial}^2$ This fact is referred to as the Work-Energy Principle and is often a very useful tool in mechanics problem solving. It is derivable from conservation of energy and the application of the relationships for work and energy, so it is not independent of the conservation laws. It is in fact a specific application of conservation of energy. However, there are so many mechanical problems which are solved efficiently by applying this principle that it merits separate attention as a working principle. For a straight-line collision, the net work done is equal to the average force of impact times the distance traveled during the impact. Average impact force x distance traveled = change in kinetic energy If a moving object is stopped by a collision, extending the stopping distance will reduce the average impact force. Uniform Circular Motion (2.6)[edit] The centripetal force with constant velocity $v$, at a distance $R$ from the center is defined as: $F = m\dfrac{v^2}{R}$
{"url":"http://en.wikibooks.org/wiki/IB_Physics/Mechanics","timestamp":"2014-04-19T04:44:04Z","content_type":null,"content_length":"43615","record_id":"<urn:uuid:0899c582-a1dc-4831-962d-c73e84ac9b71>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
How well can you approximate a function by a band-restricted function? up vote 2 down vote favorite Say I have a compactly supported $C^1$ function $f:\mathbb{R} \to \mathbb{R}$. Let $R>0$. Let $\nu$ be some reasonable measure on $\mathbb{R}$ -- take, for instance, (a) $d\nu(t)=dt$ or (b) $d\nu(t)= e^{-t}$ for $t>0$ and $d\nu(t)=0$ for $t\leq 0$. Let $\delta(R)$ be the minimum of $|f-\widehat{g}|_2 = \left( \int_\mathbb{R} |f(t)-\widehat{g}(t)|^2 d\nu(t)\right)^{1/2}$ over all functions $g:\mathbb{R} \to \mathbb{C}$ supported on $\lbrack -R,R What is $\delta(R)$? How fast does it decrease as $R\to \infty$? Given $R$, can one construct a $g$ that attains the minimum? (A variation on the same question: allow measures, not just functions $g$, supported on $\lbrack -R,R\rbrack$.) Update: for $d\nu(t) = t$ this is very easy by isometry, as mentioned below; the minimum is attained for $g$ equal to the restriction of $\widehat{f}$ to $\lbrack -R,R\rbrack$ -- and so, if $f$ is in $C^k$, $\delta(R)$ decreases at least as fast as $1/R^{k-1}$ as $R\to \infty$. I am really more interested in the answers for the measure $\nu$ given in (b) above. fourier-analysis nt.number-theory ca.analysis-and-odes 2 Really? $\widehat g$ is entire if $g$ is compactly supported, so we do not have any choice at all as to what to take for $g$ and most of the time we have no possibility to find such $g$ with any $R$. I hope you meant something else or I just misunderstood what is written... – fedja Nov 4 '12 at 14:05 Hm, I guess I really had (b) in mind. Let me rewrite this... – H A Helfgott Nov 4 '12 at 15:55 All right, rewritten. Thanks. – H A Helfgott Nov 4 '12 at 16:15 2 Maybe I'm missing something, but in case (a) (Lebesgue measure), $|f-\widehat{g}|_2 = |\widehat{f}-g|_2$ by Plancherel, so shouldn't you just take $g$ to be the restriction of $\widehat{f}$ to $ [-R,R]$? – Henry Cohn Nov 4 '12 at 17:36 I was about to say, yes, for (a) this is trivial by isometry. But what can one say about (b), say? – H A Helfgott Nov 4 '12 at 20:10 show 11 more comments Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged fourier-analysis nt.number-theory ca.analysis-and-odes or ask your own question.
{"url":"https://mathoverflow.net/questions/111445/how-well-can-you-approximate-a-function-by-a-band-restricted-function","timestamp":"2014-04-18T21:44:22Z","content_type":null,"content_length":"53769","record_id":"<urn:uuid:73b97b51-ccd4-41c1-8cb0-0999ba8ff740>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
Unbiased Shifts for Brownian Motion Add to your list(s) Download to your calendar using vCal If you have a question about this talk, please contact Julia Blackwell. Unbiased shifts of Brownian motion Based on joint work with Günter Last and Peter Mörters Let B = (B(t) : t in R) be a two-sided standard Brownian motion. Let T be a real-valued measurable function of B. If T is a nonnegative stopping time then the shifted process (B(T + t) – B(T) : t nonnegative) is a one-sided Brownian motion independent of B(T). However, the two-sided process (B(T + t) – B(T) : t in R) need not be a Brownian motion. Moreover, the example of a fixed time T = s shows that even if it is, it need not be independent of B(T). Call T an unbiased shift of B if (B(T + t) – B(T) : t in R) is a Brownian motion independent of B(T). Unbiased shifts can be characterized in terms of allocation rules balancing additive functionals of B. For any probability distribution Q on R we construct a stopping time T with the above properties such that B(T) has distribution Q. Also moment and minimality properties of unbiased shifts are The case when Q is concentrated at zero is of special interest. We obtain a rigorous formulation of the intuitive idea that B looks globally the same from all its zeros, thus resolving an issue raised by Mandelbrot in The Fractal Geometry of Nature. The result can be stated as follows: if we travel in time according to the clock of local time we always see a two-sided Brownian motion. This talk is part of the Probability series. This talk is included in these lists: Note that ex-directory lists are not shown.
{"url":"http://www.talks.cam.ac.uk/talk/index/32521","timestamp":"2014-04-20T03:24:54Z","content_type":null,"content_length":"12959","record_id":"<urn:uuid:e8503a02-6b4a-439d-bd90-fd2db0b9284f>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
Parity calculation [Archive] - Parallax Forums Ken Peterson 08-29-2007, 12:02 AM Anybody know of a really efficient parity bit calculation routine in SPIN?· I have an application that needs ODD parity and I was wondering if there is a more elegant way than shifting and XORing. Thankyou!· http://forums.parallax.com/images/smilies/smile.gif The more I know, the more I know I don't know.· Is this what they call Wisdom?
{"url":"http://forums.parallax.com/archive/index.php/t-96803.html","timestamp":"2014-04-17T06:43:52Z","content_type":null,"content_length":"13846","record_id":"<urn:uuid:431f9465-8c9c-4fc1-b789-73a5d136ca23>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
AES-0.2.3: Fast AES encryption/decryption for bytestrings Source code Contents Index An occasionally pure, monadic interface to AES type AES s a = AEST (ST s) a Source type AEST m a = ReaderT AESCtx (WriterT ByteString m) a Source Modes ECB and CBC can only handle full 16-byte frames. This means the length of every strict bytestring passed in must be a multiple of 16; when using lazy bytestrings, its component strict bytestrings must all satisfy this. Other modes can handle bytestrings of any length. However, encrypting a bytestring of length 5 and then one of length 4 is not the same operation as encrypting a single bytestring of length 9; they are internally padded to a multiple of 16 bytes. For OFB and CTR, Encrypt and Decrypt are the same operation. For CTR, the IV is the initial value of the counter. class Cryptable a where Source A class of things that can be crypted The crypt function returns incremental results as well as appending them to the result bytestring. crypt :: a -> AES s a Source :: MonadUnsafeIO m => Mode The AES key - 16, 24 or 32 bytes -> ByteString The IV, 16 bytes -> ByteString -> Direction -> AEST m a -> m (a, ByteString) Run an AES computation => Mode The AES key - 16, 24 or 32 bytes -> ByteString The IV, 16 bytes -> ByteString -> Direction -> forall s. AES s a -> (a, ByteString) Run an AES computation Produced by Haddock version 2.6.0
{"url":"http://hackage.haskell.org/package/AES-0.2.3/docs/Codec-Crypto-AES-Monad.html","timestamp":"2014-04-24T17:03:51Z","content_type":null,"content_length":"14832","record_id":"<urn:uuid:455dc6ce-b74f-4feb-bf83-6202421a4c9f>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
floating point unit programming problem 12-15-2003 #1 Join Date Jan 2003 floating point unit programming problem I'm bored, no school today, so I'm learning how to do assembly with the floating point unit instruction set. I'm trying to normalize a vector, but I cannot access the vector's x,y,or z components, instead i've got to copy to a local variable. Any way around doing that? Here's the code (with comments) void Vector3::Normalize() #if 1 float basic = x*x + y*y + z*z; FLD basic; //Put basic length into st(0) FSQRT; //Put SQUARE ROOT of basic into st(0) FSTP basic; //Store square root of basic into memory address of basic, pop stack if(basic>.0001) //won't ever be exactly zero //Can't seem to access this->x,y,or z float x1 = x; float y1 = y; float z1 = z; FLD x1; //Load into FPU stack element zero FDIV basic; //divide by length FSTP x1; //store in x1 and pop stack FLD y1; FDIV basic; FSTP y1; FLD z1; FDIV basic; FSTP z1; x = x1; y = y1; z = z1; float magnitude = sqrt( (x * x) + (y * y) + (z * z) ); x /= magnitude; y /= magnitude; z /= magnitude; Last edited by Silvercord; 12-15-2003 at 01:59 PM.
{"url":"http://cboard.cprogramming.com/brief-history-cprogramming-com/48373-floating-point-unit-programming-problem.html","timestamp":"2014-04-18T16:27:32Z","content_type":null,"content_length":"40016","record_id":"<urn:uuid:767c248c-1398-41c5-bbc6-678b99d31072>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
Children Are Born Mathematicians : Supporting Mathematical Development, Birth to Age 8 Why Rent from Knetbooks? Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option for you. Simply select a rental period, enter your information and your book will be on its way! Top 5 reasons to order all your textbooks from Knetbooks: • We have the lowest prices on thousands of popular textbooks • Free shipping both ways on ALL orders • Most orders ship within 48 hours • Need your book longer than expected? Extending your rental is simple • Our customer support team is always here to help
{"url":"http://www.knetbooks.com/children-born-mathematicians-supporting/bk/9780131116771","timestamp":"2014-04-21T00:25:04Z","content_type":null,"content_length":"34518","record_id":"<urn:uuid:180d6dad-19d1-469d-a9bd-e902e18f5dc7>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Induction March 24th 2010, 06:01 PM #1 Junior Member Feb 2010 Math Induction Let a be a non zero number and m and n be integers. Prove by induction that: (1) $a^{m+n} = a^{m} a^{n}$ (2). ${(ab)}^{n} = {a^n}{b^n}$ can anyone explain me the basis and inductive steps here? Do I have to start by supposing $n=0$ in (1) and $n=1$ in (2)? what about the inductive step? Exactly what properties of exponentiation are you given to work with? If you don't know what exponentiation is, you have no hope of proving anything about it. (Not saying you don't know how to exponentiate--just that your book/prof is not expecting you to prove this without giving you something to start from. Look in your chapter or ask your prof what you're allowed to assume about exponentiation to begin with.) When $n=0 \text{ and } m=0$, $a^{m+n} = a^{m} a^{n}$ holds since $a^{0+0}=1=a^{0 \cdot 0}$ For induction hypothesis, assume $a^{p+q} = a^{p} a^{q}$, where $p$ and $q$ are integers. We begin by $a^{p+q} = a^{p} a^{q}$. Multiplying through by a, we obtain $a \cdot (a^{p+q}) = a \cdot a^{p} a^{q}$. Plug in the assumption for the left handside in the parethesis, we obtain $a \cdot (a^{p} a^{q})= a \cdot a^{p} a^{q}$. The result is as desired. Consequently, by induction hypothesis $a^{m+n} = a^{m} a^{n}$ for all integers $m$ and $n$. March 25th 2010, 07:43 AM #2 March 25th 2010, 07:57 AM #3 Junior Member Feb 2010 March 25th 2010, 08:01 AM #4 March 26th 2010, 02:24 PM #5 Sep 2009
{"url":"http://mathhelpforum.com/discrete-math/135537-math-induction.html","timestamp":"2014-04-17T05:49:36Z","content_type":null,"content_length":"43155","record_id":"<urn:uuid:6329f0ce-d8a9-4ed5-a7ef-5b07dc507fe9>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
For a representation V of a finite group G, when is Hom(W, W⊗V) trivial for all irreps W? up vote 6 down vote favorite This is probably really easy, but I just need someone to help me get mentally unstuck. As part of a description of the McKay correspondence, I want to show that if $G$ is a finite subgroup of $SU(2)$ and $V$ the corresponding 2-dimensional representation, then $\dim \text{Hom}(W, W \otimes V) = 0$ for any irreducible representation $W$ of $G$. I suspect the result is true in slightly greater generality, but it clearly can't always be true. Since $\dim \text{Hom}(W, W \otimes V) = \dim \text{Hom}(W \otimes W^{\ast}, V)$, the result is false if, for example, $V \simeq W \otimes W^{\ast}$ or is a direct summand thereof for some $W$. So I am wondering when, for a given $G$ and $V$, it is always true that $\dim \text{Hom}(W, W \otimes V) = 0$ for all irreducible representations $W$. One can easily reduce to the case that $V$ is rt.representation-theory gr.group-theory finite-groups If you know the representation theory of $\rm SU(2)$, the fact that $\hom(W,W\otimes V) = 0$ is immediate for $W$ a finite-dimensional irrep. Indeed, remember that the finite-dimensional irreps of $\rm SU(2)$ are classified by their dimension &mdash; there is one for each positive integer &mdash; and we have the decomposition $[n] \otimes 2 = [n-1] \oplus[n+1]$. But I assume that you know this line of reasoning, and are either trying to prove the representation theory or are simply interested in the more general case. – Theo Johnson-Freyd Apr 25 '10 at 20:27 1 To be more precise: yes, I know this, but it says nothing about how the representations [n], [n-1], [n+1] decompose as representations of G. – Qiaochu Yuan Apr 25 '10 at 21:25 add comment 4 Answers active oldest votes One necessary condition is that the center of $G$ needs to act trivially in $V$ for $\mathrm{Hom}(W, W \otimes V)$ to ever be non-trivial. The character of the center justs multiplies in a tensor product, and so we can't have a map from $W$ to $V \otimes W$ if $V$ has a non-trivial central character. (This also follows from Ben's observation above.) I don't know if this is also sufficient. This boils down to looking at groups with trivial center. up vote 5 down vote In any case, you originally asked about finite subgroups of SU(2) and its fundamental representation. These are all classified, and none of them have trivial center, so all satisfy this accepted property. Proof of the last bit: any subgroup $G$ of SU(2) gives a subgroup $\overline{G}$ of SO(3) by projection. If $\overline{G}$ has even order, it has an element of order 2, which necessarily lifts to an element of order 4 whose square is $-1$, so $-1 \in G$. The only subgroups of SO(3) of odd order are the odd cyclic groups, which lift to Abelian groups. Some references are is these notes by Dolgachev or an earlier MO question. The center of the cyclic group of order n certainly doesn't act trivially, but the result is still true for that case...? – Qiaochu Yuan Apr 25 '10 at 22:02 Perhaps you were confused about which direction I was proving? I tried to clarify. As a special case of what I wrote (or what Ben explained), for any non-trivial rep $V$ of any abelian group, $\mathrm{Hom}(W, W \otimes V)$ is always trivial. – Dylan Thurston Apr 25 '10 at 22:13 1 He's answering the contra-positive. If $Hom(V,W^*\otimes W)$ is non-zero, then Z(G) acts trivially on $V$, you agree? – David Jordan Apr 25 '10 at 22:14 Ah. Yes; I had also come to this conclusion by playing around with characters. Can it be shown that a finite subgroup of SU(2) has nontrivial center without already knowing what they are? – Qiaochu Yuan Apr 25 '10 at 22:25 I can prove that a nontrivial finite subgroup of SU(2) has nontrivial center without the classification, which means that your answer is what I need; thanks! The proof is as follows: 2 if V is irreducible, then dim V = 2 divides the order of G, so -I is in G. Otherwise, V is a sum of two one-dimensional representations and G is abelian. – Qiaochu Yuan Apr 26 '10 at show 2 more comments This is true if and only if $V$ doesn't occur in the permutation representation of $G$ acting on itself by conjugation (since that's the sum over irreps of $W\otimes W^*$). This is probably the cleanest description you're likely to get. up vote 13 down vote Of course, one can also state this in terms of characters in which case you want $\sum_{g\in G} \chi_v(g)|C_{G}(g)|=0$. Thanks! I'm going to hold out for an answer more specific to the case I care about, but if one isn't forthcoming I'll accept this answer. – Qiaochu Yuan Apr 25 '10 at 21:13 Another easy remark is that if $g_1,g_2,\ldots,g_k$ is a full set of representatives for the conjugacy classes of $G$, then Ben's condition is equivalent to $\sum_{i=1}^{k} \chi_ {V}(g_i) =0.$ – Geoff Robinson Jun 2 '11 at 19:56 add comment For a compact group $G$ one can define the following equivalence: given two irreps $X$ and $Y$, $X \sim Y$ if they both appear as summands in a finite string of tensor products of irreps $X_1 \otimes X_2 \otimes \dots X_n$. The equivalence classes have the structure of an abelian group which turns out to be the dual of the centre of $G$. This was conjectured in http:// up vote 2 arXiv.org/abs/math/0311170 and proven in http://arxiv.org/abs/math/0312257. Thus $Hom(W \otimes W^*,V) \neq 0$ iff V is in the identity class (i.e. the centre acts trivially on $V$). down vote This sounds great, but I'm not sure what the meaning of "compare" is in your first sentence. – Qiaochu Yuan Apr 25 '10 at 23:21 I'm concerned. He only wants to allow $W^*\otimes W$, and not a string of products of such. If he allows strings of size larger than two, then I think it's well known for $G$ finite that $V$ descends to a rep of $G/Z$, since $G/Z$ acts faithfully on $C[G]$, so that every $G/Z$ rep appears in some tensor power of $C[G]$. – David Jordan Apr 25 '10 at 23:28 @Qiaochu: I replaced "compare" with "appear" (sorry, I mixed words from different languages). @David: it is enough to find one such W to conclude that V is in the identity class, you don't need to check it for all strings. – pasquale zito Apr 26 '10 at 0:27 Thank you for clarifying, but I'm still confused. I see that the Hom condition implies that V is in the identity class. But I don't see why every V in the identity class must be contained in W^* ot W for an irreducible W. It seems that V could be contained in W^* ot W ot W^* ot W (here "ot" is shorthand for \otimes). Is there a statement about strings of length 2 in the irreps? Sorry if I'm being dense. – David Jordan Apr 26 '10 at 0:36 @David: you are absolutely right, silly me! – pasquale zito Apr 26 '10 at 3:31 add comment If you care only about semisimple Lie theory, you can get most of the way just from looking at the weights of the corresponding representations. Let $W$ be simple with highest weight $\mu$, and let $\mu^*$ be the highest weight of $W^*$. Let ${\rm wt}(W)$ be the set of weights of $W$, and let $Q$ be the root lattice for your Lie group. Then ${\rm wt}(W) \subseteq \mu + Q$ and $ {\rm wt}(W^*) \subseteq \mu^* + Q$, and so ${\rm wt}(W \otimes W^*) \subseteq \mu + \mu^* + Q$ is again contained within some coset of $Q$ in the weight lattice $P$. But we know that $0$ is a weight of $W \otimes W^*$, and so $\mu + \mu^* + Q = Q$. Which is to say that the weights of $W \otimes W^*$ are all roots. up vote 1 down In particular, if $V$ is semisimple with highest weight that is not a root, then it is certainly the case that $\hom(V, W\otimes W^*) = 0$. This handles e.g. the defining representations of vote ${\rm SL}(n)$. When ${\rm wt}(V) \subseteq Q$, I'm not sure of the answer. I think you're answering a different question than I'm posing. How is highest weight theory applicable to finite group representations? – Qiaochu Yuan Apr 25 '10 at 21:13 Yes, when wt(V) is in Q it's automatically a summand of something of the form WW*. For one proof see Claim on the bottom of page 17 of arxiv.org/abs/0810.0084 There's an interesting related result about fusion categories having a "universal grading group" whose trivial part is exactly the summands of WW* see arxiv.org/abs/math/0610726 – Noah Snyder Apr 25 '10 at 21:17 @QY: I must have misunderstood the question. Oh, I see: I missed the words "if G is a finite subgroup of" and thought you were ust interested in the representation theory of SU(2). My bad. – Theo Johnson-Freyd Apr 25 '10 at 23:08 add comment Not the answer you're looking for? Browse other questions tagged rt.representation-theory gr.group-theory finite-groups or ask your own question.
{"url":"http://mathoverflow.net/questions/22530/for-a-representation-v-of-a-finite-group-g-when-is-homw-wv-trivial-for-all/22540","timestamp":"2014-04-19T15:26:35Z","content_type":null,"content_length":"89194","record_id":"<urn:uuid:ca04ce04-2d62-44f4-a6eb-84c67b3fbc52>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
RSS SAS Short Course: Module 4 VI. SAS Procedures The following covers some of the most commonly used SAS procedures with which you can run some basic statistical analyses. Go to File, Import Data... to import the Example Data 1 file using the Import Wizard with SPSS File (*.sav) source and member name example1 as was done previously. Before we really begin; you should consider the use of the OPTIONS statement when submitting any program (i.e. syntax). The options statement can be tacked on to just about any program or procedure. What the options statement does is allow you to control the number of characters per line and lines per page of the output generated by the program or procedure to which the options statement is included. The generic form of the options statement follows: OPTIONS LINESIZE=x PAGESIZE=y; The x refers to the number of characters per line and the y refers to the number of lines per page. The reason the options statement is mentioned here is because, SAS can be quite costly in terms of the amount of output generated when one considers printing it or copying and pasting it into a word processing program. For instance, the sixth edition of the Publication Manual of the American Psychological Association (APA) generally recommends using Times New Roman 12 point font on a page with 1 inch margins at top, bottom, left, and right. This configuration in Microsoft Word results in a page that contains approximately 78 characters per line and 46 lines per page. Therefore, if you are accustom to using the APA Publication Manual guidelines for formatting documents, you may want to use an options statement to configure each SAS output so that it fits neatly on a pre-formatted document page. An example of the use of the options statement is provided in the syntax for the PROC PRINT example below -- noticeable because, like all usable syntax on these web pages, it is shown in bold Courier New 10 point font on the web page. 1. PROC PRINT PROC PRINT is frequently used to check the data being read by SAS. It prints out the observations in a SAS data set, using any or some of the variables. The complete syntax for PROC PRINT is as PROC PRINT DATA= SAS-data-set SPLIT= 'split-character' HEADING= direction ROWS= page-format WIDTH= column-width; VAR variable-list; ID variable-list; BY variable-list; PAGEBY BY-variable; SUMBY BY-variable; SUM variable-list; The most common use is to have the PROC PRINT following the data step to verify the data: For the current example with ExampleData1.sav (using member name example1 in SAS); use the following syntax (with optional OPTIONS statement included): PROC PRINT DATA=example1; OPTIONS LINESIZE=78 PAGESIZE=46; 2. PROC CONTENTS This procedure prints descriptions of the contents of one or more files from a SAS library. Another common procedure to verify the data set read into SAS library, especially for a sizeable data set. It is crucial, for example, to check if all observations and variables are read in correctly. PROC CONTENTS prints descriptions of the contents of one or more files from a SAS data library. It is useful for documenting permanent SAS data sets (library members of DATA type). Specific information pertaining to the physical characteristics of a member depends on whether the file is a SAS data set or another type of SAS file. PROC CONTENTS <DATA= <libref.>member> <MEMTYPE= (mtype-list)> <OUT= SAS-data-set> For the current example: PROC CONTENTS DATA=example1; An often used command when first looking at data is the data command in conjunction with the label command to assign labels to variables. For the current example; we assign a new data step consisting of our data, but with some variables having been assigned labels. DATA example1a; SET example1; LABEL Sex ="Gender" recall1 ="Recall at time 1" recall2 ="Recall at time 2"; PROC CONTENTS DATA=example1a; 3. PROC MEANS PROC MEANS computes statistics for an entire SAS data set or for groups of observations in the data set. If you use a BY statement, PROC MEANS calculates descriptive statistics separately for groups of observations. Each group is composed of observations having the same values of the variables used in the BY statement. The groups can be further subdivided by the use of the CLASS statement. PROC MEANS can optionally create one or more SAS data sets containing the statistics calculated. The full syntax for PROC MEANS is as follows: PROC MEANS <option-list> <statistic-keyword-list>; VAR variable-list; BY variable-list; CLASS variable-list; FREQ variable; WEIGHT variable; ID variable-list; OUTPUT <OUT= SAS-data-set> <output-statistic-list> <MINID|MAXID <(var-1<(id-list-1)> We can get descriptive statistics for all of the variables using proc means as shown below. PROC MEANS DATA=example1; We can get descriptive statistics separately by gender (i.e., broken down by SEX) as shown below. PROC MEANS DATA=example1; CLASS Sex; We can get descriptive statistics on the outcome or dependent variable recall at time 1 (recall1) separately by gender (i.e., broken down by SEX) as shown below. PROC MEANS DATA=example1; CLASS Sex; VAR recall1; We can get descriptive statistics on recall1 separated by gender (i.e., broken down by SEX) and class standing (cl_st) as shown below. PROC MEANS DATA=example1; CLASS Sex cl_st; VAR recall1; We can also subset the data do get very specific descriptive statistics. For instance, if we review the output or know the numeric codes for each value of our variables, we can request a subset of the data (example1fj) be generated from the original data (example1) which contains only persons who are sex = 1 and cl_st = 3 which corresponds to females whose class standing is Junior. DATA example1fj; SET example1; IF sex='1'AND cl_st='3'; PROC MEANS DATA=example1fj; VAR recall1; We can verify we have gotten what we wanted by referring to the previous output showing descriptive statistics for males and female across all four levels of class standing. In both the current output and previous output we notice there were 27 females who were Juniors. 4. PROC UNIVARIATE This procedure is useful for basic descriptive statistics of the variables. It provides detail on the distribution of a variable. Features include: • detail on the extreme values of a variable • quartiles, such as the median • several plots to picture the distribution • frequency tables • a test to determine that the data are normally distributed. If a BY statement is used, descriptive statistics are calculated separately for groups of observations. PROC UNIVARIATE DATA= SASdataset PCTLDEF= value VARDEF= DF|WEIGHT|WGT|N|WDF ROUND= roundoff unit...; VAR variables; BY variables; FREQ variable; WEIGHT variable; ID variables; OUTPUT OUT= SASdataset keyword= names...; We can get detailed descriptive statistics for family income using proc univariate as shown below. PROC UNIVARIATE DATA=example1; VAR fam_income; We can also use PROC UNIVARIATE to get conditional univariate summaries using the 'by' command; but first, we need to sort the 'by variable'. PROC SORT DATA=example1; BY Sex; PROC UNIVARIATE DATA=example1; BY Sex; VAR recall1; Another very handy function which can be performed with PROC UNIVARIATE is identification of outliers. To accomplish this, we insert two optional commands or statements into the basic proc univariate syntax. These optional statements are NORMAL and PLOT. PROC UNIVARIATE DATA=example1 NORMAL PLOT; VAR recall1; ID id; In the preceding syntax, we ran a PROC UNIVARIATE program on recall at time 1 (recall1) and use values of the variable participant identification (id) to IDENTIFY (ID) outlying values of recall1. In the next syntax we perform the same basic procedures, but separately for each gender (produces 7 pages of output). PROC UNIVARIATE DATA=example1 NORMAL PLOT; BY Sex; VAR recall1; ID id; 5. PROC FREQ The procedure produces one-way to n-way frequency and crosstabulation tables. It shows the distribution of variable values and crosstabulation tables with combined frequency distributions for two or more variables. For one-way tables, PROC FREQ can compute chi-square tests for equal or specified proportions. For two-way tables, PROC FREQ computes tests and measures of association. For n-way tables, PROC FREQ does stratified analysis, computing statistics within as well as across strata. PROC FREQ options; OUTPUT <OUT= SAS-data-set><output-statistic-list>; TABLES requests / options; WEIGHT variable; EXACT statistic-keywords; BY variable-list; We can get a frequency distribution of age using proc freq as shown below. PROC FREQ DATA=example1; TABLES age; We can make a two way table showing the frequencies for class standing by sex as shown below. PROC FREQ DATA=example1; TABLES cl_st * Sex; Labeling values is a two step process. First, we must create the label formats with proc format using a value statement. Next, we attach the label format to the variable with a format statement. This format statement can be used in either proc or data steps. An example of the proc format step for creating the value formats on class standing (cl_st) follows. VALUE cl_stf 1="Fre" Now that the format for class standing (cl_st) have been created, they must be linked to the variable class standing. This is accomplished by including a format statement in either a proc or a data step. In the program below the format statement is used in a proc freq to change 'cl_st'. PROC FREQ DATA=example1; FORMAT cl_st cl_stf.; TABLES cl_st; 6. PROC TABULATE PROC TABULATE constructs tables of descriptive statistics using class variables, analysis variables, and keywords for statistics. Tables can have one to three dimensions: column; row and column; or page, row, and column. The statistics that PROC TABULATE computes are many of the same statistics computed by other descriptive procedures such as MEANS, FREQ, and SUMMARY. In order for PROC TABULATE to execute, you need either a CLASS or VAR statement, and a TABLE statement. There are no default variables chosen for the procedure. PROC TABULATE <option-list>; CLASS class-variable-list; VAR analysis-variable-list; FREQ variable; WEIGHT variable; FORMAT variable-list-1 format-1 <...variable-list-n format-n>; LABEL variable-1='label-1' <...variable-n='label-n'>; BY <NOTSORTED> <DESCENDING> variable-1 <...<DESCENDING> VARIABLE-N>; TABLE <<page_expression,> row_expression,> column_expression </ table-option-list>; KEYLABEL keyword-1 ='description-1' We can create a basic table of individuals' recall at time 2 (recall2) by gender (sex). PROC TABULATE DATA=example1; CLASS sex; VAR recall2; TABLE (recall2)*mean, sex; 7. PROC GCHART & PROC GPLOT Making a simple graph in SAS. We can make a simple vertical bar chart; with recall at time 1. Because recall 1 is a continuous variable, SAS automatically assigns five bins. TITLE 'Simple Vertical Bar Chart '; PROC GCHART DATA=example1; VBAR recall1; You can control the number of bins for a continuous variable with the level= option on the vbar statement. The program below creates a vertical bar chart with seven bins for recall1. TITLE 'Bar Chart - Control Number of Bins'; VBAR recall1/LEVELS=9; On the other hand, cl_st has only four categories and SAS's tendency to bin into five categories and use midpoints would not do justice to the data. So when you want to use the actual values of the variable to label each bar you will want to use the discrete option on the vbar statement. We can make a bar chart showing the frequencies of family income as shown below. TITLE 'Bar Chart with Discrete Option'; PROC GCHART DATA=example1; VBAR cl_st/DISCRETE; Simply changing 'VBAR' to 'HBAR' will produce the same graph horizontally opposed to vertically. TITLE 'Bar Chart with Discrete Option'; PROC GCHART DATA=example1; HBAR cl_st/DISCRETE; We can create a variety of scatter plots using the PROC PLOT function. It allows us to see the relationship between two continuous variables. The program below creates a scatter plot for recall2 * recall1. This means that recall2 will be plotted on the vertical axis, and recall1 will be plotted on the horizontal axis. TITLE 'Scatterplot - Two Variables'; PROC GPLOT DATA=example1; PLOT recall2*recall1; You may want to examine the relationship between two continuous variables and see which points fall into one or another category of a third variable. The program below creates a scatter plot for recall2*recall1 with each gender (Sex) marked. You specify recall2*recall1=Sex on the plot statement to have each level of sex identified on the plot. TITLE 'Scatterplot - Male/Female Marked'; PROC GPLOT DATA=example1; PLOT recall2*recall1=Sex; The program below creates a scatter plot for recall2*recall1 with each level of Sex marked. The proc gplot is specified exactly the same as in the previous example. The only difference is the inclusion of symbol statements to control the look of the graph through the use of the operands V=, I=, and C=. SYMBOL1 V=circle C=black I=none; SYMBOL2 V=star C=red I=none; TITLE 'Scatterplot - Different Symbols'; PROC GPLOT DATA=example1; PLOT recall2*recall1=Sex; Symbol1 is used for the lowest value of Sex and symbol2 is used for the next lowest value. V= controls the type of point to be plotted. We requested a circle to be plotted for domestic cars, and a star (asterisk) for males. I= none causes SAS not to plot a line joining the points. C= controls the color of the plot. We requested black for females, and red for males. (Sometimes the C= option is needed for any options to take effect.) To plot a regression line along with the points we use the I operand of the symbol statement. The program below creates a scatter plot for recall2*recall1 with such an OLS regression line. The regression line is produced with the I=R operand on the symbol statement. SYMBOL1 V=circle C=blue I=r; TITLE 'Scatterplot - With Regression Line '; PROC GPLOT DATA=example1; PLOT recall2*recall1;
{"url":"http://www.unt.edu/rss/class/Jon/SAS_SC/SAS_Module4.htm","timestamp":"2014-04-19T01:49:23Z","content_type":null,"content_length":"31172","record_id":"<urn:uuid:7a4f3d8c-4f0e-448b-b8bf-c1b2d9addb2d>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
pdf of function of two random variables (It is little difficult) November 23rd 2009, 04:16 AM #1 Nov 2009 cdf of function of two random variables (It is little difficult) I am Ph.D. student studying electrical engineering. I have a question related with cdf of function of two random variables. The question is attached in this post as pdf file. Please, give me an answer. Last edited by kuywam; November 23rd 2009 at 04:27 AM. (a) is ... weird Oo First thing : if X and Y are independent, you can write the product $f_X(x')f_Y(y')$ if not, you have to find the joint pdf of X and Y. Second thing : The whole formula is not very correct... Assuming they're independent : \begin{aligned}<br /> P\left(\tfrac{X}{1+Y}\leq z\right)=P(X \leq z(1+Y))<br /> &=P\left(Y\geq \tfrac{X-z}{z}\right) \\<br /> &=\int_{x'=0}^\infty \int_{y'=0}^{\tfrac{x'-z}{z}} f_Y(y')f_X(x') ~dy' ~dx' \quad (*) \\<br /> &=\int_{x'=0}^\infty f_X(x')\left(\int_{y'=0}^{\tfrac{x'-z}{z}} f_Y(y') ~dy'\right) ~dx' \\<br /> &=\int_{x'=0}^\infty f_Y(y') F_Y\left(\tfrac{x'-z}{z}\right) ~dx \\ <br /> &=\mathbb{E}\left[F_Y\left(\tfrac{X-z}{z}\right)\right]<br /> \end{aligned} For the boundaries in (*), that's using the fact that $P\left(Y\geq \frac{X-z}{z}\right)=\mathbb{E}\left[\bold{1}_{Y\geq \frac{X-z}{z}}\right]$ and that for any measurable function h, $\mathbb{E}[h(X,Y)]=\iint h(x,y)f_X(x)f_Y(y) ~dx~dy$ if X and Y are independent.* erm... good luck How did you get your equality (a) ? As an addition to Moo's post, I think what you were looking for is the following formula: (looks very much like (b), but this one is correct) $P\left(\frac{X}{1+Y}\leq z\right)=$$P(X\leq z(1+Y))=\int_0^{\infty} P(X\leq z(1+y))f_Y(y)dy = \int_0^\infty F_X(z(1+y))f_Y(y)dy$ (assuming X and Y are independent) Thank you so much...^^ Thank you, moo... Thank you, Laurent... I will not forget your kind help...^^ November 23rd 2009, 10:07 AM #2 November 23rd 2009, 11:35 AM #3 MHF Contributor Aug 2008 Paris, France November 23rd 2009, 03:28 PM #4 Nov 2009 November 23rd 2009, 06:38 PM #5
{"url":"http://mathhelpforum.com/advanced-statistics/116277-pdf-function-two-random-variables-little-difficult.html","timestamp":"2014-04-21T00:56:00Z","content_type":null,"content_length":"47950","record_id":"<urn:uuid:1cb83cdb-a55d-46f6-a339-1617b6432a25>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
Minor Program in Statistics For further information about the Statistics Minor, contact the Minor Coordinator, Weiwen Miao. Statistics Minor Requirements The requirements for minoring in statistics are: 1. One of the following courses (Introduction to Statistics): Stat203/Econ204/Psyc200/Soci215; 2. Stat286 (Applied Multivariate Statistical Analysis); 3. Math218 (Probability); 4. Math215 (Linear Algebra); 5. Math121 or Math216 (Multivariable Calculus); 6. One of the following: □ Stat328 (Mathematical Statistics) □ Stat396 (Advanced Topics in Probability and Statistics) □ Economics304 (Econometrics) □ Sociology320 (Advanced Quantitative Methods for Sociologists) 1. A math minor can also be a statistics minor. If a student wants to be a math minor and a statistics minor, the following courses: Stat203, Econ204, Math218, Stat286, Stat328 and Stat396, cannot be counted to satisfy both the math minor and statistics minor. 2. A math major can also be a statistics minor. If a student wants to be a math major and a statistics minor, the following apply: □ Stat203, Econ204 and Stat286 cannot be counted to satisfy both the math major and statistics minor requirement; □ At most one of the following courses can be counted to satisfy both the math major and statistics minor: Math218, Stat328 and Stat396. 3. Math majors with economics concentration: If a math major wants to be an econ concentrator and a statistics minor, Math218, Stat286, Stat328 and Stat396 cannot be counted toward both the econ concentration and the statistics minor. 4. Economics majors with math concentration: If an economics major wants to be a math concentrator and also a statistics minor, the following apply: □ Math218, Stat286, Stat328 and Stat396, cannot be counted to satisfy both the stat minor and the math concentration requirement. □ Econ304 cannot be counted toward statistics minor. (Econ304 is required by economics major.) Last modified: Mon Oct 22 17:09:45 EDT 2012 by David Lippel.
{"url":"http://www.haverford.edu/mathematics/academic_programs/statistics_minor.php","timestamp":"2014-04-18T14:00:26Z","content_type":null,"content_length":"16922","record_id":"<urn:uuid:1b2f18c3-0bee-4fd5-be15-74b1d9228ec2>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Steady-state bifurcation with Euclidean symmetry Melbourne, Ian (1999) Steady-state bifurcation with Euclidean symmetry Transactions of the American Mathematical Society, 4. pp. 1575-1603. Download (451Kb) We consider systems of partial differential equations equivariant under the Euclidean group E(n) and undergoing steady-state bifurcation (with nonzero critical wavenumber) from a fully symmetric equilibrium. A rigorous reduction procedure is presented that leads locally to an optimally small system of equations. In particular, when n = 1 and n = 2 and for reaction-diffusion equations with general n, reduction leads to a single equation. (Our results are valid generically, with perturbations consisting of relatively bounded partial differential operators.) In analogy with equivariant bifurcation theory for compact groups, we give a classification of the different types of reduced systems in terms of the absolutely irreducible unitary representations of E(n). The representation theory of E(n) is driven by the irreducible representations of O(n - 1). For n = 1, this constitutes a mathematical statement of the `universality' of the Ginzburg-Landau equation on the line. (In recent work, we addressed the validity of this equation using related techniques.) When n = 2, there are precisely two significantly different types of reduced equation: scalar and pseudoscalar, corresponding to the trivial and nontrivial one-dimensional representations of O(1). There are infinitely many possibilities for each n equal to or greater than 3. Item Type: Article Additional Information: First published in Transactions of the American Mathematical Society, 4, 1575-1603, published by the American Mathematical Society. © 1999 American Mathematical Society. Divisions: Faculty of Engineering and Physical Sciences > Mathematics Depositing User: Mr Adam Field Date Deposited: 27 May 2010 14:41 Last Modified: 23 Sep 2013 18:33 URI: http://epubs.surrey.ac.uk/id/eprint/1465 Actions (login required) Downloads per month over past year Information about this web site
{"url":"http://epubs.surrey.ac.uk/1465/","timestamp":"2014-04-21T02:55:38Z","content_type":null,"content_length":"28654","record_id":"<urn:uuid:f5178bfe-4481-4b60-a992-aaabec91f246>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: help figuring it out what this is? C3N6H6 • one year ago • one year ago Best Response You've already chosen the best response. Best Response You've already chosen the best response. Best Response You've already chosen the best response. That's a tough one. I haven't done IR, NMR in a while. I'll try my best to help you out tho. I think the two 3400 peaks means you have a primary amine, while the 3300 suggests a 2ndary amine. The 3130 suggests an alkene. I can't use the drawing tool right now, but I think you have 2 amines on primary (opposite). I'm not sure how to read the 2nd chart, the esims one. What is that? Best Response You've already chosen the best response. I think the best way is to discuss this, kind of like they do...idk if they have that at your school, "discussion course" for each of your classes, where a TA gives you problems and you just discuss them amongst a smaller class of like abt 20 students. Since my courses usually all had 200+ students. Best Response You've already chosen the best response. 1H NMR shoed three peaks mean three different type of protons are there Best Response You've already chosen the best response. one H is above 6 mean it is to do with aromatic Proton Best Response You've already chosen the best response. but u have only C3 then i t cant be aromatic Best Response You've already chosen the best response. LOL :( its too difficult to discuss it here as @abb0t said Best Response You've already chosen the best response. thanks for the input guys, il take it into consideration. My class is only about 50 people but this is for a problem set which were technically not allowed to discuss it, so i can't bring it up on the school website lol the ESI-MS is from an electrospray ionization mass spectrometer btw. Best Response You've already chosen the best response. good Atlast you got the anser or nooo LOL:) Best Response You've already chosen the best response. Well, im taking a look at the IR again and it looks like those are 2 single peaks, so I think they're 2ndary amines and I think aromatic due to the unsaturation of the formula. I'm going to take a guess at the structure: |dw:1365090552424:dw| Best Response You've already chosen the best response. DUDE thats totally it. i reverse looked it up and found an IR from that same molecule. thanks a bunch Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/515c9c26e4b07077e0c2014e","timestamp":"2014-04-19T02:23:52Z","content_type":null,"content_length":"63272","record_id":"<urn:uuid:e1e242e0-7f61-4448-a9f3-341632bb8d8f>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
Avon, MA Science Tutor Find an Avon, MA Science Tutor ...I am trained in several multi-step reading programs, 6+ traits of writing, Dibles Reading, as well as many types of math programs. Along with these trainings, I have also strongly developed the skills of reading IEP's and consulting with schools. I look forward to working with you or your child in a variety of academic areas. 7 Subjects: including psychology, reading, elementary (k-6th), special needs ...I love the enthusiasm of elementary science students. I encourage them to ask questions, put information together to draw conclusions, create and read graphs and charts and consider what it means, what happens next and why that matters. I have not taught elementary school but I have tutored elementary students K-6 in all subjects, but mostly reading, spelling and writing. 33 Subjects: including ACT Science, biology, psychology, English ...My teaching philosophy is simply that every student can learn, achieve and succeed in mathematics if they have the correct, positive attitude and are willing to work hard. I am passionate about teaching mathematics and enjoy seeing students develop excellent study and learning abilities over tim... 13 Subjects: including physics, calculus, statistics, geometry ...While earning my degree in chemistry, my concentration was in physical chemistry. I took three semesters of honors physics, one year of physical chemistry, and one year of graduate level advanced physical chemistry. My senior research dealt with the contribution of electron tunneling to the reaction mechanisms of light-activated inorganic metal complexes. 3 Subjects: including chemistry, physics, organic chemistry ...Whether you want to solidify your knowledge and get ahead or get a fresh perspective if your are struggling, I am confident I can help you. I have the philosophy that anything can be understood if it is explained correctly. Teachers and professors can get caught up using too much jargon which can confuse students. 19 Subjects: including chemistry, physics, physical science, biology Related Avon, MA Tutors Avon, MA Accounting Tutors Avon, MA ACT Tutors Avon, MA Algebra Tutors Avon, MA Algebra 2 Tutors Avon, MA Calculus Tutors Avon, MA Geometry Tutors Avon, MA Math Tutors Avon, MA Prealgebra Tutors Avon, MA Precalculus Tutors Avon, MA SAT Tutors Avon, MA SAT Math Tutors Avon, MA Science Tutors Avon, MA Statistics Tutors Avon, MA Trigonometry Tutors
{"url":"http://www.purplemath.com/avon_ma_science_tutors.php","timestamp":"2014-04-18T19:00:20Z","content_type":null,"content_length":"24023","record_id":"<urn:uuid:1dca7f80-1c4e-41b4-af7a-f0a201dec15b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
Prediction: the Lasso vs. just using the top 10 predictors Posted on February 23, 2012 by admin One incredibly popular tool for the analysis of high-dimensional data is the lasso. The lasso is commonly used in cases when you have many more predictors than independent samples (the n « p) problem. It is also often used in the context of prediction. Suppose you have an outcome Y and several predictors X[1],…,X[M], the lasso fits a model: Y = B[0] + B[1] X[1] + B[2] X[2] + … + B[M] X[M] + E subject to a constraint on the sum of the absolute value of the B coefficients. The result is that: (1) some of the coefficients get set to zero, and those variables drop out of the model, (2) other coefficients are “shrunk” toward zero. Dropping some variables is good because there are a lot of potentially unimportant variables. Shrinking coefficients may be good, since the big coefficients might be just the ones that were really big by random chance (this is related to Andrew Gelman’s type M errors). I work in genomics, where n«p problems come up all the time. Whenever I use the lasso or when I read papers where the lasso is used for prediction, I always think: “How does this compare to just using the top 10 most significant predictors?” I have asked this out loud enough that some people around here started calling it the “Leekasso” to poke fun at me. So I’m going to call it that in a thinly veiled attempt to avoid Stigler’s law of eponymy (actually Rafa points out that using this name is a perfect example of this law, since this feature selection approach has been proposed before at least once). Here is how the Leekasso works. You fit each of the models: Y = B[0] + B[k]X[k] + E take the 10 variables with the smallest p-values from testing the B[k] coefficients, then fit a linear model with just those 10 coefficients. You never use 9 or 11, the Leekasso is always 10. For fun I did an experiment to compare the accuracy of the Leekasso and the Lasso. Here is the setup: • I simulated 500 variables and 100 samples for each study, each N(0,1) • I created an outcome that was 0 for the first 50 samples, 1 for the last 50 • I set a certain number of variables (between 5 and 50) to be associated with the outcome using the model X[i] = b[0i] + b[1i]Y + e (this is an important choice, more later in the post) • I tried different levels of signal to the truly predictive features • I generated two data sets (training and test) from the exact same model for each scenario • I fit the Lasso using the lars package, choosing the shrinkage parameter as the value that minimized the cross-validation MSE in the training set • I fit the Leekasso and the Lasso on the training sets and evaluated accuracy on the test sets. The R code for this analysis is available here and the resulting data is here. The results show that for all configurations, using the top 10 has a higher out of sample prediction accuracy than the lasso. A larger version of the plot is here. Interestingly, this is true even when there are fewer than 10 real features in the data or when there are many more than 10 real features ((remember the Leekasso always picks 10). Some thoughts on this analysis: 1. This is only test-set prediction accuracy, it says nothing about selecting the “right” features for prediction. 2. The Leekasso took about 0.03 seconds to fit and test per data set compared to about 5.61 seconds for the Lasso. 3. The data generating model is the model underlying the top 10, so it isn’t surprising it has higher performance. Note that I simulated from the model: X[i] = b[0i] + b[1i]Y + e, this is the model commonly assumed in differential expression analysis (genomics) or voxel-wise analysis (fMRI). Alternatively I could have simulated from the model: Y = B[0] + B[1] X[1] + B[2] X[2] + … + B[M] X [M] + E, where most of the coefficients are zero. In this case, the Lasso would outperform the top 10 (data not shown). This is a key, and possibly obvious, issue raised by this simulation. When doing prediction differences in the true “causal” model matter a lot. So if we believe the “top 10 model” holds in many high-dimensional settings, then it may be the case that regularization approaches don’t work well for prediction and vice versa. 4. I think what may be happening is that the Lasso is overshrinking the parameter estimates, in other words, you give up too much bias for a gain in variance. Alan Dabney and John Storey have a really nice paper discussing shrinkage in the context of genomic prediction that I think is related. Recent Comments • David Quigley on Writing good software can have more impact than publishing in high impact journals for genomic statisticians • Karl Broman on Writing good software can have more impact than publishing in high impact journals for genomic statisticians • Josh Schraiber on Writing good software can have more impact than publishing in high impact journals for genomic statisticians This entry was posted in Uncategorized and tagged lasso, Leekasso, prediction, R. Bookmark the permalink.
{"url":"http://simplystatistics.org/2012/02/23/prediction-the-lasso-vs-just-using-the-top-10/","timestamp":"2014-04-20T11:38:29Z","content_type":null,"content_length":"26184","record_id":"<urn:uuid:23c9afba-53b3-44c1-8a99-7fa43bf637ff>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
MassJoin My Conversation | Join My ConversationMass Mass is a measure of how much matter an object is made of, and is related to weight. In physics it appears in the equation acceleration = force / mass which is one of Newton’s three famous laws. Notice that the larger the mass, the lesser the acceleration given the same force. If you push with the same force on a ’78 Impala and a shopping cart, the cart will accelerate much faster. In the demos so far mass has been left out of the equation: acceleration = force which is simply giving mass a value of 1. In animation, you may want to include mass in the equation so that you could give many objects the same behavior but different masses. For example, it would be useful in making collisions of objects of different sizes look realistic. When using different masses with gravity, the equation used should be the general gravity equation, not the specialized one given in surface gravity.
{"url":"http://www.jmckell.com/mass/","timestamp":"2014-04-19T06:52:07Z","content_type":null,"content_length":"32573","record_id":"<urn:uuid:420fe03c-917f-4b39-80d1-9a91b21fc110>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
Descriptions of and Information for MATH and CPSC Courses Descriptions for and other information regarding mathematics and computer science courses at Salem College. Mathematics and Computer Science Course Descriptions and Information The information provided below is intended as a guide for students as they plan their time at Salem College. Please refer to the most current edition of the Salem College Undergraduate Catalog for the most accurate information regarding courses in mathematics and computer science. MATH 020. College Algebra (1 course) Structure of algebraic properties of real numbers, polynomials and their roots, rational expressions, exponents and radical expressions, binomial theorem, solution of equations and inequalities, properties of functions and graphing. The course is designed to prepare first-year students for further mathematics courses, such as MATH 025 and MATH 070. Some familiarity with basic algebra is expected. Not included in the major. Prerequisite: placement. Note: This course does not satisfy the Salem Signature Liberal Arts Disciplinary Requirement in Mathematics or the Salem Signature requirement in Quantitative Interpretation/Evidence Based Thinking. This course is usually taken by students whose degree programs require a course in Calculus, either MATH 070 or MATH 100. This course is the minimum prerequisite for BUAD 240: Business Statistics, CHEM 110: General Chemistry, and SOCI 215: Social Statistics. MATH 025. Elementary Functions and Graphs (1 course) Functions, including the trigonometric functions, exponential functions and logarithmic functions, will be studied in detail. In addition, topics in analytic geometry, including conic sections and solutions of systems of equations using matrices will be covered. This course is designed to prepare the student for calculus. Prerequisite: MATH 020 or placement. Note: This course does not satisfy the Salem Signature Liberal Arts Disciplinary Requirement in Mathematics or the Salem Signature requirement in Quantitative Interpretation/Evidence Based Thinking. This course is usually taken by students whose degree programs require MATH 100. Placement in MATH 025 or higher satisfies the prerequisite for BIOL 100: Cell and Molecular Biology. Completion or placement out of this course satisfies a prerequisite for BIOL 205: Biometry. Placement in this course satisfies the prerequisite for CHEM 110: General Chemistry MATH 060. Introduction to Finite Mathematics (1 course) A course in mathematics that is applicable in a variety of fields, including business, accounting and the social sciences. Topics include sets, Venn diagrams, probability, statistics, linear functions, linear regression, systems of linear equations and matrix algebra. Applications are used throughout the course. Other topics such as graphic linear programming, the Simplex method, the mathematics of finance, the game theory, logic and Markov processes may be included at the discretion of the instructor. Some familiarity with basic algebra is expected. Prerequisite: one year of high school algebra or placement. Note: This course satisfies the Salem Signature Liberal Arts Disciplinary Requirement in Mathematics. This course satisfies a prerequisite for BUAD 240: Business Statistics, EXER 310: Exercise Physiology, EXER 320: Biomechanics of Sport and Exercise, EXER 330: Measurement, Assessment and Evaluation of Exercise and Sport. MATH 070. Essential Calculus (1 course) An algebra-intensive introduction to calculus with emphasis on applications to business, accounting and social sciences. Derivatives and integrals of polynomial, rational and exponential and logarithmic functions will be discussed. Applications include optimization, price elasticity of demand, point of diminishing returns and producer and consumer surplus. Not included in the mathematics major. Students may not receive credit for both MATH 070 and MATH 100. Prerequisite: MATH 020 or placement. Note: This course satisfies the Salem Signature Liberal Arts Disciplinary Requirement in Mathematics and the Interdisciplinary Dimensions Requirement in Quantitative Interpretation/Evidence-Based Thinking. This course also satisfies the requirements for the BS in Accounting, the BA in Biology, the BS and the BSBA in Business Administration, the BA in Economics, and the BS in Exercise Science. This course satisfies a prerequisite for ACCT 140: Intermediate Accounting I, a prerequisite for BIOL 205: Biometry, a prerequisite for BIOL 230: Genetics, BIOL 290: Honors Independent Study in Biology, BUAD 240: Business Statistics, FINC 302: Corporate Finance, CHEM 207: Solutions, . Placement in this course satisfies the prerequisites for CHEM 110: General Chemistry; completion of this course satisfies a prerequisite for ECON 250: Mathematical Economics, ECON 320: Econometrics, and PHYS 201: General Physics I. MATH 100. Calculus I (1 course) Functions, limits, continuity, the derivative and its applications and The Fundamental Theorem of Calculus. Prerequisite: Placement or a grade of C or better in MATH 025. Note: This course satisfies the Salem Signature Liberal Arts Disciplinary Requirement in Mathematics and the Interdisciplinary Dimensions Requirement in Quantitative Interpretation/Evidence-Based Thinking. This course also satisfies requirements for the BS in Accounting, the BS in Biochemistry, the BS in Biology, the BS and the BSBA in Business Administration, the BS in Chemistry, the BA in Economics, the BA in Teaching Schools and Society with the Mathematics Concentration, the BA in Environmental Studies with the Conservation Ecology Concentration, and the BS in Exercise Science. This course satisfies a prerequisite for ACCT 140: Intermediate Accounting I, a prerequisite for BIOL 205: Biometry, a perquisite for BIOL 230: Genetics, a prerequisite for BIOL 290: Honors Independent Study in Biology, BUAD 240: Business Statistics, FINC 302: Corporate Finance. Placement in this course satisfies the prerequisite for CHEM 110: General Chemistry, ECON 250: Mathematical Economics, ECON 320: Econometrics, and PHYS 210: General Physics I. MATH 101. Calculus II (1 course) Applications of the integral, integration techniques, inverse trigonometric functions, exponential and logarithmic functions, L’Hopital’s Rule, improper integrals, conic sections, parametric and polar equations. Prerequisite: MATH 100. Note: This course satisfies the Salem Signature Interdisciplinary Dimensions Requirement in Quantitative Interpretation/Evidence-Based Thinking. This course also satisfies the requirements for the BS in Biochemistry, the BS in Chemistry, the BA in Teaching Schools and Society with the Mathematics Concentration, and the BA in Environmental Studies with the Computational Environmental Analysis MATH 102. Calculus III (1 course) Infinite series, vectors and vector algebra, surfaces in space, lines and planes in space, vector valued functions and an introduction to partial differentiation. Prerequisite: MATH 101. Note: This course satisfies the Salem Signature Interdisciplinary Dimensions Requirement in Quantitative Interpretation/Evidence-Based Thinking. This course also satisfies the requirements for the BS in Chemistry. It is a prerequisite for CHEM 311: Physical Chemistry I. MATH 103. Calculus IV (1 course) Partial differentiation, properties of the gradient, optimization of multivariate functions, the method of Lagrange multipliers, multiple integrals in rectangular spherical and cylindrical coordinates, vector fields, line and surface integrals, Greens Theorem, the Divergence Theorem and Stokes theorem. An introduction to differential equations may also be included. Prerequisite: MATH Note: This course satisfies the Salem Signature Interdisciplinary Dimensions Requirement in Quantitative Interpretation/Evidence-Based Thinking. MATH 110. Introductory Linear Algebra (1 course) Vector methods in geometry, real vector spaces, systems of linear equations, linear transformations and matrices, equivalence of matrices and determinants. Prerequisite: MATH 101. Note: This course also satisfies the requirements for the BA in Teaching Schools and Society with the Mathematics Concentration. MATH 122. Probability (1 course) Probability theory, including discrete and continuous random variables, moments and moment-generating functions, bivariate distributions, the Central Limit Theorem, Chebychev’s Inequality and the Law of Large Numbers. Required for secondary certificate. Prerequisite: MATH 101. Note: This course also satisfies the requirements for the BA in Teaching Schools and Society with the Mathematics Concentration. MATH 132. Mathematical Statistics (1 course) A calculus-based treatment of both descriptive and inferential statistics. Topics will include organizing data, sampling distributions, hypothesis testing, estimation theory, regression, correlation and analysis of variance. Emphasis will be placed on both theory and applications. Prerequisite: MATH 122. MATH 140. Introduction to Numerical Analysis (1 course) Solutions of equations in one variable, interpolation and polynomial approximation, numerical differentiation and integration, solutions of linear systems and initial value problems for ordinary differential equations. Examples will be taken from the physical and biological sciences. Prerequisite: MATH 102 and CPSC 140. Offered as needed. MATH 142. Statistical Methods with R (1 course) This course presents statistical inference with a focus on statistical computing in the R environment. Topics include: graphical representations of data; measures of central tendency and dispersion; binomial, normal, Student’s t, chi2- and F-distributions as they apply to inferential statistics; sampling methods; linear and multi-linear regression, correlation; hypothesis testing; analysis of variance. Three lectures and a two-hour laboratory per week. Prerequisite: MATH 100; CPSC 140 strongly recommended. Note: This course also satisfies the requirements for the BA in Teaching Schools and Society with the Mathematics Concentration, and the BA in Environmental Studies core. This course also satisfies the requirements for the Computational Analysis Concentration and the Conservation Ecology Concentration of the Environmental Studies major. MATH 162. Mathematics of Finance (1 course) This course covers the basic mathematical concepts in consumer-related instruments and derivative asset pricing. The mathematical formulas associated with consumer instruments, including effective rates of interest, annuities, sinking funds, and amortized loans, will be derived and explained in detail. A discussion of the principal assets traded in financial markets, such as Arbitrage Pricing Theory, will be followed by detailed explanations and derivations of the formulas associated with bond valuation, and the pricing of options and derivative securities in the contexts of binomial probability trees and the Black-Scholes option-pricing model. Both American- and European-style options are included in the course. Prerequisite: MATH 102. MATH 200. Independent Study in Mathematics (0.5 – 2 courses) Independent study under the guidance of a faculty advisor. Open to students with a 2.0 cumulative average and permission of the chair of department. Independent study may take the form of readings, research, conference, project and/or field experience. Independent study may be taken for a total of four courses, no more than two in any term. MATH 202. College Geometry (1 course) An axiomatic approach to the foundations of finite geometries, Euclidean, Hyperbolic and Elliptic geometries, transformational geometry in the plane, convexity and an introduction to topology. Additional topics, including graph theory, knot theory, fractal theory, projective geometry and Euclidean constructions, may also be included at the discretion of the instructor. Required for secondary certificate. Prerequisite: MATH 110. Note: This course also satisfies the requirements for the BA in Teaching Schools and Society with the Mathematics Concentration. MATH 210. Differential Equations (1 course) Basic theory of ordinary differential equations of first order and first degree with applications; linear differential equations and linear systems; operational methods, numerical methods, solutions in series, existence and uniqueness theorems. Prerequisite: MATH 101. Note: This course satisfies the Salem Signature Interdisciplinary Dimensions Requirement in Quantitative Interpretation/Evidence-Based Thinking. This course also satisfies the requirements for the BA in Environmental Studies with the Computational Environmental Analysis concentration. MATH 221. Modern Algebra (1 course) Elementary theory of groups, rings, integral domains and fields; properties of number systems; polynomials; and the algebraic theory of fields. Required for secondary certificate. Prerequisite: MATH Note: This course also satisfies the requirements for the BA in Teaching Schools and Society with the Mathematics Concentration. MATH 240. Topology (1 course) Point set topology, including basic topological properties, metric spaces, topological spaces and product spaces. Offered as needed. MATH 242. Nonparametric Statistical Methods (1 course) This course is an introduction to the methods of statistical analysis appropriate to categorical and other data when no assumptions are or can be made about the parent distribution of the data. The Wilcoxon Rank-Sum test and other rank tests, goodness of fit tests and signed tests will be discussed. Data sets will be included from marketing, sociology, biology, psychology and education. Computer usage required, though students may use whatever statistical computing environment with which they are familiar. Prerequisite: MATH 070 or 100 and either BIOL 205, BUAD 240, ECON 320, MATH 132, MATH 142, PSYC 101, or SOCI 215. Note: This course also satisfies the requirements for the BA in Environmental Studies with the Computational Environmental Analysis concentration and serves as an elective for the BA in Sociology. MATH 250. History of Mathematics (1 course) A general survey of the history and development of mathematical ideas and thought. Topics include Egyptian, Babylonian, Hindu-Indian, ancient Greek and Arabic mathematics, as well as mathematics from outside Western tradition. The birth of Calculus and selected topics from the 19^th and 20^th centuries will be included. Biographical and historical content will be supplemented by the study and application of techniques and procedures used in earlier eras. Thus, this will be a “working” course in which students will focus on doing sample problems in ways that illustrate important developments in mathematics. Prerequisite: MATH 101. MATH 270. Internship in Mathematics (1 course) An opportunity to use the knowledge and skills the student has learned in coursework to solve problems in a real work setting; the apprenticeship aspect of the internship implies that the student has some base of knowledge and will increase her knowledge and skills by direct contact with an experienced, knowledgeable mentor. Open to sophomores, juniors and seniors with a 2.0 cumulative average; maximum credit per term is (1 course); admission by application only. MATH 280. Special Topics in Mathematics (1 course) Investigation of a topic, issue or problem in mathematics. Topics might include mathematical modeling, dynamical systems, graphical programming. MATH 290. Honors Independent Study in Mathematics (1 course) Advanced independent study under the guidance of a faculty advisor. Normally open to juniors and seniors with a 3.5 average in mathematics. Subject to the approval of the chair of the department. Honors work may be taken for a maximum of two courses. MATH 321. Real Analysis (1 course) A rigorous treatment of the real number system, limits, continuity, sequences, series, differentiation and Riemann integration. Prerequisite: MATH 103. MATH 330. Complex Variables (1 course) The complex number system; complex-valued functions; limits and continuity; complex differentiation and analytic functions; complex integration and Cauchy Theory; infinite series. Prerequisites: MATH 102 and 110. CPSC 140. Introduction to Programming I (1 course) Computer programming in an object-oriented language such as Java for algorithmic problem solving. Programming concepts such as classes, objects, inheritance, variables and data types, methods, looping, strings, arrays, basic sorting, scientific computations and elementary drawing will be introduced. Requires competence in high school algebra. Note: This course is required for both the BA and the BS in Mathematics. It also satisfies the requirements for the BA in Environmental Studies with the Computational Environmental Analysis CPSC 141. Introduction to Programming II (1 course) Computer programming in an object-oriented language such as Java for algorithmic problem. Programming concepts not covered in Computer Science 140, such as collections, recursions, sorting, searching, input/output and exceptions, advanced drawing and elementary data structures will be introduced. Prerequisite: CPSC 140. Offered as needed.
{"url":"http://www.salem.edu/academics/undergrad/mathematics/math-and-cpsc-courses","timestamp":"2014-04-17T03:58:32Z","content_type":null,"content_length":"78942","record_id":"<urn:uuid:d5209f35-8b60-498b-8b6b-acee6d956e8b>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: February 2012 [00482] [Date Index] [Thread Index] [Author Index] Probability Distribution Function • To: mathgroup at smc.vnet.net • Subject: [mg125172] Probability Distribution Function • From: Niles <niels.martinsen at gmail.com> • Date: Sun, 26 Feb 2012 04:18:52 -0500 (EST) • Delivered-to: l-mathgroup@mail-archive0.wolfram.com I have a probablity distribution (Maxwell-Boltzmann) giving the probability of a classical particle having some velocity v. Now, what I have is a function to calculate the trajectory for a particle with some velocity v_i. I need to apply this function to the whole distribution. My question is regarding how I should do this. Originally what I had thought about doing is to partition the distribution into N small bins, and associate a velocity to each bin. My plan was then to calculate the trajectory for each velocity (= bin), and the "output-velocity" I weigh with the original probability/ 1) My first question is if this is a correct method I am using? 2) I have already implemented this is Mathematica. However, for some distinct bins some of the "output"-velocities are the same. So I need to figure out some way to add them up, which I don't find that easy. My problem is to determine how close two data points have to be in order to be binned together. Best regards,
{"url":"http://forums.wolfram.com/mathgroup/archive/2012/Feb/msg00482.html","timestamp":"2014-04-18T20:52:42Z","content_type":null,"content_length":"26013","record_id":"<urn:uuid:a7068f59-dd06-4bcb-b635-601fed99bb45>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
Pringle, PA Math Tutor Find a Pringle, PA Math Tutor ...I am also capable of teaching SAT Math, SAT Reading, ACT Math, ACT English, ACT Science, Trigonometry, Algebra 1 & 2, Pre-calculus, and up to Calculus 3. In addition, I have extensive Chemistry and Biology lab experience.I have passed Organic Chemistry 1 & 2 at Bloomsburg University. I also taught the material to many of my classmates. 18 Subjects: including algebra 2, geometry, precalculus, trigonometry ...I give at least one public speaking engagement a week and teach a 6-week class twice a week. I specialize in conversational public speaking. Organic Chemistry is the study of the chemistry mainly on carbon but can and does look at and require the chemistry of other elements. 10 Subjects: including geometry, precalculus, algebra 1, trigonometry ...I enjoy tutoring and find myself to be fair and reasonable with my prices. My main focus is to help students in need. I enjoy working with students either in a one on one or group setting. 80 Subjects: including algebra 2, biology, calculus, chemistry ...If there is something I do not know how to do, I will research until I understand it and will help you understand, too. I am a visual learner - this means that I learn better when I can SEE something. I tend to teach my own students this way, but I am also trained to recognize a student's own learning style and adapt my teaching to help them learn. 30 Subjects: including algebra 1, algebra 2, prealgebra, geometry ...The math courses I have taken so far are Algebra, Pre-Calculus, and Calculus I, all with a 4.0 GPA. My overall GPA is 3.981 and I am a member of the Phi Theta Kappa Honor Society. I previously worked for a cyber-school setting up and running field trips for students. 8 Subjects: including algebra 1, algebra 2, prealgebra, precalculus Related Pringle, PA Tutors Pringle, PA Accounting Tutors Pringle, PA ACT Tutors Pringle, PA Algebra Tutors Pringle, PA Algebra 2 Tutors Pringle, PA Calculus Tutors Pringle, PA Geometry Tutors Pringle, PA Math Tutors Pringle, PA Prealgebra Tutors Pringle, PA Precalculus Tutors Pringle, PA SAT Tutors Pringle, PA SAT Math Tutors Pringle, PA Science Tutors Pringle, PA Statistics Tutors Pringle, PA Trigonometry Tutors Nearby Cities With Math Tutor Courtdale, PA Math Tutors Dallas, PA Math Tutors Edwardsville, PA Math Tutors Exeter, PA Math Tutors Forty Fort, PA Math Tutors Kingston, PA Math Tutors Luzerne, PA Math Tutors Plymouth, PA Math Tutors Sugar Notch, PA Math Tutors Swoyersville, PA Math Tutors Warrior Run, PA Math Tutors West Pittston, PA Math Tutors West Wyoming, PA Math Tutors Wyoming, PA Math Tutors Yatesville, PA Math Tutors
{"url":"http://www.purplemath.com/Pringle_PA_Math_tutors.php","timestamp":"2014-04-18T13:58:50Z","content_type":null,"content_length":"23867","record_id":"<urn:uuid:8c40e3ac-8dd7-4f0c-9b9e-2aedf12e91d4>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
The purpose of the journal is to provide a forum for the publication of high quality research and tutorial papers in computational mathematics. In addition to the traditional issues and problems in numerical analysis, the journal also publishes papers describing relevant applications in such fields as physics, fluid dynamics, engineering and other branches of applied science. The journal strives to be flexible in the type of papers it publishes and their format. Equally desirable are: (i) Full papers, which should be complete and relatively self-contained original contributions with an introduction that can be understood by the broad computational mathematics community. Both rigorous and heuristic styles are acceptable. Of particular interest are papers about new areas of research, in which other than strictly mathematical arguments may be important in establishing a basis for further developments. (ii) Tutorial review papers, covering some of the important issues in Numerical Mathematics, Scientific Computing and their Applications. The journal will occasionally publish contributions which are larger than the usual format for regular papers. (iii) Short notes, which present specific new results and techniques in a brief communication.
{"url":"http://www.efluids.com/efluids/pages/j_midpages/applied_numerical_math.htm","timestamp":"2014-04-16T18:58:20Z","content_type":null,"content_length":"7524","record_id":"<urn:uuid:b3fd86cb-3725-4d8c-b5ef-5ac407b47dd7>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Interpretability in Q/CORRECTION Harvey Friedman friedman at math.ohio-state.edu Thu Dec 23 08:12:28 EST 2004 This is a CORRECTED copy of my previous email of 12/22/04 6:39PM. Hopefully, this will pass scrutiny, and can be compared with what Solovay On 12/20/04 6:05 AM, "Edward T. Dean" <edean at myway.com> wrote: > I have been skimming through Edward Nelson's _Predicative Arithmetic_ > recently, and he writes (at the tail end of Ch. 15) that he does not know the > answer to a certain compatibility problem regarding interpretability in > Robinson arithmetic: for formulas A and B, if both Q[A] and Q[B] are > interpretable in Q, then is Q[A,B] interpretable in Q? I'm just wondering if > anyone on FOM does know the answer, as the book is decently aged. The answer is no. In fact, I give a simple (and well studied) Pi01 sentence A such that Q[A] and Q[notA] are both interpretable in Q. THEOREM A. Let T be finitely axiomatized systems in predicate calculus with equality. The following are equivalent. i) EFA = ISigma0(exp) proves "T is cut free consistent"; ii) T is interpretable in Q. We write Con*(ISigma0) for "ISigma0 is cut free consistent". It is convenient to use Q', the obvious axiomatization of Q by a single universal sentence, using 0,S,+,x,<=,<,=, and cutoff subtraction. LEMMA 1. EFA proves Con*(ISigma0). LEMMA 2. EFA proves Q + Con*(ISigma0) is cut free consistent. Proof: Argue in EFA. Suppose 1 = 0 can be derived from Q + Con*(ISigma0) by a cut free proof. Then 1 = 0 can be derived from Q' + Con*(ISigma0) by a cut free proof. Hence Q' + Con*(ISigma0) is false. This contradicts Lemma 1. QED LEMMA 3. ISigma0 does not prove Con*(ISigma0). LEMMA 4. EFA proves the following. There is no cut free proof in ISigma0 of LEMMA 5. EFA proves that ISigma0 + Con*(ISigma0) and ISigma0 + notCon*(ISigma0) are both cut free consistent. THEOREM B. Q + Con*(ISigma0) and Q + notCon*(ISigma0) are both interpretable in Q. By sharpening Theorem 1, we can also get THEOREM C. ISigma0 + Con*(ISigma0) and Isigma0 + notCon*(ISigma0) are both interpretable in Q. Years ago, I had a series of theorems to the effect that "relative consistency is the same as interpretability". This stuff has been published and reworked by several authors. I don't remember seeing this particular refinement: THEOREM. Let S,T be finitely axiomatized systems in predicate calculus with equality, where Q is interpretable in S. The following are equivalent. i) EFA = ISigma0(exp) proves "if S is cut free consistent then T is cut free ii) T is interpretable in S. Harvey Friedman More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2004-December/008686.html","timestamp":"2014-04-16T22:13:10Z","content_type":null,"content_length":"5109","record_id":"<urn:uuid:3e33d605-13cf-4eab-a8bb-11808780bf5c>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
A new inequality for the von Neuman entropy Seminar Room 1, Newton Institute Strong subadditivity of von Neumann entropy, proved in 1973 by Lieb and Ruskai, is a cornerstone of quantum coding theory. All other known inequalities for entropies of quantum systems may be derived from it. I will describe some work with Andreas Winter in which we prove a new inequality for the von Neumann entropy which is independent of strong subadditivity. This work sheds light on extremal types of entanglement for multi-party quantum states.
{"url":"http://www.newton.ac.uk/programmes/QIS/seminars/2004082616201.html","timestamp":"2014-04-20T05:48:42Z","content_type":null,"content_length":"4673","record_id":"<urn:uuid:8bc27871-8948-4579-9ddd-1a175e3ce100>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry Tutors Acton, MA 01720 Math Tutor for High School, Jr. High, Middle School ...I am the father of 3 teens, and have been a soccer coach, youth group leader, and scouting leader. I am also an engineering and business professional with BS and MS degrees. I tutor Algebra, , Pre-calculus, Pre-algebra, Algebra 2, Analysis, Trigonometry,... Offering 10+ subjects including geometry
{"url":"http://www.wyzant.com/Worcester_MA_geometry_tutors.aspx","timestamp":"2014-04-17T10:41:11Z","content_type":null,"content_length":"60577","record_id":"<urn:uuid:c9fe7de3-1f7d-408c-847f-724dc805e036>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
Sample questions from the second half of the course Note: this is not a sample final exam. For one thing, there are too many questions. For another thing, there are no questions from the first half of the course. You should be particularly sure to do problems 15 and 16, since these deal with topics that were not on any of the homework assignments. Question 1 List the nodes of the tree below in preorder, postorder, and breadth-first order. Question 2 In the binary search tree below, carry out the following operations in sequence: Add 5, add 17, delete 23, delete 9. Question 3 T is a binary tree of height 3. What is the largest number of nodes that T can have? What is the smallest number? Question 4 T is a min heap of height 3. What is the largest number of nodes that T can have? What is the smallest number? Question 5 True or false: In a preorder traversal of a binary search tree, the first item printed out is always the smallest one. If true, explain why; if false, give an example where it is false. Question 6 True or false: In a breadth-first traversal of a min heap, the first item printed out is always the smallest one. If true, explain why; if false, give an example where it is false. Question 7 A. Show how the min heap below would be implemented in an array. B. Show the result of executing deleteMin() and add(4) in sequence starting with the min heap below. (You may give your answer either in the form of a tree or in the form of an array.) Question 8 Show the result of running the partition subroutine of quicksort on the following array, assuming that the index of the pivot is chosen to be 0 (the pivot is A[0]=17). What value does partition A=[17, 2, 34, 23, 6, 11, 49, 7, 22, 33] Question 9 Explain how bucket sort can be used to sort the numerical values below. Assume that the range 0.0 - 1.0 is divided into 10 buckets, each of size 0.1 [0.92, 0.15, 0.03, 0.71, 0.95, 0.43, 0.12, 0.69, 0.19, 0.52] Question 10 If you are sorting a million items, roughly how much faster is heapsort than insertion sort? (Note: log(1,0000,000) = 20.) Question 11 Given a list of integers, you wish to find the mode; that is, the value that appears most often in the list. Let N be the length of the list. For instance, in the list [1, 5, 2, 5, 2, 5, 5, 1], N=9 and the mode is 5, which appears four times. Assume that the values are all between 1 and M, that you have enough memory to construct an array of size M, and that you can create an array initialized to 0 in unit time. Given an algorithm to find the mode in time O(N). Note: This time bound should be valid even if M is much bigger than N Question 12 Suppose that a graph is implemented using an adjacency-list representation using the following Java class definitions: (For this question, we are only interested in the data fields, so the methods are omitted.) class Vertex { public String tag; public Edge firstOutarc; class Edge { public Integer tag; public Vertex tail; public Vertex head; public Edge nextEdge; Assume that the nextEdge fields are used to organize the outarcs from a given vertex into a singly-linked list with no header. Draw the objects with links that would be an implementation of the following directed graph: Note: there are many more than 3 objects and 4 links. Question 13 Show how the graph in question 12 would be represented as an adjacency array. Question 14 For the class definition in question 12, write a method U.addEdge(V,L) which adds an edge from U to V labelled L. Question 15 For each of the directed graphs shown below: • If it is acyclic, give a topological sort. • If it not acyclic, find a cycle. Question 16 A. Construct a 2-3 tree with the following elements: 2,5,8,11,14,17,21,24,28. Do not do successive adds; just build the tree left to right, bottom up. B. Show the results of doing the following operations in sequence: Add 18, add 19, delete 8, delete 5, delete 11.
{"url":"http://cs.nyu.edu/courses/Spring13/CSCI-UA.0102-002/SampleFx.html","timestamp":"2014-04-19T01:49:04Z","content_type":null,"content_length":"4993","record_id":"<urn:uuid:34038fcf-fe74-4596-8d72-6209084c9d82>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Longitudinal Traffic model: The IDM In this simulation, we have used the Intelligent-Driver Model (IDM) to simulate the longitudinal dynamics, i.e., accelerations and braking decelerations of the drivers. Model Structure The IDM is an microscopic traffic flow model, i.e., each vehicle-driver combination constitutes an active "particle" in the simulation. Such model characterize the traffic state at any given time by the positions and speeds of all simulated vehicles. In case of multi-lane traffic, the lane index complements the state description. More specifically, the IDM is a car-following model. In such models, the decision of any driver to accelerate or to brake depends only on his or her own speed, and on the position and speed of the "leading vehicle" immediately ahead. Lane-changing decisions, however, depend on all neighboring vehicles (see the lane-changing model MOBIL). The model structure of the IDM can be described as follows: • The influencing factors (model input) are the own speed v, the bumper-to-bumper gap s to the leading vehicle, and the relative speed (speed difference) Delta v of the two vehicles (positive when • The model output is the acceleration dv/dt chosen by the driver for this situation. • The model parameters describe the driving style, i.e., whether the simulated driver drives slow or fast, careful or reckless, and so on: They will be described in detail below. Model Equations The IDM model equations read as follows: The acceleration is divided into a "desired" acceleration a [1-(^v/[v[0]])^delta] on a free road, and braking decelerations induced by the front vehicle. The acceleration on a free road decreases from the initial acceleration a to zero when approaching the "desired speed" v0. The braking term is based on a comparison between the "desired dynamical distance" s^*, and the actual gap s to the preceding vehicle. If the actual gap is approximatively equal to s^*, then the breaking deceleration essentially compensates the free acceleration part, so the resulting acceleration is nearly zero. This means, s^* corresponds to the gap when following other vehicles in steadily flowing traffic. In addition, s^* increases dynamically when approaching slower vehicles and decreases when the front vehicle is faster. As a consequence, the imposed deceleration increases • decreasing distance to the front vehicle (one wants to maintain a certain "safety distance") • increasing own speed (the safety distance increases) • increasing speed difference to the front vehicle (when approaching the front vehicle at a too high rate, a dangerous situation may occur). The mathematical form of the IDM model equations is that of coupled ordinary differential equations: • They are differential equations since, in one equation, the dynamic quantities v (speed) and its derivative dv/dt (acceleration) appear simultaneously. • They are coupled since, besides the speed v, the equations also contain the speed v[l]=v-Delta v of the leading vehicle. Furthermore, the gap s obeys its own kinematic equation, ds/dt=-Delta v coupling the gap s to the speeds of the two vehicles. Model Parameters The IDM has intuitive parameters: • desired speed when driving on a free road, v0 • desired safety time headway when following other vehicles, T • acceleration in everyday traffic, a • "comfortable" braking deceleration in everyday traffic, b • minimum bumper-to-bumper distance to the front vehicle, s0 • acceleration exponent, delta. In general, every "driver-vehicle unit" can have its individual parameter set, e.g., • trucks are characterized by low values of v0, a, and b, • careful drivers drive at a high safety time headway T, • aggressive ("pushy") drivers are characterized by a low T in connection with high values of v0, a, and b. Often two different types are sufficient to show the main phenomena. The standard parameters used in the simulations are the following: ┃ Parameter │ Value Car │ Value Truck │ Remarks ┃ ┃ Desired speed v[0] │ 120 km/h │ 80 km/h │ For city traffic, one would adapt the desired speed while the other parameters essentially can be left unchanged. ┃ ┃ Time headway T │ 1.5 s │ 1.7 s │ Recommendation in German driving schools: 1.8 s; realistic values vary between 2 s and 0.8 s and even below. ┃ ┃ Minimum gap s[0] │ 2.0 m │ 2.0 m │ Kept at complete standstill, also in queues that are caused by red traffic lights. ┃ ┃ Acceleration a │ 0.3 m/s^2 │ 0.3 m/s^2 │ Very low values to enhance the formation of stop-and go traffic. Realistic values are 1-2 m/s^2 ┃ ┃ Deceleration b │ 3.0 m/s^2 │ 2.0 m/s^2 │ Very high values to enhance the formation of stop-and go traffic. Realistic values are 1-2 m/s^2 ┃ Simulation of the Model Simulation means to numerically "integrate", i.e., approximatively solve the coupled differential equations of the model. For this, one defines a finite numerical update time interval Δt, and integrates over this time step assuming constant accelerations. This so-called ballistic method reads new speed: v(t+Δt)=(dv/dt) Δt, new position: x(t+Δt)=x(t)+v(t)Δt+1/2 (dv/dt) (Δt)^2, new gap: s(t+Δt)=x[l](t+Δt)-x(t+Δt)-L[l], where dv/dt is the IDM acceleration calculated at time t, x is the position of the front bumper, and L[l] the length of the leading vehicle. For the Intelligent-Driver Model, any update time steps below 0.5 seconds will essentially lead to the same result, i.e., sufficiently approximate the true solution. Strictly speaking, the model is only well defined if there is a leading vehicle and no other object impeding the driving. However, generalizations are straightforward: • If there is no leading vehicle and no other obstructing object ("free road"), just set the gap to a very large value such as 1000 m (The limes gap to infinity is well-defined for any meaningful car-following model such as the IDM). • If the next obstructing object is not a leading vehicle but a red traffic light or a stop-signalized intersection, just model the red light or the stop sign by a standing virtual vehicle of length zero positioned at the stopping line. When simulating a transition to a green light, just eliminate the virtual vehicle. (See the szenario "traffic Lights") • If a speed limit (either directly by a sign or indirectly, e.g., when crossing the city limits) becomes effective, reduce the desired speed, if the present value is above this limit (scenario "Laneclosing"). Likewise, reduce the desired speed of trucks in the presence of gradients (scenario "Uphill Grade") Further information: Martin Treiber
{"url":"http://www.vwi.tu-dresden.de/~treiber/MicroApplet/IDM.html","timestamp":"2014-04-20T00:50:13Z","content_type":null,"content_length":"10401","record_id":"<urn:uuid:f755c1f3-8216-42cd-9d47-10feb5d9c7d1>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculate the three currents I1, I2, and I3 indicated in the circuit diagram shown in the figure... Get your Question Solved Now!! Calculate the three currents I1, I2, and I3 indicated in the circuit diagram shown in the figure... Introduction: Electricity More Details: Calculate the three currents I1, I2, and I3 indicated in the circuit diagram shown in the figure (Figure 1) . Please log in or register to answer this question. 0 Answers Related questions
{"url":"http://www.thephysics.org/132023/calculate-three-currents-indicated-circuit-diagram-figure","timestamp":"2014-04-21T05:21:23Z","content_type":null,"content_length":"104909","record_id":"<urn:uuid:753384b2-915b-49d0-92ff-e816a5e2b098>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
elliptic curves Results 1 - 10 of 13 , 2001 "... 1 Let E be an elliptic curve defined over Q and of conductor N. For a prime p ∤ N, we denote by E the reduction of E modulo p. We obtain an asymptotic formula for the number of primes p ≤ x for which E(Fp) is cyclic, assuming a certain generalized Riemann hypothesis. The error terms that we get are ..." Cited by 14 (3 self) Add to MetaCart 1 Let E be an elliptic curve defined over Q and of conductor N. For a prime p ∤ N, we denote by E the reduction of E modulo p. We obtain an asymptotic formula for the number of primes p ≤ x for which E(Fp) is cyclic, assuming a certain generalized Riemann hypothesis. The error terms that we get are substantial improvements of earlier work of J.-P. Serre and M. Ram Murty. We also consider the problem of finding the size of the smallest prime p = pE for which the group E(Fp) is cyclic and we show that, under the generalized Riemann hypothesis, pE = O � (log N) 4+ε � if E is without complex multiplication, and pE = O � (log N) 2+ε � if E is with complex multiplication, for any 0 < ε < 1. 1 , 2004 "... - a survey- ..." "... 1 Let E be an elliptic curve over Q. For a prime p of good reduction, let Ep be the reduction of E modulo p. We investigate Koblitz’s Conjecture about the number of primes p for which Ep(Fp) has prime order. More precisely, our main result is that if E is with Complex Multiplication, then there exis ..." Cited by 3 (0 self) Add to MetaCart 1 Let E be an elliptic curve over Q. For a prime p of good reduction, let Ep be the reduction of E modulo p. We investigate Koblitz’s Conjecture about the number of primes p for which Ep(Fp) has prime order. More precisely, our main result is that if E is with Complex Multiplication, then there exist infinitely many primes p for which #Ep(Fp) has at most 5 prime factors. We also obtain upper bounds for the number of primes p ≤ x for which #Ep(Fp) is a prime. 1 - TRANSACTIONS OF AMERICAN MATHEMATICAL SOCIETY , 2003 "... Let E be an elliptic curve defined over Q and with complex multiplication. For a prime p of good reduction, let E be the reduction of E modulo p. We find the density of the primes p ≤ x for which E(Fp) is a cyclic group. An asymptotic formula for these primes had been obtained conditionally by J.-P. ..." Cited by 3 (1 self) Add to MetaCart Let E be an elliptic curve defined over Q and with complex multiplication. For a prime p of good reduction, let E be the reduction of E modulo p. We find the density of the primes p ≤ x for which E (Fp) is a cyclic group. An asymptotic formula for these primes had been obtained conditionally by J.-P. Serre in 1976, and unconditionally by Ram Murty in 1979. The aim of this paper is to give a new simpler unconditional proof of this asymptotic formula, and also to provide explicit error terms in the formula. - Lecture Notes in Comput. Sci. 2369 , 2002 "... Abstract. We study the density of integral points on punctured abelian surfaces. Linear growth rates are observed experimentally. 1 ..." - J. Ramanujan Math. Soc "... Let E be an elliptic curve defined over Q. For any prime p of good reduction, let Ep be the reduction of E mod p. Denote by Np the cardinality of Ep(Fp), where Fp is the finite field of p elements. Let P (Np) be the greatest prime divisor of Np. We prove that if E has CM then for all but o(x / log x ..." Cited by 2 (1 self) Add to MetaCart Let E be an elliptic curve defined over Q. For any prime p of good reduction, let Ep be the reduction of E mod p. Denote by Np the cardinality of Ep(Fp), where Fp is the finite field of p elements. Let P (Np) be the greatest prime divisor of Np. We prove that if E has CM then for all but o(x / log x) of primes p ≤ x, P (Np)> p ϑ(p), where ϑ(p) is any function of p such that ϑ(p) → 0 as p → ∞. Moreover we show that for such E there is a positive proportion of primes p ≤ x for which P (Np)> p ϑ, where ϑ is any number less than ϑ0 = 1 − 1 2 prove the following. Let Γ be a free subgroup of rank r ≥ 2 of the group of rational points E(Q), and Γp be the reduction of Γ mod p, then for a positive proportion of primes p ≤ x, we have where ɛ> 0. e− 1 4 = 0.6105 · · ·. As an application of this result we |Γp |> p ϑ0−ɛ Keywords: Reduction mod p of elliptic curves, Elliptic curves over finite fields, Brun-Titchmarsh inequality in number fields, Bombieri-Vinogradov theorem in number fields, Abelian extensions of imaginary quadratic number fields. 2000 Mathematics Subject Classification. Primary 11G20, Secondary 11N37. 1 "... Introduction Let K be a global field of char p and let F q K denote the algebraic closure of F p in K. We fix an elliptic curve E/K with non-constant j-invariant and a torsion-free subgroup E(K) of rank r > 0. We write V for the open set of places v of K such that the special fiber E v is an e ..." Cited by 1 (0 self) Add to MetaCart Introduction Let K be a global field of char p and let F q K denote the algebraic closure of F p in K. We fix an elliptic curve E/K with non-constant j-invariant and a torsion-free subgroup E(K) of rank r > 0. We write V for the open set of places v of K such that the special fiber E v is an elliptic curve and, for v in V , we let # v E v (k v ) be the image of # under reduction modulo v, where k v is the residue field of K at v. We fix a finite set of (rational) prime numbers S which is large enough to include the exceptional primes which we will define explicitly in section 2.4 and section 3), and we let G(#, S) denote the subset of v V such that # v contains the prime-to-S part of E v (k v ). For every n > 0, we write V n for the subset of v V such that deg(v) = n and let G n (#, S) = V n G(#, S). Theorem 1. Suppose r 6. There exists constants a, b satisfying 0 1 and depending only on r and S such that, for each n 1 there exists # n (#, S), depending on r, S an , 1993 "... Various generalizations of the Artin's Conjecture for primitive roots are considered. It is proven that for at least half of the primes p, the first log p primes generate a primitive root. A uniform version of the Chebotarev Density Theorem for the field ) valid for the range l < log x is proven. ..." Add to MetaCart Various generalizations of the Artin's Conjecture for primitive roots are considered. It is proven that for at least half of the primes p, the first log p primes generate a primitive root. A uniform version of the Chebotarev Density Theorem for the field ) valid for the range l < log x is proven. A uniform asymptotic formula for the number of primes up to x for which there exists a primitive root less than s is established. Lower bounds for the exponent of the class group of imaginary quadratic fields valid for density one sets of discriminants are determined. RESUM E Nous considerons di #erentes generalisations de la conjecture d'Artin pour les racines primitives. Nous demontrons que pour au moins la moitie des nombres premiers p, les premiers log p nombres premiers engendrent une racine primitive. Nous demontrons une version uniforme du Theoreme de Densite de Chebotarev pour le corps Q(# l , 2 ) pour l'intervalle l < log x. On etablit une formule asymptotique uniforme pour les nombres de premiers plus petits que x tels qu' il existe une racine primitive plus petite que s. Nous determinons des minorants pour l'exposant du groupe de classe des corps quadratiques imaginaires valides pour ensembles de discriminants de densite 1. Contents "... Abstract. Let E be an elliptic curve defined over Q. Let Γ be a subgroup of rank r of the group of rational points E(Q) of E. For any prime p of good reduction, let ¯ Γ be the reduction of Γ modulo p. Under certain standard assumptions, we prove that for almost all primes p (i.e. for a set of primes ..." Add to MetaCart Abstract. Let E be an elliptic curve defined over Q. Let Γ be a subgroup of rank r of the group of rational points E(Q) of E. For any prime p of good reduction, let ¯ Γ be the reduction of Γ modulo p. Under certain standard assumptions, we prove that for almost all primes p (i.e. for a set of primes of density one), we have | ¯ Γ | ≥ p f(p), where f(x) is any function such that f(x) → ∞, at an arbitrary slow speed, as x → ∞. This provides additional evidence in support of a conjecture of Lang and Trotter from 1977. 1. , 1990 "... mathematicae ..."
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=475253","timestamp":"2014-04-17T14:33:59Z","content_type":null,"content_length":"33851","record_id":"<urn:uuid:1245c34d-48b9-4888-877e-f0466f806c64>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
3000 Homework Math 3000 Homework Answers #21 From Smith, Eggen, & St. Andre, A Transition to Advanced Mathematics, 5^th Ed. Answers to * problems are given in the back of the book and will not be reproduced here. (pg. 213 : 3, 5, 7, 8, 12, 16, 21 ) 3. (a) Hint: Use contradiction. (b) Consider the function f: A 5. (b) not finite. (d) not finite. (f) finite (in fact empty) (g) finite (h) finite (i) not finite (j) not finite (k) finite (l) finite (m) not finite. 7. (b) Let A be infinite and A 8. (a) Define the function f: N[k]x N[m]N[km] by f(a,b) = (a-1)m + b. We will show that this function is a bijection and thus show that N[k]x N[m] is finite because it is equivalent to N[km]. Suppose f(a,b) = f(c,d), then (a-1)m + b = (c-1)m + d. This can be rewritten as (a-c)m = d - b. Since both d and b are N[km]. By the division algorithm we can write r = sm + t with t < m. If t = 0, we can write r = (s-1)m + m. Notice that with this convention, the largest the coefficient of m can be is k-1. Now, f(s+1, t) = sm + t = r, so the function is onto, and thus a bijection. (b) Since A is finite, there exists a bijection f: A N[k] and since B is finite there exists a bijection g: BN[m]. Define the function h: A x B N[k]x N[m] by h(a,b) = (f(a), g(b)). We show that h is a bijection. Suppose h(a,b) = h(c,d). Then (f(a), g(b)) = (f(c), g(d)). But this means that f(a) = f(c) and g(b) = g(d). Since f is one-to-one, we have a = c. Since g is one-to-one we have b = d, so (a,b) = (c,d) and h is one-to-one. Now, let (r, s) be any element of N[k]x N[m]. Since f is onto, there exists an a so that f(a) = r. Since g is onto, there exists a b so that g(b) = s, therefore, h (a,b) = (f(a), g(b)) = (r, s). So, h is onto and therefore a bijection. Now, by composing h with the bijection of part (a) we get a bijection from A x B onto N[km], and so, A x B is finite. 12. Let A be a finite set and B an infinite set. If B - A is finite, then (B - A) 16. Let n = 2, then r = 1 since r < n. Any function from {1} to {1,2} can not be onto, clearly. So assume that the statement is true for n = k. Consider a function f: N[r]N[k+1]. Suppose that f is onto. Then, there exists an a in N[r] such that f(a) = k+1. If a is not r, let f(r) = m and define a new function f': N[r]N[k+1] by f'(a) = m, f'(r) = k+1 and f'(x) = f(x) otherwise. The functions f and f' have the same image. Now, restrict f (or f') to the set N[r] - {r}, which is N[r-1]. The image of this restriction is N[k]. By the induction hypothesis, this restricted function is not onto. So, there exists an element y in N[k] which has no pre-image under the restriction. But this this element would also have no pre-image under the original function f either, so f is not onto. Therefore, the statement is true by PMI. 21. (a) C . This proof does not correctly construct the bijection needed to show that this is true. (c) F. The claim is false. If B is the empty set, then A x B will be finite, but A could be any size set.
{"url":"http://www-math.ucdenver.edu/~wcherowi/courses/m3000/abhw20.html","timestamp":"2014-04-18T23:15:50Z","content_type":null,"content_length":"5455","record_id":"<urn:uuid:02364601-264e-481f-9909-b81d934763ee>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
The Book of THOTH Chapter XI (3376 total words in this text) (1518 reads) Plato said in regard to the Elements of the Universe: “God fashioned them by form and number.” We have had something to say about Form but have touched very little upon Number. It will be well for use to make a few observations in the light of the Forms we have reconstructed. I shall have little to say in regard to the Numbers attributed to the ordinary Qabalistic Plan of the Sephiroth, these having been dealt with in “Q.B.L.” and elsewhere. It is merely necessary to recapitulate as follows: The simple two-dimensional figure consists of 10 Sephiroth; 10 being the sum of the numbers from 1 to 4. This importance of this lies in the Fourfold Nature of the Ineffable Name which is the Formula of the whole System. There are 22 (2 + 2 = 4) connecting links or Paths. These consist of 3 + 7 + 12, and correspond to the Elements, Planets, and Signs of the Zodiac. In all we have 32, called, in relation to this System, the 32 Paths of Wisdom, representing the whole figure. One of the special virtues of this number is that it represents the coalescence of Macroprosopus and Microprosopus in the Divine Name AHIHVH, and thus shows the connection between Kether—The Highest Crown—and the Nine lower Sephiroth, which emanated from it. When we allow this simple figure to expand in one direction, as previously explained, we find, since Malkuth remains One and the same throughout, that the Second Tree contains 19 Sephiroth, which is a prime number, reduces by addition to 10. Likewise the Third Tree consists of 28 Sephiroth, a Perfect Number which again reduces to the original 10 and therefore to 1 or Unity. The Fourth Tree contains 37 Sephiroth, another prime number reducing to 10. The Fifth gives 46 which reduces to 10, while the Sixth represents 55 which not only does this but is the Sum of the Numbers from 1 to 10. And since the number of the Sephiroth will be increased by 9 at every progression, their total, at each step, will always reduce to 10 by addition. The 22 Paths of the first figure will increase by 20’s to 42, 62, and so on, since two of these—Beth and Daleth—retain their own nature indefinitely. Thus the progressions of the whole Tree will be from the original 32 to 61 and so on; 29 being added each time. We may now consider the outstanding features of the figure when shown expanding in Six directions as the Snowflake. Since Netzach and Hod now combine we have in all 49 Sephiroth in the unprogressed figure. This, it may be remarked, is a distinctly Venusian Number (that of the Intelligence of Venus) and the Square of Seven. It reduces to 13, the number of Unity and Love. The number of Paths in this figure is 126 (a number attributed to two important Names of God) and this, added to 49, gives 175 as the total number of Sephiroth and Paths. This, it may be remarked, is the Number of the Spirit of Venus. It represents the sum of the numbers from 1 to 49 divided by 7, and it again reduces to 13 by addition. When we consider the progressions of this Sixfold Figure we find the Sephiroth increase by 48 each time, Malkuth remaining single. This is of interest because 48 is the numeration of KVKB, the Sphere of Mercury, and we find the particular feature of this Sixfold plan is that the Spheres of Venus and Mercury are forever united. I have always considered this word Kokab to be in some way connected with the words Khu and Khabs; Khu being the Magickal entity of man, and Khabs meaning a Star. Considering this arrangement is as a Sixfold Star and the uniting of the Paths and Spheres of Venus and Mercury as Love under Will, this will be interesting to Students of the New Aeon. The number of Sephiroth in the Second progression is therefore 49 + 48 = 97. This is another prime number and that of the Archangel of Netzach. It has many other correspondences, one of which is “An architect.” The next progression gives us 145 which, according to the old arrangement, corresponds to the 13 Paths of the Beard of Microprosopus. But the Fourth progression produces 193, another prime number of particular importance since it is the Number of Sephiroth in the unprogressed but complete three-dimensional solid which forms the Dodecahedron. 193 also reduces to 13, which, it may be remarked, is the number of Sephiroth in the single prismatic solid. The 126 Paths of the sixfold plan progress by adding 120 each time (since the Paths of Venus and Mercury are combined) and this is a very important Number to the Rosicrucian, and on account of its representing the God ON. Other interesting numbers can be traced out by the Student who possesses a copy of the Sepher Sephiroth. The whole figure progresses by the addition of 168 (the additional 48 Sephiroth and 120 Paths) and this is a very important number, being that of the Parentes Superni. We may now engage in a brief consideration of the Solid Figure. The first simple form contains 13 Sephiroth, which number gives it the Seal of Unity. This solid also contains 13 parts which produces the angles representing the Paths. Thus it represents 26, the Number of the Ineffable Name. When this solid is extended in 20 directions (20 is the full numeration of IVD, the first letter of the Name spelled in full) 193 Sephiroth are produced. These added to the 260 (13 X 20) parts give 453, a number reducing to 12 which is the number of pentagonal faces on the 20-pointed dodecahedron. But, which is perhaps of greater interest, this number 453 is that of NPSh ChIH, the Animal Soul in its fullness; i.e., including the Creative Entity or Chiah. The importance of this will be plain when we remember that the total number of Sephiroth and Paths in the Whole Solid is—as has been shown in Chapter VII -- 775 and this, added to the 260 parts or sectional solids, gives 1035, which is the sum of the numbers from 1 to 45. Now 45 being the numeration of ADM (Adam) indicates that we have once more shown the Qabalistic ADAM in all his Spiritual and Animal fullness and that he once again contains the sum of all his parts. Also the 20 points and 12 faces of the dodecahedron equal 32, the original number of the Paths of Wisdom. One other point seems worthy of notice in this connection. The SepherYetzirah makes a very strong feature of TEN Sephiroth (ten and not nine, ten and not eleven), and it may be assumed that we have departed entirely from this fundamental conception. But it is also true that the ancient Qabalists considered the Three Veils of the Negative—AIN, AIN SUPH, AIN SUPH AUR—as depending back from Kether; thus, although these are unmanifest, the whole scheme was based on 13. It may be further remarked that there are Seven Sephiroth below the Supernal Triad and the Three unmanifest Ideas above, so that we have, as it were, 7 + 3 = 10, 10 + 3 = 13; the 10 standing midway between the 7 and the 13. Now it has been pointed out that the progressions of the single Tree are made by the addition of nines, so that each number produced reduces to 10 when we add the digits. In this case, then, the essence of the original basis remains. Nor is this basis lost when we consider the Sixfold two-dimensional figure, the Single Prismatic solid, and the Twenty-fold solid and their progressions, although it is more deeply concealed in these instances. All these start from a basis of 13. The Single solid has 13 Sephiroth and increases by the addition of 12 at each progression. Thus the series is 13, 25, 37, 49, 61, 73, 85, etc. If we reduce these by addition (leaving the first as it stands) we obtain 13, 7, 10, 13, 7, 10, 13, etc. Now this is the series we noticed above in regard to the original Tree—the 10 Sephiroth with three Veils above the Supernal Triad and 7 Spheres below it. And this series recurs with every three progressions, so that since 7 + 10 + 13 = 30, the average is still, in essence, 10 throughout. And when we consider the Sixfold Plan we start with 49 which reduces to 13, and progress by adding 48. Now 48 being 4 times 12 we find we are running in a series which coincides at certain definite points with the previous one, and the same peculiar rule is noticed in regard to our reductions of digits. Thus, 49, 97, 145, 193, 241, 289, 337, etc., etc., reduce to 13, 7, 10, 13, 7, 10, 13, etc. as before. The same is true of the complete solid. We begin with 193 which reduces to 13, and progress by adding 192; which in this case is 4 times 48. Therefor we find the same underlying law if we consider 193, 385, 577, 769, 961, 1153, 1345, etc., with the exception of the one instance of 769, which reduces to 22 (the number of paths) on its way to its final reduction to 4 without first forming 13 as in all the other cases. And we of course find that every fourth progression of one series coincides with the number produced by one of the others. Thus the fourth progression of the Prism gives us 49 which is the basic number of the Star. The fourth progression of the Star gives us 193, the basic number of the complete solid, and so on. In fact every progression of the Star will give a number which is that of some progression of the Simple Solid, and every progression of the Complex Solid a number equal to some progression of both the Star and the Simple Solid, and the numbers common to all will always reduce to 13 (or 4). A word now in regard to the proportion of the various parts of our figures. We found in constructing the first simple plan of the Sephiroth and Paths that the proportion of the Diameter of the Sephiroth to the Width of the Paths was very important, especially in regard to the Progressions. I noticed recently that in his footnotes to the new edition of Eliphas Levi’s Transcendental Magic, Mr. A. E. Waite makes the following remarks, evidently with the intention of discrediting Levi: “In the Tree of Life KETHER, the Supreme Crown, abides above CHOKMAH and BINAH, forming with these the Supernal Triad, below which are CHESED and GEBURAH. It must be said further that the Tree comprises three triangles, beneath which is MALKUTH. There is nocircle, as Levi suggests, except in the accidental sense that the names and titles of each Sephira are inscribed within this figure.” Without some idea of a Circle or Sphere, if only to represent the Absolute or the Universe, one can hardly conceive of any Qabalistic Scheme at all; and that all the Qabalists have merely used the Sephiroth as convenient receptacles for names and inscriptions while giving to the Paths any semblance of reality, seems to me rather puerile. However, since the very Points representing the Centers of the Sephiroth in any properly proportioned “Tree” are produced by the original generating circles, we may leave aside Mr. Waite’s remarks and consider the matter as if the Sephiroth were Circles and the Paths Lines. In the construction of the Tree we commence with 4 generating circles. These may be of any desired size, but by their means we obtain the Centers of the Ten Sephiroth. The logical diameter of each Sephiroth will be found to be one-fourth of that of the generating circles. This makes the length of the short Paths—such as Aleph—exactly equal to the diameter of each Next, in order to discover the logical width of the Paths we should examine the structure of the Tree where the five Paths from above unite in Tiphereth and the three proceed from below that sphere. It will be noticed that the natural division of the circle will be into 12, as this will make all the Paths the same width and leave a space equal to the width of one path between the lower ones. Thus the width of each Path should be one-half the radius of the Sephira. (The greatest possible width without the Paths conflicting with one another above Tiphereth.) When it comes to the progression of the Trees we shall then find that the third will produce Sephiroth equal in diameter to the original generating circles, while the width of the Paths of the third progression will be exactly that of the diameter of the Sephiroth of the first Tree. A glance at the Colored Plate will show how exactly all the details fit in if this plan is adopted; even in the case of the four-fold division of Malkuth the progressed Paths exactly coincide with the diagonals. (See Plate A.) It should be remembered, however, that all our measurements are made from center to center of the Sephiroth. As a further check on the correctness of the size of the Sephiroth we find that a Vesica constructed upon the Path from the center of Chesed to that of Geburah, having a length from Kether to Yesod, will exactly touch the circumferences of Chokmah, Binah, Netzach, and Hod. Our final summary will deal with the proportions of the whole figure as based on those of the Vesica Piscis. We have shown the importance of 15 to 26 (the proportions of the Vesica) in relation to the sacred Names IH and IHVH. The Student may consider for himself such further proportions in this series as 30 to 52, 60 to 104, 120 to 208, 240 to 416. The last of these is of peculiar interest for 240 is NTzNIM, Prima Germina, and 416 is HRHVR, meaning Thought or Meditation. Again the next proportion, 480 to 932, is important. 480 is LILITh and 932 is OTz HDOTh TVB VRO, The Tree of Knowledge of Good and Evil. This is surely a valuable correspondence worthy of study. But there is another set of proportions in connection with the Vesica, viz.: 30 to 52. 30 is the Letter of Libra—Balance; 52 is the numeration of ABA VAMA (Father and Mother), AIMA (The Supernal Mother—fertilized), and of BN (The Son: Assiah’s “Secret Nature”). The correspondence between these and Balance is an interesting one. Further we may use 52 to 90, for 90 is SVD HVVG, The Mystery of Sex. This, as applied to ABA VAMA, AIMA, or BN leads to ideas suitable for the highest “meditation” about which we should be “very silent.” (Strangely enough, as I wrote this I noticed another correspondence, for ZMH, Meditation, adds to 52, and DOMM, Very Silent, adds to 90. I shall therefore take this as a hint and pass on.) The next proportion 90 to 156 is of even deeper significance for here we find the relation between “The Mystery of Sex” (90) and BABALON (156) The Victorious Queen. (See XXX Aethyrs: Liber CDXVIII). And the very next progression 156 to 260 gives us a relation between BABALON (156) and I.N.R.I. (270). Enough has been said to indicate to the Adept that a study of these proportions is well worth But, so far we have been dealing with the actual proportions of the Vesica, and we have not mentioned those of the complete Tree of Life itself. Those of the Vesica being as 26 to 45 we see the proportions of the Tree must be 26 to 60, or, since we can divide by 2, we may put them at 13 to 30. Here we enter upon the consideration of our basic number 13 (Unity and Love) in right relation with “Balance.” For “Equilibrium is the Basis of the Work.” We again obtain an interesting series of proportions, 13 to 30, 26 to 60, 52 to 120, 104 to 240, 208 to 480, 416 to 960, etc., but these will again be left to the consideration of the Student for we have yet to deal with a greater Mystery. If 13 to 30 is the exact width and height of the two-dimensional figure of the Tree of Life, what will be the proportions that will give us the correct angle of the Supernal Triad in order to change this to a Solid? The present angle is 120 degrees, the interior angle of the Hexagon; what proportion will give us the interior angle of the Solid Pentagon by which Kether, Chokmah, and Binah take their places each touching the circumscribing SPHERE? This final revelation, only made to us on May 30th, 1925, yet necessary to the completion of our treatise, has come as the Seal of the Supreme upon our Work. It is not possible to enter into the proper consideration of the importance of this discovery in relation to the Magical lifework of the Author and to the Mysteries of the New Aeon. For the present, therefore, we simply state this proportion to be as THIRTEEN is to THIRTY-ONE. 13 to 31 gives the exact angle of 108 degrees necessary to the building up of our solid, the additional part being used to raise the Point of Kether to the Pinnacle of the Solid. But let us examine the progressions of this proportion as before. We obtain: 13 to 31, 26 to 62, 52 to 124, 104 to 248 and 208 to 496. This last, be it noted, is the Fifth progression. Now let us remember that the Fifth Progression of Malkuth as an expanding Sphere is the one which First embraces the Kether of the First Tree (and the Dodecahedron formed by the Tiphereths of the Second Tree). How many Sephiroth shall we find in the Complete Solid at the Fifth Progression? 193 + 192 + 192 + 192 + 192 = 961. Therefore when the Sephiroth have increased to 961 in the Solid, Malkuth will have expanded to the First Kether. 961 happens to be 31 X 31, or the Square of 31, and it reduces to 13. But what of our Proportion of 208 to 496, the Fifth progression of the proportions of 13 to 31? Not only is 208, the width of our Tree, equal to the length of a Pure Vesica whose breadth is 120 (our original Kether angle), but our other proportion 496 is a Perfect Number, the Sum of the Numbers from One to Thirty-One, and the Numeration of MLKVTh—Malkuth. AL, it must be remembered, is the Highest Kether Name of God -- 31 -- which being read in reverse is LA, Not, and thus forms the true formula of the transition from the unmanifest to the manifest. Let us never forget that the True Kether—Hadit is forever concealed in the Center of Malkuth, and that of this it has been written in Liber AL vel Legis: I am NOT (La = 31) extended (1 + 2 + 3 . . . 31 = 496) and Khabs (a Star) is the Name of my House. In concluding this section we may remark that the 76th progression of the Single Solid, and the 19th progression of the Star, show 913 Sephiroth, 913 is BRAShITh, Berashith, “In the Beginning”—the First Word of Genesis. The 80th progression of the Single Solid, the 20th of the Star, and the 5th of the Complete Solid, all give 961 (13 x 31), and in case you should forget this, all you have to do is to stand in The Kingdom, Malkuth (496), and LOOK UP. You will then see Yesod (9) and above that Tiphereth (6) and—by the Grace of God—Kether (1). So mote it be. [ Back to The Anatomy of the Body of God | Sections Index ]
{"url":"http://www.thothweb.com/sections-viewarticle-328.html","timestamp":"2014-04-16T19:13:34Z","content_type":null,"content_length":"42217","record_id":"<urn:uuid:fb9e06a4-52a5-4179-a5a6-0855e8dd3ac9>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
Performance comparison - storage vs on-the-fly 02-14-2010 #1 Registered User Join Date Nov 2008 Performance comparison - storage vs on-the-fly I'm working on some 3D graphics software that requires a draw loop to access a large number of vertices many times. For each vertex component I need to access the following information: double vertex_originalvalue_x double vertex_originalvalue_y double vertex_originalvalue_z double vertex_transform_x double vertex_transform_y double vertex_transform_z On each time through the draw loop, I need the transformed value of the vertex (for example: transformedX = vertex_originalvalue_x + vertex_transform_x). My question is whether it is better to store a separate variable in which the transformation computation has already been performed or whether it should be done on the fly. Does anyone have some rules of thumb for which mathematical operations make things costly enough that storing a precomputed value is worthwhile? Perhaps a single addition operation is quick enough, but if something like a square root operation were required, would it then makes sense to precompute and store? Thanks. Any advice, opinion, debate on this issue is welcomed. With sqrt(), for sure. Like as in, don't keep referring to sqrt(x) if x has not changed, put it into memory. Generally, I think the same applies to all the operations that would seen to involve quite a bit of arithmetic (like trig). sqrt() is so notoriously slow that John Carmack, the guy who wrote Quake, came up with some alternate function that can be used in many circumstances. I've seen it, it is public (you will have to google for that), can't really say any more as math is not a big interest for me. C programming resources: GNU C Function and Macro Index -- glibc reference manual The C Book -- nice online learner guide Current ISO draft standard CCAN -- new CPAN like open source library repository 3 (different) GNU debugger tutorials: #1 -- #2 -- #3 cpwiki -- our wiki on sourceforge When in doubt, use a profiler! Simple mathematical operations like elementary operations probably won't matter since they're so cheap, but bigger stuff might incur a penalty hit. So, when in doubt, use a profiler. There are free profilers available. For information on how to enable C++11 on your compiler, look here. よく聞くがいい!私は天才だからね! ^_^ 02-14-2010 #2 02-14-2010 #3
{"url":"http://cboard.cprogramming.com/c-programming/123841-performance-comparison-storage-vs-fly.html","timestamp":"2014-04-18T12:02:38Z","content_type":null,"content_length":"51852","record_id":"<urn:uuid:9ce8c623-0256-477d-af7e-4a9e800a119d>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
CSC 310 - Information Theory CSC 310 - Information Theory (Jan-Apr 2002) Here are the term and final exam marks. You can collect assignments and tests from my office. Phone ahead to see if I'm in before coming by. Information theory arises from two important problems: How to represent information compactly, and how to transmit information reliably in the presence of noise. The concept of `entropy' as a measure of information content is central to both of these problems, and is the basis for an elegant theory centred on two famous theorems proved by Claude Shannon over 50 years ago. Only in recent years has the promise of these theorems been fully realized, however. In this course, the elements of information theory will be presented, along with some of the practical techniques needed to construct the data compression and error-correction methods that play a large role in modern communications and storage systems. Instructor: Radford Neal, Phone: (416) 978-4970, Email: radford@cs.utoronto.ca Office Hours: Mondays 1:10-2:00 and Thursdays 2:30-3:30, in SS 6016A Mondays and Wednesdays, from 3:10pm to 4:00pm, in SS 1073 From January 7 to April 10, except for Reading Week (February 18-22) Fridays, from 3:10pm to 4:00pm, in SS 1073 From January 18 to April 12, except for Reading Week and Good Friday (March 29) A webpage with supplementary information on tutorials is maintained by the TA, Cosmin Truta. Course Texts: G. A. Jones and J. M. Jones, Information and Coding Theory K. Sayood, Introduction to Data Compression Computing will be done on the CDF computer system, on which you will receive an account soon. If you are not familiar with this system, you should get A Student's Guide to CDF, available from the Marking Scheme: Final exam, worth 40%. Four assignments, worth 5%, 10%, 5%, and 10%. Two one-hour in-class tests, each worth 15%. The tests are tentatively scheduled for February 15 and March 22, during the tutorial time. Please read the important course policies Here are the questions on test 1 in Postscript, and the answers to test 1. Assignment 1 handout: Postscript, PDF. Assignment 1 solutions: Postscript, PDF. Assignment 2 handout: Postscript, PDF. Here's a little shell file that illustrates one way that you might handle the task of running the tests for this assignment. The program modules for use with assigment 2 are available on CDF in the directory /u/radford/310. Here are all the files needed, in case you want to work with them at home: Documentation Makefile code.c decode.c encode.c freq.c HiMom bit-io.c code.h decpic.c encpic.c freq.h And here are the additional files that make up my solution to the assignment: Discussion tst-results html-encode.c html.h chk tst-plot.ps html-decode.c html-contexts.c tst tst-plot.pdf html-freq.c You can also now get to these files in /u/radford/310 on CDF. Assignment 3 handout: Postscript, PDF. Assignment 3 solutions: Postscript, PDF. Assignment 4 handout: Postscript, PDF. Clarification: The table of results produced should have one line with statistics for all 10000 messages sent, plus lines for the subsets of messages in which the total number of transmission errors for all 49 bits of the codeword is 0, 1, 2, etc. Contrary to what you might have heard in tutorial, you should not hand in the a printout with 10000 lines containing the results of decoding each and every one of the 10000 simulated messages. Note: If you find that simulating 10000 messages is taking an unreasonable amount of time, you can simulate just 1000 messages (thought this is less desirable). Assignment 4 solution: program, results and discussion. Slides, readings, and exercises for each week Here are the overhead slides for each lecture, four per page, in PDF and Postscript formats, organized by week (1, 2, 3, ...) and by lecture within week (A and B). To read these, you'll need one or the other of the free acroread program (for PDF) or the free ghostview program (for both PDF and Postscript). Note that there are links for all lectures, but not all of the slides are there yet. Also, the correspondence with actual lectures may be only approximate. I've also listed the appropriate sections of the textbooks to read (JJ is the Jones and Jones text, S is the Sayood text), and suggested exercises (non-credit), which may be discussed in tutorials. SLIDES IN SLIDES IN REQUIRED OPTIONAL NON-CREDIT WEEK PDF FORMAT POSTSCRIPT READINGS READINGS EXERCISES 1 Jan 7 A B A B S Ch. 1 2 Jan 14 A B A B JJ Ch. 1 JJ Appendix A JJ 1.15-1.17 3 Jan 21 A B A B JJ Ch. 2 S Sec. 3.1-3.3 JJ 2.14,2.11 4 Jan 28 A B A B JJ Ch. 3 S Sec. 2.2 S Ch.2 #4, JJ 3.13 5 Feb 4 A B A B S Ch. 4 6 Feb 11 A B A B S Sec. 3.8, S Sec. 3.4 7 Feb 25 A B A B S Ch. 5 8 Mar 4 A B A B JJ Ch. 4 JJ 4.6,4.7 9 Mar 11 A B A B JJ 5.1-5.3 JJ 5.9 10 Mar 18 A B A B JJ 6.1-6.2, JJ 6.1,6.4, 7.1-7.2 7.1,7.2 11 Mar 25 A B A B JJ 6.3-6.5, JJ 6.7 12 Apr 1 A B A B JJ 5.4-5.6 JJ Appendix C 13 Apr 8 A B A B S 7.1-7.5
{"url":"http://www.cs.toronto.edu/~radford/csc310.S02/","timestamp":"2014-04-19T09:25:15Z","content_type":null,"content_length":"9306","record_id":"<urn:uuid:3445bf4e-0bb6-4702-a5c6-a59f292b7283>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
Related Rates: the Expanding Balloon Problem Say you’re filling up your swimming pool and you know how fast water is coming out of your hose, and you want to calculate how fast the water level in the pool is rising. You know one rate (how fast the water is being poured in), and you want to determine another rate (how fast the water level is rising). These rates are called related rates because one depends on the other — the faster the water is poured in, the faster the water level will rise. In a typical related rates problem, the rate or rates you’re given are unchanging, but the rate you have to figure out is changing with time. You have to determine this rate at one particular point in time. For example, say you’re blowing up a balloon at a rate of 300 cubic inches per minute. When the balloon’s radius is 3 inches, how fast is the radius increasing? 1. Draw a diagram labeling the diagram with any unchanging measurements (there aren’t any in this unusually simple problem) and make sure to assign a variable to anything in the problem that’s changing (unless, of course, it’s irrelevant to the problem). The radius in the figure is labeled with the variable r. The radius needs a variable because, as the balloon is being blown up, the radius is changing. In the figure, 3 is in parentheses to emphasize that the number 3 is not an unchanging measurement. The problem asks you to determine something when the radius is 3 inches, but remember, the radius is constantly changing. In related rates problems, it’s important to distinguish between what is changing and what is not changing. The volume of the balloon is also changing, so you need a variable for volume, V. You could put a V on your diagram to indicate the changing volume, but there’s really no easy way to label part of the balloon with a V like you can show the radius with an r. 2. List all given rates and the rate you’re asked to determine as derivatives with respect to time. You’re pumping up the balloon at 300 cubic inches per minute. That’s a rate — it’s a change in volume (cubic inches) per change in time (minutes). So, You have to figure out how fast the radius is changing, so 3. Write down the formula that connects the variables in the problem, V and r. Here’s the formula for the volume of a sphere: 4. Differentiate your formula with respect to time, t. This works like implicit differentiation because you’re differentiating with respect to t, but the formula is based on something else, namely r. 5. Substitute known values for the rate and variables in the equation from Step 4, and then solve for the thing you’re asked to determine. Be sure to differentiate (Step 4) before you plug the given information into the unknowns (Step 5). So, the radius is increasing at a rate of about 2.65 inches per minute when the radius measures 3 inches. Think of all the balloons you’ve blown up since your childhood. Now you finally have the answer to the question that’s been bugging you all these years. By the way, if you plug 5 into r, rather than 3, you get an answer of about 0.95 inches per minute. This fact should agree with your balloon-blowing-up experience — the bigger the balloon gets, the slower it grows.
{"url":"http://www.dummies.com/how-to/content/related-rates-the-expanding-balloon-problem.html","timestamp":"2014-04-21T08:43:20Z","content_type":null,"content_length":"57105","record_id":"<urn:uuid:cab03c62-5b67-431e-a548-f01af950cd97>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Tools Support Material: Events and Set Operations Dialogue Support Material: Events and Set Operations Dialogue Go to: http://www.shodor.org/.../EventsAndSetOperatio (opens a new window) Description: Introduction of elementary set operations and their connections with probability. Technology Type: Calculator Author: Shodor: Project Interactivate Language: English Cost: Does not require payment for use Average Rating: Login to rate this resource My MathTools: Login to Subscribe / Save This Reviews: be the first to review this resource Discussions: start a discussion of this resource Lesson Plans: Conditional Probability and Probability of Simultaneous Events Introduction to the Concept of Probability Tool: Crazy Choices Game Courses: Math 7 Probability of event, Logic and set theory, Venn diagrams, Counting principles Probability & Statistics Probability Discrete Math Venn diagrams, Functions Defined on Sets Comment: More of a worksheet. Students read through it but they don't do anything.
{"url":"http://mathforum.org/mathtools/support/2911/dm,17.6,ALL,ALL/","timestamp":"2014-04-19T23:32:21Z","content_type":null,"content_length":"14825","record_id":"<urn:uuid:54c8eb50-31f6-4c42-b25a-afbe1c5cbea8>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
type 'a t of_sexp and bin_io functions aren't supplied for heaps due to the difficulties in reconstructing the correct comparison function when de-serializing. include Container.S1 with type t := 'a t Mutation of the heap during iteration is not supported, but there is no check to prevent it. The behavior of a heap that is mutated during iteration is undefined. val create : ?min_size:int -> cmp:('a -> 'a -> int) -> unit -> 'a t create ?min_size ~cmp returns a new min-heap that can store elements without reallocations, using ordering function The top of the heap is the smallest element as determined by the provided comparison function. In particular, if cmp x y < 0 then x will be "on top of" y in the heap. Memory use is surprising in two ways: 1. The underlying pool never shrinks, so current memory use will at least be proportional to the largest number of elements that the heap has ever held. 2. Not all the memory is freed upon remove, but rather after some number of subsequent pop operations. Alternating add and remove operations can therefore use unbounded memory. val of_array : 'a array -> cmp:('a -> 'a -> int) -> 'a t min_size (see create) will be set to the size of the input array or list. val of_list : 'a list -> cmp:('a -> 'a -> int) -> 'a t val top : 'a t -> 'a option returns the top (i.e., smallest) element of the heap val top_exn : 'a t -> 'a val add : 'a t -> 'a -> unit val remove_top : 'a t -> unit remove_top t does nothing if t is empty val pop : 'a t -> 'a option This removes and returns the top (i.e. least) element val pop_exn : 'a t -> 'a val pop_if : 'a t -> ('a -> bool) -> 'a option pop_if t cond returns Some top_element of t if it satisfies condition cond, removing it, or None in any other case. val copy : 'a t -> 'a t copy t returns a shallow copy val sexp_of_t : ('a -> Sexplib.Sexp.t) -> 'a t -> Sexplib.Sexp.t
{"url":"https://ocaml.janestreet.com/ocaml-core/latest/doc/core_kernel/Heap_intf.html","timestamp":"2014-04-16T08:43:13Z","content_type":null,"content_length":"7161","record_id":"<urn:uuid:241d8207-1ec2-4cde-a08c-dfa5095f8c48>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: For nonrejection of H0, don't we want high signifance? Replies: 7 Last Post: Apr 26, 2013 12:11 PM Messages: [ Previous | Next ] Re: For nonrejection of H0, don't we want high signifance? Posted: Apr 26, 2013 4:35 AM "Jeff Miller" wrote in message The confidence interval approach is much more informative in most such situations, because they allow conclusions of the form "Ho is not wrong by more than X units". CIs are not easily adapted to your original question of checking for normality, though, AFAIK. CIs are "easy" to apply here, in one of two ways. CIs are just things that contain the set of all null hypotheses that are not rejected based on the data. So either: (i) form a discrete collection of possible families of distributions, test each of these as the null hypothesis and form a list of all those that are not rejected. This list would indicate how much non-normality is not excluded by the data available.... but it clearly requires both wide-ranging behaviours in the original list and an understanding of these behaviours if the CI s to be useful. (ii) embed the normal distribution in a 3 or 4 parameter family of distributions, with the extra parameters representing departure from normality. Then an "ordinary" confidence region can in indicate how much non-normality is not excluded by the data available. David Jones Date Subject Author 4/23/13 For nonrejection of H0, don't we want high signifance? Paul 4/23/13 Re: For nonrejection of H0, don't we want high signifance? Richard Ulrich 4/23/13 Re: For nonrejection of H0, don't we want high signifance? Paul 4/23/13 Re: For nonrejection of H0, don't we want high signifance? David Jones 4/25/13 Re: For nonrejection of H0, don't we want high signifance? Jeff Miller 4/26/13 Re: For nonrejection of H0, don't we want high signifance? David Jones 4/26/13 Re: For nonrejection of H0, don't we want high signifance? Herman Rubin 4/24/13 Re: For nonrejection of H0, don't we want high signifance? Luis A. Afonso
{"url":"http://mathforum.org/kb/message.jspa?messageID=8899417","timestamp":"2014-04-20T04:08:54Z","content_type":null,"content_length":"25697","record_id":"<urn:uuid:07a4f971-d814-49b6-ba65-3836134a9050>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Beginning Topology is designed to give undergraduate students a broad notion of the scope of topology in areas of point-set, geometric, combinatorial, differential, and algebraic topology, including an introduction to knot theory. A primary goal is to expose students to some recent research and to get them actively involved in learning. Exercises and open-ended projects are placed throughout the text, making it adaptable to seminar-style classes. The book starts with a chapter introducing the basic concepts of point-set topology, with examples chosen to captivate students' imaginations while illustrating the need for rigor. Most of the material in this and the next two chapters is essential for the remainder of the book. One can then choose from chapters on map coloring, vector fields on surfaces, the fundamental group, and knot A solid foundation in calculus is necessary, with some differential equations and basic group theory helpful in a couple of chapters. Topics are chosen to appeal to a wide variety of students: primarily upper-level math majors, but also a few freshmen and sophomores as well as graduate students from physics, economics, and computer science. All students will benefit from seeing the interaction of topology with other fields of mathematics and science; some will be motivated to continue with a more in-depth, rigorous study of topology. Request an examination or desk copy. Undergraduate students interested in topology. "This text is an interesting introduction to some of the various aspects of topology . . . [A] very attractive way to learn more and discover new things in topology." -- Corina Mohorianu, Zentralblatt MATH
{"url":"http://ams.org/bookstore?fn=20&arg1=amstextseries&ikey=AMSTEXT-10","timestamp":"2014-04-20T06:32:06Z","content_type":null,"content_length":"16313","record_id":"<urn:uuid:90ab33f3-d28c-4c26-b433-48611ab10632>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
This Article Bibliographic References Add to: First-Order Tree-Type Dependence between Variables and Classification Performance February 2001 (vol. 23 no. 2) pp. 233-239 ASCII Text x Sarunas Raudys, Ausra Saudargiene, "First-Order Tree-Type Dependence between Variables and Classification Performance," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 233-239, February, 2001. BibTex x @article{ 10.1109/34.908975, author = {Sarunas Raudys and Ausra Saudargiene}, title = {First-Order Tree-Type Dependence between Variables and Classification Performance}, journal ={IEEE Transactions on Pattern Analysis and Machine Intelligence}, volume = {23}, number = {2}, issn = {0162-8828}, year = {2001}, pages = {233-239}, doi = {http://doi.ieeecomputersociety.org/10.1109/34.908975}, publisher = {IEEE Computer Society}, address = {Los Alamitos, CA, USA}, RefWorks Procite/RefMan/Endnote x TY - JOUR JO - IEEE Transactions on Pattern Analysis and Machine Intelligence TI - First-Order Tree-Type Dependence between Variables and Classification Performance IS - 2 SN - 0162-8828 EPD - 233-239 A1 - Sarunas Raudys, A1 - Ausra Saudargiene, PY - 2001 KW - First-order tree-type dependence KW - a priori information KW - classification KW - generalization KW - sample size KW - dimensionality. VL - 23 JA - IEEE Transactions on Pattern Analysis and Machine Intelligence ER - Abstract—Structuralization of the covariance matrix reduces the number of parameters to be estimated from the training data and does not affect an increase in the generalization error asymptotically as both the number of dimensions and training sample size grow. A method to benefit from approximately correct assumptions about the first order tree dependence between components of the feature vector is proposed. We use a structured estimate of the covariance matrix to decorrelate and scale the data and to train a single-layer perceptron in the transformed feature space. We show that training the perceptron can reduce negative effects of inexact a priori information. Experiments performed with 13 artificial and 10 real world data sets show that the first-order tree-type dependence model is the most preferable one out of two dozen of the covariance matrix structures investigated. [1] J.J. Atick and A.N. Redlich, “Towards a Theory of Early Visual Processing,” Neural Computation, vol. 2, pp. 308-320, 1990. [2] C.K. Chow and C.N. Liu, “Approximating Discrete Probability Distributions with Dependence Trees,” IEEE Trans. Information Theory, vol. 14, pp. 462-467, 1968. [3] A.D. Deev, “Representation of Statistics of Discriminant Analysis and Asymptotic Expansions in Dimensionalities Comparable with Sample Size,” Reports of Academy of Sciences of the USSR, vol. 195, no. 4, pp. 756-762, 1970 (in Russian). [4] A.D. Deev, “Asymptotic Expansions for Distributions of StatisticsW,M,W* in Discriminant Analysis,” Statistical Methods of Classification, J.N. Blagoveshenskij, ed., vol. 31, pp. 6-57, Moscow: Moscow Univ. Press, 1972 (in Russian). [5] A.D. Deev, “Discriminant Function Designed on Independent Blocks of Variables,” Eng. Cybernetics (Proc. Academy of Sciences of the USSR), no. 12, pp. 153-156, 1974 (in Russian). [6] J.M. Friedman, “Regularized Discriminant Analysis,” J. Am. Statistical Assoc., vol. 84, pp. 165-175, 1989. [7] K. Fukunaga, Introduction to Statistical Pattern Recognition, second edition. Academic Press, 1990. [8] S. Halkaaer and O. Winter, “The Effect of Correlated Input Data on the Dynamics of Learning,” Advances in Neural Information Processing Systems, M.C. Mozer, M.I. Jordan, and T. Petsche, eds., vol. 9, pp. 169-175, Cambridge, Mass.: MIT Press, A Radford Book, 1996. [9] I.B. Kruskal Jr., “On the Shortest Spanning Subtree of a Graph and the Travelling Salesman Problem,” Proc. Am. Math. Soc., vol. 7, pp. 48-50, 1956. [10] Y. le Cun, I. Kanter, and S. Solla, “Eigenvalues of Covariance Matrices: Application to Neural-Network Learning,” Physical Review Letters, vol. 66, no. 18, pp. 2396-2399, 1991. [11] G.J. McLachlan, Discriminant Analysis and Statistical Pattern Recognition. New York: Wiley, 1992. [12] L.D. Meshalkin, “Assignment of Numerical Values to Nominal Variables,” Statistical Problems Control, S. Raudys and L. Meshalkin, eds., vol. 14, pp. 49-56, Vilnius: Inst. of Math. and Cybernetics Press, 1976 (in Russian). [13] L.D. Meshalkin and V.I. Serdobolskij, “Errors in Classifying Multivariate Observations,” Theory of Probabilities and Its Applications, vol. 23, no. 4, pp. 772-781, 1978 (in Russian). [14] D. Morgera and D.B. Cooper, “Structurized Estimation: Sample Size Reduction for Adaptive Pattern Classification,” IEEE Trans. Information Theory, vol. 23, pp. 728-741, 1977. [15] R. Prochorskas, V. Ziuznis, and N. Misiuniene, “Use of Different Classifiers to Predict Outcomes of Heart Attacks,” Problems of Ischemic Heart Diseases, pp. 216-267, Vilnius, Lithuania: Mokslas Publishing House, 1976 (in Russian). [16] S. Raudys, “On Determining Training Sample Size of Linear Classifier,” Computing Systems, N.G. Zagoruiko ed., vol. 28, pp. 79-87, Inst. of Math. Press, Novosibirsk: Nauka, 1967 (in Russian). [17] S. Raudys, “On the Amount of a priori Information in Designing the Classification Algorithm,” Eng. Cybernetics (Proc. Academy of Sciences of the USSR), no. 4, pp. 168-174, 1972 (in Russian). [18] S. Raudys, “Methods to Overcome Dimensionality Problems in Statistical Pattern Recognition: A Review,” Zavodskaya Laboratorya (Factory Lab., Interdisciplinary USSR J.), no. 3, pp. 45&49-55, Moscow: Nauka, 1991 (in Russian). [19] S. Raudys, “Evolution and Generalization of a Single Neurone: I. Single-layer Perception as Seven Statistical Classifiers,” Neural Networks, vol. 11, no. 2, pp. 283-296, 1998. [20] S. Raudys, “Scaled Rotation Regularization,” Pattern Recognition, vol. 33, pp. 1989-1998, 2000. [21] S. Raudys and S. Amari, “Effect of Initial Values in Simple Perception,” Proc. 1998 IEEE World Congress Computational Intelligence, IJCNN '98, pp. 1530-1535, 1998. [22] S.J. Raudys and A.K. Jain, "Small Sample Size Effects in Statistical Pattern Recognition: Recommendations for Practitioners," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 13, pp. 252-264, 1991. [23] S. Raudys and V. Pikelis, “On Dimensionality, Sample Size, Classification Error and Complexity of the Classification Algorithm in Pattern Recognition,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 2, pp. 242-252, 1980. [24] S. Raudys and A. Saudargiene, “Structures of the Covariance Matrices in the Classifier Design,” Proc. Joint IAPR Int'l Workshops/SSPR '98 and SPR '98, pp. 583-592, 1998. [25] A. Saudargiene, “Structurization of the Covariance Matrix by Process Type and Block-Diagonal Models in the Classifier Design,” Informatica, vol. 10, no. 2, pp. 245-269, Vilnius: Inst. of Math. and Informatics Press, 1999. [26] V.I. Serdobolskij, “The Moments of Discriminant Function and Classification for a Large Number of Variables,” S. Raudys, ed., vol. 38, pp. 27-51, Vilnius: Statistical Problems of Control. Inst. of Math. and Cyb. Press, 1979 (in Russian). [27] V.I. Zarudskij, “The Use of Models of Simple Dependence Problems in Classification,” Statistical Problems of Control, S. Raudys, ed., vol. 38, pp. 53-75, Vilnius: Inst. of Math. and Cyb. Press, 1979 (in Russian). [28] V.I. Zarudskij, “Determination of Some Graph Connections for Normal Vectors in Large Dimensional Case,” Algorithmic and Programic Supply of Applied Multivariate Statistical Analysis, S.A. Aivazian, ed., pp. 189-208, Moscow: Nauka, 1980 (in Russian). Index Terms: First-order tree-type dependence, a priori information, classification, generalization, sample size, dimensionality. Sarunas Raudys, Ausra Saudargiene, "First-Order Tree-Type Dependence between Variables and Classification Performance," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 233-239, Feb. 2001, doi:10.1109/34.908975 Usage of this product signifies your acceptance of the Terms of Use
{"url":"http://www.computer.org/csdl/trans/tp/2001/02/i0233-abs.html","timestamp":"2014-04-23T11:21:33Z","content_type":null,"content_length":"57716","record_id":"<urn:uuid:5f0fb00c-03cf-4523-aef0-7151b5ccd00a>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
Hialeah Gardens, FL Math Tutor Find a Hialeah Gardens, FL Math Tutor ...Great mathematical skills, specializing in elementary and high school math (basic math, algebra I & II and geometry), SAT math, ASVAB and GED. I am experienced in preparing and editing APA style papers on any subject and of any length.My geometry lessons include formulas for lengths, areas and volumes. The Pythagorean theorem will be explained and applied. 46 Subjects: including calculus, Microsoft Excel, French, Microsoft Word ...I have been tutoring students since high school in various subjects and continued in college. I have traveled around the world and met many people so I am able to adapt to all types of situations and people. I would like to conduct tutoring sessions with open communications. 9 Subjects: including prealgebra, algebra 1, algebra 2, Spanish ...It is the point where the student has to become Math-savvy, in order to be ready for Algebra, and make Algebra itself look easy. With me, reading is much more than simple encoding, decoding and linking of symbols, it is what it is, critical thinking. I always encourage my students to let their brain do the work, at least try, because this is really the key. 20 Subjects: including algebra 1, algebra 2, ACT Math, SAT math ...Don't expect what happened to me to happen to you. I can promise you a boost in your scores, a ton of help in mathematics and SAT/ACT, but don't expect to run out and win a national championship after our tutoring session. I am an outlier, not a rule. 38 Subjects: including SAT math, grammar, Java, geometry ...It is a view of the many discrete applications in our reality. As a professor of applied math at Concordia University, I taught math for the decision sciences. Among the courses taught were subjects focused heavily on deterministic methods. 24 Subjects: including linear algebra, differential equations, algebra 1, algebra 2
{"url":"http://www.purplemath.com/Hialeah_Gardens_FL_Math_tutors.php","timestamp":"2014-04-20T23:55:09Z","content_type":null,"content_length":"24302","record_id":"<urn:uuid:4e3f72a1-dffd-4699-84f5-a6faaf113bf4>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
Student Luran He Knows What Counts Luran He is a 10th grader at Ward Melville High School and for the past few years, an active participant in the Lab’s middle and high school math scholar program, administered by the Museum of Mathematics. The young man recently took the top spot for New York in the American Mathematics Contest 10 (10 refers to 10th grade). He also became the first student in Suffolk County to qualify for the USA Junior Math Olympiad, sponsored by the Mathematics Association of America. The primary purpose of the competitions is to spur interest and strengthen the mathematical capabilities of our nation's youth by recognizing, identifying, and rewarding excellence in mathematics. Luran will be attending math camp at Boston University this summer. With Ken White, manager of the BNL Office of Educational Programs (standing, left), his parents Agnes (standing, second from right) and Duanfeng (standing, right), sister Yiran (standing, second from left), and middle school math teacher Kevin Sihler (sitting, left), Luran accepted a plaque from BNL’s Deputy Director for Science and Technology Doon Gibbs for his outstanding accomplishment. “Luran is obviously a gifted young man and I strongly encourage him to pursue a career in science or math,” said Gibbs. After the presentation, Gibbs gave a brief overview of the Lab’s history and scientific research and discoveries. Before the guests departed, Gibbs shook the young man’s hand and told him, “Perhaps some day you will become part of the BNL team and help us to continue to make the world a better place through the science that we do.”
{"url":"http://www.bnl.gov/newsroom/news.php?a=23140","timestamp":"2014-04-18T06:19:28Z","content_type":null,"content_length":"35416","record_id":"<urn:uuid:ac20a493-09ef-4290-b061-bdfe8c0eda53>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability Problem regarding derangement. Desperately asking for help! January 13th 2013, 07:10 AM #1 Junior Member Oct 2010 Probability Problem regarding derangement. Desperately asking for help! A jeweler received 8 watches in their respective boxes. He took them out and put them back in. What is the probability that a-One watch in the right box b-At most one watch in the right box c-at least one watch in the right box..... I knw this involves the derrangments formula, but i dont know how to use it, please teach me, and please dont stretch it, teach me and give me answers too, tomorrow is my exam:S i cant get thru this practice question Re: Probability Problem regarding derangement. Desperately asking for help! A jeweler received 8 watches in their respective boxes. He took them out and put them back in. What is the probability that a-One watch in the right box b-At most one watch in the right box c-at least one watch in the right box..... I knw this involves the derrangments formula, but i dont know how to use it, please teach me, I will be glad to help. We are not a tutorial service. Therefore, you must answer the questions I ask. Let $\mathcal{D}(n)$ denote the number of derangements of $n$ objects. a) $8\cdot\mathcal{D}(7)$. Now you explain where each of those comes from. The we can do the next one. Re: Probability Problem regarding derangement. Desperately asking for help! I think that D(7) shows the number of derrangments, and then since one watch has been placed in the right box, so it could be the first, second, third......eighth ..and so 8 is multiplied....? is that how it is? 8.D(7) would show all such possible ways of placing the watches, so like to calculate the probablity we have to divide it by 8!? and i also don't know how to calculate derangement, is there a function for it in calculators, or is there some kind of a series....?? Re: Probability Problem regarding derangement. Desperately asking for help! Re: Probability Problem regarding derangement. Desperately asking for help! Iv been trying in vain to make some kind of formula for these problems since u wudnt tell me... And herez wat i got plz tell me if its right at least....If we have n objects and we want to arrange them in such a way that only m objects retain their original positions then the number of possible arrangements is.. (mCn)x[D(n-m)] ... C stands for combinations.... So plz tell me.. Is this right\? Re: Probability Problem regarding derangement. Desperately asking for help! Iv been trying in vain to make some kind of formula for these problems since u wudnt tell me... And herez wat i got plz tell me if its right at least....If we have n objects and we want to arrange them in such a way that only m objects retain their original positions then the number of possible arrangements is.. (mCn)x[D(n-m)] ... C stands for combinations.... So plz tell me.. Is this right\? I truly do not understand the above statement. In reply #4, I gave you the exact formula for calculating the number of derangements. It is derived by use of inclusion/exclusion. If we want to know the number of ways that exactly four of the eight watches end up in the correct box, the answer is: January 13th 2013, 08:01 AM #2 January 13th 2013, 09:28 AM #3 Junior Member Oct 2010 January 13th 2013, 09:41 AM #4 January 13th 2013, 10:34 AM #5 Junior Member Oct 2010 January 13th 2013, 10:48 AM #6
{"url":"http://mathhelpforum.com/statistics/211247-probability-problem-regarding-derangement-desperately-asking-help.html","timestamp":"2014-04-16T19:29:21Z","content_type":null,"content_length":"50965","record_id":"<urn:uuid:08f4306d-8120-4644-b4c0-2e799782dd7c>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
Generation of a normal distribution from "scratch" – The box-muller method November 3, 2012 By Edwin Grappin My previous post is about a method to simulate a Brownian motion. A friend of mine emailed me yesterday to tell me that this is useless if we do not know how to simulate a normally distributed My first remark is: use the rnorm() function if the quality of your simulation is not too important (Later, I'll try to explain you why the R "default random generation" functions are not perfect). However, it may be fun to generate a normal distribution from a simple uniform distribution. So, yes, I lied, I won't create the variable from scratch but from a uniform distribution. The method proposed is really easy to implement and this is why I think it is a really good one. Besides, the result is far from being trivial and is really unexpected. This method is called the Box-Muller method. You can find the proof of this method . The proof is not very complicated, however, you will need a few mathematical knowledges to understand it. Let u and v be two independent variables uniformly distributed. Then we can define: x = sqrt(-2log(u))sin(2 PI v) y = sqrt(-2log(u))cos(2 PI v) x and y are two independent and normally distributed variables. The interest of this method is its extreme simplicity in term of programming (We only need 9 lines if we don't want to test the normality of the new variables neither plot the estimation of the density). We can obtain a vector of variables normally distributed. The Lillie test doesn't reject the null hypothesis of normal distribution. Besides, we can plot the estimation of the density of the variables. We obtain the following plot that looks indeed similar to the Gaussian density. The program (R): # import the library to test the normality of the distribution size = 100000 u = runif(size) v = runif(size) for (i in 1:size){ x[i] = sqrt(-2*log(u[i]))*cos(2*pi*v[i]) y[i] = sqrt(-2*log(u[i]))*sin(2*pi*v[i]) #a test for normality #plot the estimation of the density for the author, please follow the link and comment on his blog: daily e-mail updates news and on topics such as: visualization ( ), programming ( Web Scraping ) statistics ( time series ) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: , or
{"url":"http://www.r-bloggers.com/generation-of-a-normal-distribution-from-scratch-the-box-muller-method/","timestamp":"2014-04-18T23:20:31Z","content_type":null,"content_length":"38011","record_id":"<urn:uuid:2a8e5cb6-9937-4e58-bd58-71a4a7ab4645>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Write a word problem that can be solved by working backward. Include at least two steps in the problem. Then, show the answer and how the problem can be solved by working backward • one year ago • one year ago Best Response You've already chosen the best response. I can start off by giving you a problem. Ok: Gina wants to go to a concert. If Gina goes, then Ally will go. If Ally goes, then Robert will go. If Robert goes, then Marco will go. If only 2 people go to the concert, who are they? Best Response You've already chosen the best response. nice thats really good thanks. i am going to cloze this pg now heres a medal Best Response You've already chosen the best response. Thanks! Have fun doing math lol! Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50913989e4b0ad620537bf76","timestamp":"2014-04-16T10:19:47Z","content_type":null,"content_length":"32766","record_id":"<urn:uuid:87c104a2-b033-4ef7-b94e-2bc6e77a3b5e>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
Instructor Class Description Time Schedule: Joseph P. Benitez B CUSP 124 Bothell Campus Calculus I: Origins and Early Developments Develops modern calculus by investigating the questions, problems, and ideas that motivated its discovery and practice. Studies the real number system and functions defined on it, focusing on limits, area and tangent calculations, properties and applications of the derivative, and the notion of continuity. Emphasizes problem-solving and mathematical thinking. Prerequisite: either a minimum grade of 2.5 in B CUSP 123, sufficient score on approved mathematics assessment test, or a minimum score of 2 on either the AB or BC AP Calculus test. Offered: AWSp. Class description Spring 2014 - This is a first quarter course in calculus of functions of a single variable. It emphasizes differential calculus. It emphasizes applications and problem solving using the tools of calculus. We will use the tools of calculus to explore real world examples and problems. Each idea will be represented symbolically, numerically, graphically, and verbally. Student learning goals Students will identify limits of functions that are defined symbolically, graphically, and numerically when defined with tables of values. Students will learn the formal definition of the first derivative and find algebraic derivative using both this definition and the traditional shortcut formulas associated with derivatives . Students with solve real life applications and modeling using derivatives. Develop proficiency with finding and interpreting geometrically the first and second derivatives of the reference functions. Students will learn to apply the relationships between functional behavior and first and second derivative behaviors. Find derivatives of implicit functions. General method of instruction A typically class will consist of interactive lectures with use of examples from the text and small group work, usually involving worksheets. Regular attendance and participation is highly recommended and will be included in calculation of the final grade! Recommended preparation Success in calculus depends on a large extend on knowledge of algebra, trigonometry, and functions. Prerequisite: 2.0 or above in B CUSP 123, Functions and Modeling or equivalent, or score of 70-100 on the MPT-A assessment test. Class assignments and grading Online textbook: Calculus – Single Variable, Hughes-Hallett, Gleason, McCallum, 5th Edition. Students may purchase a registration code to get access to the online textbook from Wiley but it is NOT required. Since that registration code includes a complete electronic version of the textbook, there is no need to buy a hard copy of the text. However, if students prefer, they may purchase a hard-copy of the text. There will be 10 in-class worksheets, 4 quizzes, 3 exams and one comprehensive final exam. The course is not graded on a curve. Following is a rough grading scale: > 90% 3.5-4.0, 80-89% 2.5-3.4, 70-79% 1.5-2.4, 60-69% 0.7-1.4, < 60% 0.0 Grades will be determined using the following weighting: in-class worksheets (16%), quizzes (20%), 3 exams (40%), and comprehensive final exam (24%). The information above is intended to be helpful in choosing courses. Because the instructor may further develop his/her plans for this course, its characteristics are subject to change without notice. In most cases, the official course syllabus will be distributed on the first day of class. Last Update by Joseph P. Benitez Date: 03/25/2014
{"url":"http://www.washington.edu/students/icd/B/bcusp/124jbenite.html","timestamp":"2014-04-18T13:09:58Z","content_type":null,"content_length":"6436","record_id":"<urn:uuid:424c8413-6cbf-42bf-89a2-2a14f2860532>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
Bellman, Dynamic Programming, and Edit Distance March 22, 2009 Approaches to the exact edit distance problem Richard Bellman created the method of dynamic programming in 1953. He was honored with many awards for this and other seminal work, during his lifetime. We probably all know dynamic programming, use it often to design algorithms, and perhaps take it for granted. But, it is one of the great discoveries that changed the theory and practice of computation. One of the surprises–at least to me–is given its wide range of application, it often yields the fastest known algorithm. See our friends at Wikipedia for a nice list of examples of dynamic programming in action. I never had the honor of meeting Bellman, but he wrote a terrific short autobiographical piece that contained many interesting stories. (I cannot locate it anymore: if you can please let me know.) One story that I recall was how, when he was at The Rand Corporation, he often made extra money playing poker with his doctor friends in the evenings. He usually won, which convinced them that he was using “deep mathematics” against them–perhaps probability theory. Bellman said the truth was much simpler: during the day the doctors were stressed making life and death decisions, at night when they played poker they “let go”. During the day Bellman could just look out at the Pacific Ocean from his office and think about problems. At night he played tight simple poker, and he cleaned up. One other story from this piece made a long-lasting impression on me. When he was famous he would often be invited to places to give talks, like many scientists. However, he always insisted that he must be picked up at the airport by a driver of a white limo. I understand the driver part, I understand the limo part, but insisting on the color was a very nice touch. The Edit Distance Problem The edit distance problem (EDP) is given two strings $\displaystyle a=a_{1},a_{2},\dots,a_{n} \text{ and } b=b_{1},b_{2},\dots,b_{m}$ determine their edit distance. That is what are the fewest operations that are needed to transform string ${a}$ into ${b}$. The operations allowed are: insert a character, delete a character, and substitute one character for another. In general their costs can be arbitrary, but for us we will consider only the basic case where insert/delete have unit cost and substitute has cost two. This makes sense, since substitute is the same as one delete and one insert. This distance is sometimes called the Levenshtein distance, named after Vladimir Levenshtein, who defined it in 1965. The notation of edit distance arises in a multlude of places and contexts, I believe that the notion has been repeatedly discovered. It also has many generalizations where more complex operations are allowed. People really care about EDP. There are very good heuristic algorithms that are used, for example, in the biological community to solve EDP. One is called BLAST. It is implemented in many languages and runs on everything, there are even special purpose hardware devices that run BLAST. Clearly, there is need for solutions to EDP. Unlike theory algorithms, BLAST has no provable error bounds; unlike theory algorithms, BLAST seems, in practice, to be good enough for the scientists who use it. However, I have no doubt that they would prefer to get the optimal answer–however, the cost of getting the optimal answer cannot be too great. Upper and Lower Bounds The dynamic programming algorithm for EDP has been discovered and rediscovered many times. I believe that Robert Wagner and Mike Fischer did it first, but I will not try to be the perfect historian. Their algorithm runs in time ${O(n^{2})}$ and uses the same space: it is convenient to assume that ${n=m}$ for the rest of this post. With a little bit of work, it is easy to improve the space bound to ${O(n).}$ There are papers that prove a lower bound of ${\Omega(n^{2})}$ for EDP. These papers assume a restricted model of computation: only input symbols can be compared for equality. Like any restricted lower bound they give you an idea of what not to do: if you wish to beat the the lower bound, then you must do more that compare symbols for equality. Use the value of the symbols, use bit operations, use randomness, do something other than just compare symbols. Actually, as I recall, there was a bit of an inversion here: there was a better upper bound paper that gave an algorithm for EDP that took time ${O(n^{2}/\log n)}$, before the lower bound papers were published. The method used to prove the sub-quadratic algorithm for EDP is based on a general idea called The Four Russian Method. I never liked the name, I do like the method. As Dan Gusfield points out, the four were not even all Russian: perhaps a consequence of the cold war was that information did not always flow easily between east and west. The Four Russian Method was first, I believe, used to compute boolean matrix products faster than cubic time. The method has been used since then to solve many other problems. If you do not know the method, take a look at the following link. Still here. Okay, here is an overview of the method: it is based on trading space for time. A typical Four Russian Method algorithm operates in two phases. In the first phase, it examines the input and cleverly pre-computes values to a large number of subproblems, and stores these values in a look-up table. In the second phase, it uses the pre-computed values from the table to make macro steps during the rest of the computation. The result is usually a reduction in time by a logarithmic factor, while the space becomes as large as the time. The method is not generally practical; however, it is a useful method to know. Bit Complexity Actually there is a logarithmic factor that is hidden in the quadratic dynamic programming algorithm for EDP, since the algorithm must use ${\log n}$ size numbers. Thus, the bit complexity is, ${O(n^ {2}\log n)}$. There is a clever way to remove the factor of ${\log n}$ that is due to Dan Lopresti: Theorem 1 There is an algorithm for EDP that uses ${O(n^{2})}$ bit operations and ${O(n)}$ space. Proof: The key idea is to use only a constant number of bits rather than ${\log n}$ bits and still run the standard dynamic program. $\Box$ I will explain this in more detail in a future post. Truth in blogging: Lopresti was once a graduate student of mine, and I worked with him on this result. Approximation and EDP Legend has it that when faced with unravelling the famous Gordian Knot, Alexander the Great simply cut the knot with his sword. We do this all the time in theory: when faced by a “knotty” problem we often change the problem. The edit distance problem is any not different: years ago the problem of exact distance was changed to approximate edit distance. The goal is to get a fast algorithm that finds the distance to within a small relative error. For example, in the STOC 2009 conference there will be a paper “Approximating Edit Distance in Near-Linear Time” by Alexandr Andoni and Krzysztof Onak. They show that there is an algorithm (AO) that in time ${n^{1+o(1)}}$ gets a relative error of $\displaystyle 2^{O(\sqrt{\log n}\log\log n)}.$ This is a huge improvement over the previous work. I will not go into any details today, see their paper for the full proofs. This is very pretty work. I have started to look at their paper and believe that it has many interesting ideas, they may even help solve other problems. My issue is that we still are no closer to solving the edit distance problem. The AO algorithm still makes too large relative error to be practical–what exactly is the factor for reasonable size ${n}$? See my other posts on the use of “ On the other hand, I am intrigued with their breakthrough. I think that it may be possible to use their new algorithm as a basis of an exact algorithm. The key idea I have in mine is to try to use their algorithm combined with some kind of “amplification trick.” Papers on approximations to the EDP use “relative” error, so let’s take a look at additive error. Suppose that ${A(x,y)}$ is an algorithm for the EDP that works for any fixed size alphabet. Say that ${A(x,y)}$ has additive error ${E(n)}$ provided, for all strings ${x,y}$ of length ${n}$ over the alphabet, $\displaystyle A(x,y) = d_{\text{edit}}(x,y) + O(E(n)).$ There is a simple amplification trick: Theorem 1 Suppose there is an algorithm ${A(x,y)}$ for the EDP that runs in time ${T(n)}$ and has additive error ${E(n)}$. Then, there is an algorithm ${A^{*}(x,y)}$ for the EDP that runs in time ${T(4n)}$ and has an additive error of ${E(4n)/2}$. I assume that this is well known, but in case not, here is a sketch of the proof. Suppose that ${x}$ and ${y}$ are strings of length ${n}$ over the alphabet ${\{0,1\}}$. Consider the following two $\displaystyle x^{*} = x\#^{m}x \text { and } y^{*} = y\#^{m}y.$ The claim is that provided ${m = 2n}$, $\displaystyle d_{\text{edit}}(x^{*},y^{*}) = 2d_{\text{edit}}(x,y).$ It should be clear that $\displaystyle d_{\text{edit}}(x^{*},y^{*}) \le 2d_{\text{edit}}(x,y)$ since one way to change ${x^{*}}$ to ${y^{*}}$ is to edit the strings ${x}$ and ${y}$ and leave the ${\#}$‘s alone. The cost of this is twice the cost of editing ${x}$ to ${y}$. The opposite inequality follows since any matching of the first ${x}$ with the second ${y}$ (or the other way around: second ${x}$ with first ${y}$) must delete all the ${\#}$‘s and this will cost more than ${2d_{\text{edit}}(x,y).}$ This claim follows since ${x,y}$ are over an alphabet that does not include ${\#}$. Thus, $\displaystyle A(x^{*},y^{*}) = d_{\text{edit}}(x^{*},y^{*}) + E(4n)$ and therefore, $\displaystyle A(x^{*},y^{*}) = 2d_{\text{edit}}(x,y) + E(4n).$ Divide by ${2}$ and we see that ${A(x^{*},y^{*})/2}$ is within additive error ${E(4n)/2}$ of the edit distance of ${x}$ and ${y}$. I conjecture that there is a general construction that can replace ${2}$ by an arbitrary ${k=k(n)}$. This would yield an algorithm that runs in time ${T(cnk)}$ and has additive error of ${E(cnk)/k}$ for some absolute constant ${c>0}$. Note, we can repeat the above construction; however, the obvious application would require a ${\log k}$ size alphabet. If such an amplification were possible, then we could do the following, for example: if there is an algorithm that runs in near linear time, i.e. time ${n^{o(1)}}$, for EDP and has additive error of ${n^{o(1)}}$, then there is a near linear time exact algorithm for EDP. This follows by just setting ${k=n^{o(1)}}$ and doing the calculation. I have started thinking about the AO algorithm and how to combine it with different amplification methods. Perhaps this could lead, finally, to a fast exact algorithm for EDP. Perhaps. In any event such amplification methods seem potentially useful. Open Problems Thus, the main open problem is clear: find an EDP algorithm that runs in ${O(n)}$ time. Or one that runs in ${O(n \log n)}$ time, or near linear time. Or one that runs in ${\dots}$ I see no reason, even after decades of failure, to believe that there could not be an exact algorithm that runs fast and solves the EDP. This is–in my opinion–an important and exciting open problem. Let’s finally solve the problem. There are other, perhaps, more approachable open problems. In real applications the inputs are not worst case, nor are they random. For instance, the BLAST program assumes implicitly a kind of randomness to the inputs. Are there reasonable models of the kinds of inputs that are actually encountered? And for such a model can we get provable fast algorithms for EDP? 1. March 22, 2009 8:08 pm There is a variant of the DP that runs in O(ED^2). People in practice often claim that this is enough for them, since they never care about very large distances (if two strings are very far away, you don’t really care how far they are). The real issue in practice, as far as I understand, is nearest neighbor under the edit distance or variants (biologists have some weighted versions). □ March 22, 2009 8:36 pm Agree about not caring if far apart… Still would be nice to beat $O(ED^2)$–don’t you agree…rjlipton 2. March 22, 2009 10:47 pm It is clearly a very nice theoretical question. But I would take this further: what can the TCS community say about dynamic programming? So far we have not said much. Can we build tools, perhaps of the “NP-hardness” flavor, that would allow us to give definite answers about the complexity of problems that are solved via dynamic programs? We should be able to say either “here’s an O(n^1.7) algorithm for edit distance,” or “under the X-conjecture, edit distance requires Omega(n^2 / polylog n) time to compute.” 3. March 23, 2009 4:02 am There is the general framework of Woeginger for DPs for NP-hard problems. It’s a bit weaker than what you’re asking since it only specifies when a PTAS can be achieved, but it’s interesting 4. March 23, 2009 9:59 am I would be really surprised if someone were to come up with an algorithm which fills the dynamic prog. table in less than O(ED^2) time. (ED^2 time is actually achieved by filling the DP table The best algorithm previous to AO result was a O(n^1/3)-approximation which still used the classical DP. And I believe (maybe rather naively) that this cannot be improved unless one breaks apart from the DP approach. So my argument is that, what we are really lacking is a good estimator for EDP which breaks apart from the DP paradigm. This could be a better recursion (than in Ostrovsky-Rabani), a “physical model” which models repulsion/compulsion between similar/dissimilar substrings, or what not. Of course, there is also this problem of computing the estimator in less than quadratic time. Even a simple looking recursion of OR requires extremely deep machinery to compute in near-linear 5. March 23, 2009 10:13 am Btw, it should probably be mentioned that by O(ED^2) in the above posts, O(ED^2 + n) is meant. The algorithm is never sub-linear. 6. March 24, 2009 12:55 am I can’t believe I’m only just now finding your blog. Your posts are very interesting, and well written — I love the focus on people as well. Reading your previous posts will keep me occupied for quite some time :) 7. March 25, 2009 12:26 am Dear Dr. Lipton: I read Bellman’s biography (Eye of the hurricane) in 1986 and found it to be quite engaging. I have not seen the book in any of the many libraries I have hence visited and it is a shame that very few people seem to have read this book. (Many of my mathematician colleagues have not even heard of Bellman, let alone read this book!) Now the book seems to be out of print and I could only find one used book seller willing to part with it for a paltry $ 315. Good luck finding a buyer! Some of the interesting things I recall from this book are – 1) His account of the working environment at Rand corporation where he spent some years. Ford-Fulkerson were there at that time. His collaboration with Ford (Bellman-Ford algorithm) must have happened there. 2) His account of Princeton math department as a grad student and his mentor Solomon Lefchetz. (When they went out for dinner, Bellman’s job was to cut the steak for Lefschetz who had lost both arms in an accident and hence started a second career from an engineer to a mathematician!) 3) His take on the Erdos-Selberg controversy. He thinks Selberg should have been given more credit as the originator of the main idea of the proof. Thanks for your excellent postings with historical flavor. They will certainly enrich the readers. 8. March 27, 2009 8:16 pm As ravi points out above, Eye of the Hurricane is worth reading and likely includes any such material as a subset. It’s unfortunate that we don’t have more great biographies of computer scientists; the physicists and mathematicians of this century have left a rich legacy (Bethe’s autobiography, Hahn’s, Gamow’s, Kay & Bird’s biography of Oppenheimer, great pop biographies of Ramanujan and Erdos, Hardy’s A Mathematician’s Apology, biographies of Fermi by Serge and his wife both, the glut of Einstein biographies, threatening to collapse into their own gravitational Bellman’s Dynamic Programming (reprinted by Dover) mentions “a fairly complete bibliography plus some complementing remarks”: The Theory of Dynamic Programming Bull. Amer. Math. Soc. 60/1954 9. March 30, 2009 9:13 am Just a technical comment: As far as I understood, the major restriction used in the paper where the lower bound is shown is that the strings are over an alphabet with at least n+m symbols, where n, respectively m, is the length of the first string, respectively of the second string. This is exactly why the algorithm using the Russians’ trick works better, although it is based also on equality tests between symbols: it is essential to have a finite (small) alphabet in order to pre-compute the distance between “all” the small strings. 10. June 30, 2010 5:49 pm What if, in a general theory of everything kind of way, some singular human conscious had used simple Yes(I should feel guilty) or No(I do not feel guilty) answers to answer every moral question that he could remember that had ever occurred in his life. In this way he could have become completely pure. He would have truly asked himself all the questions that could be asked of him. He would have emotionally evolved back to when he was a child. What if he then assigned an ‘It doesn’t matter as life is only made to make mistakes’ label on every decision that he had analysed. This would not make him God or the devil, but just very still and very exhausted. Anybody can do this but just for the purpose of this experiment lets say I have done this. Which I have. There are no fears in me and if I died today I could deal with that because who can know me better than myself? Neither God or the Devil. I am consciously judging myself on ‘their’ level. To make this work, despite my many faults, take ME as the ONLY universal constant. In this sense I have killed God and the Devil external to myself.The only place that they could exist is if I chose to believe they are internal. This is obviously more a matter for a shrink more than a mathematician, but that would only be the case depending on what you believed the base operating system of the universe to be. Math / Physics / morals or some other concept. As long I agreed to do NOTHING, to never move or think a thought, humanity would have something to peg all understanding on. Each person could send a moral choice and I would simply send back only one philosophy. ‘ LIFE IS ONLY FOR MAKING MISTAKES’. People, for the purpose of this experiment could disbelief their belief system knowing they could go back to it at any time. It would give them an opportunity to unburden themselves to someone pure. A new Pandora’s box. Once everyone had finished I would simply format my drive and always leave adequate space for each individual to send any more thoughts that they thought were impure. People would then eventually end up with clear minds and have to be judged only on their physical efforts. Either good or bad. It would get rid of a lot of maybes which would speed lives along.. If we then assume that there are a finite(or at some point in the future, given a lot of hard work, there will be a finite amount) amount of topics that can be conceived of then we can realise that there will come to a time when we, as a people, will NOT have to work again. Once we reach that point we will only have the option of doing the things we love or doing the things we hate as society will be completely catered for in every possible scenario. People will find their true path in life which will make them infinitely more happy, forever. In this system there is no point in accounting for time in any context. If we consider this to be The Grand Unified Theory then we can set the parameters as we please. This will be our common goals for humanity. As before everyone would have their say. This could be a computer database that was completely updated in any given moment when a unique individual thought of a new idea / direction that may or may not have been thought before. All that would be required is that every person on the planet have a mobile phone or access to the web and a self organising weighting algorithm biased on an initial set of parameters that someone has set to push the operation in a random direction. As I’m speaking first I say we concentrate on GRAINE. Genetics – Robotics – Artificial Intelligence – Nanotechnology and Zero Point Energy. I have chosen these as I think the subjects will cross breed information(word of mouth first) at the present day optimum rate to get us to our finishing point, complete and finished mastery of all possible information. Surely mastery of information in this way will lead us to the bottom of a fractal??? What if one end of the universes fractal was me??? What could we start to build with that concept??? As parameters, we can assume that we live only in 3 dimensions. We can place this box around The Milky Way galaxy as this gives us plenty of scope for all kinds of discoveries. In this new system we can make time obsolete as it only making us contemplate our things that cannot be solved because up to now, no one has been willing to stand around for long enough. It has been holding us back. All watches should be banned so that we find a natural rhythm with each other, those in our immediate area and then further afield. An old clock watch in this system is should only be used when you think of an event that you need to go to. It is a compass, a modern day direction of thought. A digital clock can be adapted to let you know where you are in relation to this new milky way boxed system.(x,y,z). With these two types of clocks used in combination we can safely start taking small steps around the box by using the digital to see exactly where you are and then using the old watch to direct you as your thoughts come to you. We can even build as many assign atomic clocks as are needed to individually track stars. Each and every star in the Milky Way galaxy. I supposed a good idea would be to assume that I was inside the centre of the super-massive black hole at the centre of the galaxy. That would stop us from being so Earth centric. We could even go as far as to say that we are each an individual star and that we project all our energies onto the super-massive black hole at the centre of the galaxy. You can assume that I have stabilized the black hole into a non rotating perfect cube. 6 outputs /f aces in which we all see ‘the universe and earth, friends’ etc. This acts like a block hole mirror finish. Once you look it is very hard to look away from. The 6 face cube should make the maths easier to run as you could construct the inner of the black hole with solid beams of condensed light(1 unit long) arranged into right angled triangles with set squares in for strength. Some people would naturally say that if the universe is essentially unknowable as the more things we ‘discover’ the more things there are to combine with and therefore talk about. This is extremely fractal in both directions. There can be no true answers because there is no grounding point. Nothing for the people who are interested, to start building there own personal concepts, theories and designs on. Is it possible that with just one man becoming conscious of a highly improbable possibility that all of universes fractals might collapse into one wave function that would answer all of humanities questions in a day? Could this be possible? Answers to why we are here? What the point of life really is et al? Is it possible that the insignificant possibility could become an escalating one that would at some point reach 100%??? Could it be at that point that the math would figure itself out so naturally that we would barely need to think about it. We would instantly understand Quantum theory and all. Can anybody run the numbers on that probability? Recent Comments Pip on Triads and Dyads Hendrik Jan Hoogeboo… on Triads and Dyads Mike R on The More Variables, the B… maybe wrong on The More Variables, the B… Jon Awbrey on The More Variables, the B… Henry Yuen on The More Variables, the B… The More Variables,… on Fast Matrix Products and Other… The More Variables,… on Progress On The Jacobian … The More Variables,… on Crypto Aspects of The Jacobian… The More Variables,… on An Amazing Paper The More Variables,… on Mathematical Embarrassments The More Variables,… on On Mathematical Diseases The More Variables,… on Who Gets The Credit—Not… John Sidles on Multiple-Credit Tests KWRegan on Multiple-Credit Tests
{"url":"http://rjlipton.wordpress.com/2009/03/22/bellman-dynamic-programming-and-edit-distance/","timestamp":"2014-04-20T03:11:35Z","content_type":null,"content_length":"120316","record_id":"<urn:uuid:13cfeca8-0f94-459c-99b6-93b179949e7d>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Irregular arrays David M. Cooke cookedm at physics.mcmaster.ca Mon Sep 4 19:30:45 CDT 2006 On Mon, 4 Sep 2006 17:35:04 -0600 "Charles R Harris" <charlesr.harris at gmail.com> wrote: > On 9/4/06, rw679aq02 at sneakemail.com <rw679aq02 at sneakemail.com> wrote: > > > > >> The question is whether numpy has such support; if not, is it planned. > > > > > No, and no. > > > > Thank you for answering, and I am sorry to hear that. > > > > I will be dropping my membership on the scipy-numpy email list shortly. > > Many systems handle rectangular arrays quite well already, and are more > > fully developed. > > > > It is a common fallacy rectangles are the only shapes one ever needs. > > Physical geometry is only rarely rectangular, and solution of actual > > physical problems, and even number-theoretical problem, is a far larger > > problem domain. > Thanks for blessing us with your exalted presence and superior intellect. > You will be missed. Well now, that's just snarky. rw679aq02 (if that is indeed your real name!), the reason that numpy will not support irregular "arrays" anytime soon comes down to multiple reasons: 1) It would require a complete rework; better to make a new package. Irregular arrays would require an entirely different approach than regular 2) While most things are not rectangular, the equations that describe them are, for the most part. Finite-element methods, for instance, use a triangulation of the physical object, and the equations can then be cast as very large set of array equations. 3) I would guess that problems that could be described by irregular arrays could be better recast with a different data structure. There's a saying that you can write Fortran in any language; however, that doesn't mean you should! 4) No one else has asked for them, and the developers don't need them (this is how open source works: scratching one's own itch) 5) If, instead, you're looking for efficient memory representations of of sparse matrices (most elements being 0), then there are various ways to do this. I'm not familiar with them, but scipy has packages to handle sparse matrices. A lot of work happens in this field (those finite-element methods tend to make sparse matrices). Note that you could simulate an irregular array (with a maximum size in the dimensions) using the masked arrays provided by NumPy. |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-September/022900.html","timestamp":"2014-04-18T07:18:10Z","content_type":null,"content_length":"5634","record_id":"<urn:uuid:917d64c1-30ca-459a-9609-f2076d8c6be6>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
Area - Log Inequality August 6th 2010, 04:09 AM #1 Jul 2009 Area - Log Inequality Using f(x) = lnx, and f(n-1) < Sn n-1 f(x) dx < f(n) Show that for n>1 (n-1)! < n^n (n-1)! / (e(n-1)^(n-1)) < n! Have tried to approach this using graphs, but could not get middle thing. Done this by adding the rectangle below f(x) and rectangles above f(x) with lengths of 1 unit square, starting with x =1 To restate the problem: Using $f(x)=\ln(x)$, and show that for $n>1$, we have Is that correct? If so, I would compute the integral and combine all the terms into one logarithm. That should give you some direction. Using f(x) = lnx, and f(n-1) < Sn n-1 f(x) dx < f(n) Show that for n>1 (n-1)! < n^n (n-1)! / (e(n-1)^(n-1)) < n! Have tried to approach this using graphs, but could not get middle thing. Done this by adding the rectangle below f(x) and rectangles above f(x) with lengths of 1 unit square, starting with x =1 To start, I would take the logs of your three terms in the desired inequalities. Now what's $\ln[(n-1)!]$? To both above posters thxs for replying. I have tried previously adding areas above the f(x) and below the f(x) but however, i do not get the middle term. Instead i get n^n/e^(n-1) when Are you sure you're simplifying correctly? For the integral I mentioned, I get Can you show your line-by-line simplification of the integral? Yea sorry my bad. Reading it wrongly, through the limits were n - 1 So have you gotten the needed result? August 6th 2010, 05:46 AM #2 August 6th 2010, 05:56 AM #3 August 6th 2010, 10:02 PM #4 Jul 2009 August 7th 2010, 03:17 AM #5 August 10th 2010, 08:11 PM #6 Jul 2009 August 11th 2010, 01:53 AM #7
{"url":"http://mathhelpforum.com/calculus/152915-area-log-inequality.html","timestamp":"2014-04-20T18:48:10Z","content_type":null,"content_length":"50793","record_id":"<urn:uuid:525020d5-3db8-411b-9f65-36ef8e2c434f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
What is a twisted D-Module intuitively? up vote 8 down vote favorite When I think about $\mathcal{D}$-Modules, I find it very often useful to envison them as vectorbundles endowed with a rule that decides whether a given section is flat. Or alternatively a notion of parallel transport. Now my question is, what are good ways to think about modules over sheavers of twisted differential operators? ag.algebraic-geometry d-modules intuition add comment 3 Answers active oldest votes One way to think of twisted $D$-modules that I like is to view them as monodromic $D$-modules (see Beilinson, Bernstein A Proof of Jantzen Conjectures section 2.5, available as number 49 on Bernstein's web page ). Let $T$ be a torus, and let $\pi: \tilde{X} \to X$ be a $T$-torsor. The sheaf of algebras $\tilde{D} = (\pi_* D_{\tilde{X}})^T$ has center $U(\mathfrak{t}) = S (\mathfrak{t})$, and its category of modules is the category of weakly $T$-equivaraint $D$-modules on $\tilde{X}$. For any $\chi \in \mathfrak{t}^\vee$, there is a maximal ideal $m_\chi \subset S(\mathfrak{t})$, and the algebra of $\chi$-twisted differential operators is $\tilde{D}/m_\chi \tilde{D}$. up vote 8 down vote If you want to twist by a fractional power $c$ of a line bundle $L$, then you can let $T$ be the usual one dimensional split torus, $\tilde{X}$ be the total space of $L$ with the zero accepted section removed, and $\chi = c$. For ordinary differential operators, set $\chi = 0$. Intuitively, I think of a ($\mathcal{O}$-coherent) twisted $D$-module as a vector bundle on the total space of the torsor such that flat sections obey a fixed monodromy when parallel transported in the torus direction. Thanks Scott. It was hard to decide which answer to accept. I like the reduction perspective as well as the curvature picture. I decided for accept your answer, because you added a reference :) – Jan Weidner Sep 16 '10 at 12:20 add comment If you're taking twisted differential operators in a complex power of a line bundle, $L^c$, then you should think of them as vector bundles/sheaves on the total space of L minus its zero section, endowed with a connection that behaves specially along the fibers of the bundle projection map. Special how? The action of $\mathbb C^*$ by fiber rotation has a differential, which is a vector field on the total space that looks like $t\frac{d}{dt}$ for any trivialization, where $t$ up vote 8 is the coordinate on the fiber. One should take a connection where differentiating along this vector field multiplies by c. down vote This follows immediately from the fact that the ring of twisted differential operators in $L^c$ are exactly $\mathbb{C}^*$-invariant differential operators on the total space minus its zero section modulo $t\frac{d}{dt}-c$. Thanks for your answer. This looks like TDOs are hamiltonian reductions of the differentialoperators on L-0, right? – Jan Weidner Sep 8 '10 at 6:39 1 @Jan Weidner: Generally speaking, differential operators on X/G are obtained by Hamiltonian reduction from differential operators on X (perhaps one should say quantum Hamiltonian reduction, because the ring of diff. operators is non-commutative). In order to get TDOs, we take the Hamiltonian reduction for non-zero value of the moment map. So the answer is `yes'. – t3suji Sep 8 '10 at 20:21 1 @Jan Weidner- That would be a concise way of summing up both my answer and Scott's. – Ben Webster♦ Sep 8 '10 at 21:50 Thanks, t3suji and Ben! – Jan Weidner Sep 16 '10 at 12:12 add comment Here's another perspective to complement the homogeneous approach given in Ben's and Scott's answers. One can look at twisted $D$-modules as connections with fixed scalar curvature. This is particularly powerful if you think complex-analytically: you can describe all possible twistings as follows: Suppose $M$ is a complex manifold. In general, twistings of the sheaf of differential operators are parametrized by the hypercohomology of the truncated de Rham complex $$\Omega^1_{hol}\to\ Omega^2_{hol}\to\dots.$$ (I use the lower index `hol' to distinguish the sheaf of holomorphic sections from the sheaf of $C^\infty$-sections.) If we use Dolbeault's complex to compute the hypercohomology, you see that twistings are represented by a closed 2-form $\omega$ whose $(0,2)$-part vanishes. $\omega$ matters only up to a differential of a $(1,0)$-form. up vote 6 Let $\omega$ be such a closed 2-form. We can then consider vector bundles $F$ (or, more generally, quasicoherent sheaves) on $M$ equipped with a connection $$\nabla:F\to F\otimes\Omega$$ down vote whose curvature is $\omega$. More precisely, suppose $F$ is a holomorphic vector bundle. The sheaf of $C^\infty$-sections of $F$ carries an action of $$\overline\partial:F\to F\otimes\Omega ^{0,1}.$$ An action of the TDO corresponding to $\omega$ on $F$ is the same as extension of $\overline\partial$ to $\nabla$ with prescribed curvature. Remark. If one works algebraically (for instance, over fields other than ${\mathbb C}$), only some TDO's can be viewed in similar way; namely those whose class belongs to the image of the space $H^0(\Omega^2)$. Thanks, I like your perspective! – Jan Weidner Sep 16 '10 at 12:15 add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry d-modules intuition or ask your own question.
{"url":"http://mathoverflow.net/questions/37926/what-is-a-twisted-d-module-intuitively/37966","timestamp":"2014-04-18T20:56:23Z","content_type":null,"content_length":"66769","record_id":"<urn:uuid:a91fd3b2-8868-423d-9b97-e21105d5ea9c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: June 2000 [00358] [Date Index] [Thread Index] [Author Index] Re: RE: Re: User problem • To: mathgroup at smc.vnet.net • Subject: [mg24079] Re: [mg24023] RE: [mg23997] Re: [mg23971] User problem • From: john.tanner at baesystems.com • Date: Fri, 23 Jun 2000 02:26:46 -0400 (EDT) • Sender: owner-wri-mathgroup at wolfram.com The "Magic Number" coincides with the point where Table starts to produce PackedArrays: In[1]:= Developer`PackedArrayQ[Table[i, {i, 249}]] Out[1]:= False In[2]:= Developer`PackedArrayQ[Table[i, {i, 250}]] Out[2]:= True This break point of 250 seems to be for the total number of index entries in the Table, thus a multi-dimensional Table has a break point at (e.g.) Table[{i,j}, {i, 5},{j, 50}]. I cannot find a reference to this in the documentation, but I appreciate there should be such a break point even if I would like some control over it! This sometimes has implications for expressions which are tested at a small number of points and then used with a larger number: it is wise to check with different Table sizes if you suspect the behaviour would change. PackedArrays are extremely useful, and their near-seamless incorporation into Mathematica is a real joy, but they do behave differently in some circumstances even in "legal" code. It is interesting, but not surprising given the output is in a different form, to note this difference in "illegal" code. hwolf (hwolf at debis.com@SMTP at MSMR02) writes: >Notwithstanding the original code being illegal, 250 appears as a >magic number. Compare: >In[2]:= Table[i = i + 1, {i, 249}] >In[3]:= Table[i = i + 1, {i, 250}] >-- Hartmut >> -----Original Message----- >> From: BobHanlon at aol.com [SMTP:BobHanlon at aol.com] To: mathgroup at smc.vnet.net >To: mathgroup at smc.vnet.net >> Sent: Monday, June 19, 2000 7:46 AM >> To: mathgroup at smc.vnet.net >> Subject: [mg23997] Re: [mg23971] User problem >> In a message dated 6/18/2000 3:39:33 AM, joseph468 at yahoo.com >> writes: >> >I am trying to contstruct a table of random integers >> >between 1 and 4. I'd like to vary the number of >> >elements in the table from time to time. The >> >contruction that I use in Mathematica is as follows: >> > >> >genome = Table[i = Random[Integer, {1, 4}], {i, 249}] >> > >> >The problem with this is as follows: the above >> >generates tables of varying length up to and including >> >249 elements. However, when I try to generate any >> >table containing more than 249 elements, I get the >> >following error message on my screen: >> > >> >This program has performed an illegal operation and >> >will be shut down. If the problem persists please >> >contact the vendor. >> > >> >I have version 4 of Mathematica, and am running it on >> >a Windows98 platform (Dell NoteBook) >> > >> genome = Table[Random[Integer, {1, 4}], {249}]; >> Bob Hanlon
{"url":"http://forums.wolfram.com/mathgroup/archive/2000/Jun/msg00358.html","timestamp":"2014-04-17T06:51:55Z","content_type":null,"content_length":"37083","record_id":"<urn:uuid:33ede3a0-3397-4ff1-98dc-51949bfece94>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
Loop Exercises in Programming in Objective-C In addition to my traditional posts, I want to use this website as a personal journal recording my experiences learning to code. This is the first of what will be many posts relating to my learning. The main purpose of these posts are for my own benefit, processing my own thoughts by writing them down and keeping them for posterity. If you skip it, I won’t be sad. I’m continuing to march towards my goal of learning to program iPhone apps. Rather than stumbling blindly forward, as I have in my past attempts, I am now aided by the help of two friends and iOS developers: Will, a coder at a New York based startup and Massimo of iRealB. I’m finding that meeting with people, in person, and discussing code is helping me immensely with my motivation and Both of my friends recommended Programming in Objective-C by Stephen Kochan, a book that I hadn’t explored before. Programming in Objective-C is different from most of the other programming books I’ve attempted thus far, as Stephen doesn’t assume the reader is already familiar with another programming language. This had always been my main struggle: finding a programming book that doesn’t assume any prior knowledge. I do find that my experiences with Codecademy and halfway progress through the Rails Tutorial have greatly enhanced my understanding of the Objective-C language, but Programming in Objective-C is the first book where I just get it. Whether this is a result of experience or simply a different presentation of the material, I can’t say for sure. My learning process so far has been this: meet with Massimo and look at the topic of the next chapter in the book. Then, we’ll code together (mostly, he’ll tell me what to code) writing made-up examples within that particular chapter topic. For the chapter on classes, we made a few basic classes. The chapter on expressions, we both play around with expressions in Objective-C. Then, I’d use the rest of the week to actually read the chapter on my own, making sure I understand everything before moving forward, and performing the exercises in at the end of the chapter on my own. I’ve just completed chapter 5: “Programming Loops,” and stumbled upon my first example that really made me struggle. The factorial of an integer n, written n!, is the product of the consecutive integers 1 through n. For example, 5 factorial is calculated as follows: 5! = 5 x 4 x 3 x 2 x 1 = 120 Write a program to generate and print a table of the first 10 factorials. My brain’s first response, as reading this questions, was to go completely blank. I knew the answer would involve loops (not only was that the topic of this chapter, but that part is already obvious to me). I just had to figure out how to put the loop together. I knew that I would need to utilize both subtraction and multiplication, as I needed to take a number i, subtract 1, multiply that by the original number, subtract 1 again, multiply that, and so on until i became 1. The initial challenge, when tackling these logic problems, is knowing how many variables to declare and what I’ll do with each. I’d need an integer for my loop (I’m a fan of using i, although the book usually uses n) as well as at least one more integer variable to store the sum of the multiplication as my program going through the loop. The question asked for a table with the first 10 factorials, which I interpreted as a table with two columns, the first listing the numbers from 1 to 10 and the second listing the corresponding Since I’d need to calculate factorials for each of the integers between 1 and 10, I knew I’d need a loop that would calculate each integer between 1 and 10, one by one. To do this, I’d need a loop. Then I’d need another loop inside that loop to calculate the factorial for each integer. I started easy: for (int i = 1; i <= 10; i++) Now I needed to figure out what went inside this loop. I knew that i would be the number that I would need to work with, so I initially used the integer of i within the loop, creating a process where I subtracted 1 and then *= that number to i. But a bit of critical thinking, and an unsuccessful build and run later, I realized that messing with i messed with the loop. Since the loop is performing it’s task and handling the i++ feature, messing with i can result in exiting the loop early or other unexpected results. I created a new integer, that I called temp (horrible naming convention, but I had no idea what to call it yet because I was just testing out ideas) that I could assign the value of i. I’d need to assign the value of i to my integer temp, subtract one, multiply that number by the original temp value, save that new value, and continue. I just started writing equations, testing ideas out. I wrote a for loop inside the above for loop, added an NSLog command to print out numbers as my program looped through the code, but for some reason the loop only processed my number once, returned one value, then exited. Much of my confusion stemmed from the loop within a loop, so I decided to ditch the 10 to 1 countdown loop and instead isolate the factorial number loop. Once I got that to work, I’d insert it into my previous loop. I decided I wanted to test my new loop by calculating the factorial number of 5, since that’s smack dab in the middle of 1 and 10. I assigned 5 to an int variable outside the loop, then used that variable in a loop. int factorialNumber = 1; int number = 5; for (int i = number; i > 0; i--) This was my first time working with decreasing loops (i-- rather than i++) but since I’d be subtracting 1 from the number until it equaled 1 (multiplying the number by 0 would obviously mess things At the end of this loop, I knew that I’d need the factorial number of whatever was assigned to number. My original plan was to subtract 1 from my int number in the first line of the loop, but I realized I could have the loop do that for me. All I’d need in my loop was factorialNumber *= i (by this point, I had renamed temp to factorialNumber). After a bit of condensing, here’s what I had (and changing my counter integer to j, since I wanted to use i in my initial loop): int factorialNumber = 1; for (int j = 5; j > 0; j--) { factorialNumber *= j; NSLog(@"Factorial Number is: %i",factorialNumber); This returns the factorial number for 5, and I can change the number 5 in the loop to anything I want. Next, I needed to figure out how to put this loop into my other loop. I realized that I would be calculating the factorial for each number, from 1 – 10, which is what the value of i is going to be in the first loop. By subsituting “5″ in the second loop with i, I’d be able to process each number and get the results I want. After playing around a bit, getting things just right, here was my final code: for (int i = 1; i <= 10; i++) { int factorialNumber = 1; for (int j = i; j > 0; j--) { factorialNumber *= j; NSLog(@"i: %2i, temp: %i", i, factorialNumber); The process of arriving at this final code took a lot of thinking and a few walks around the block. But I ended up with the code I needed, doing exactly what I wanted it to do. There were a few times that I wanted to just give up and ask for help, but I stuck it out and ended up getting the answer. I can’t wait for problems like these to become second nature so I can start tackling more complex exercises. Related Posts:
{"url":"http://www.iamdann.com/2012/09/04/loop-exercises-in-programming-in-objective-c","timestamp":"2014-04-17T04:03:35Z","content_type":null,"content_length":"49056","record_id":"<urn:uuid:01696b5d-b2ef-4c44-b976-724a99b35105>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
Functions Examples Page 1 What equation describes the connection between x and y in the function {(1, 3), (5, 7)}? y = x + 2. A relation is a function if each thing in the domain gets paired with only one thing from the range. Another way to think of it is that each x value gets matched with only one y value. You can only bring one partner to this square dance.
{"url":"http://www.shmoop.com/functions/functions-examples.html","timestamp":"2014-04-16T08:57:33Z","content_type":null,"content_length":"35954","record_id":"<urn:uuid:9b4cd6a3-77b6-4dba-8e58-4c49b39ee598>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - User Profile for: mecej_@_OSPAM.operamail.com Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. User Profile: mecej_@_OSPAM.operamail.com User Profile for: mecej_@_OSPAM.operamail.com UserID: 841125 Name: mecej4 Registered: 3/6/12 Total Posts: 3 Show all user messages
{"url":"http://mathforum.org/kb/profile.jspa?userID=841125","timestamp":"2014-04-19T20:25:09Z","content_type":null,"content_length":"10806","record_id":"<urn:uuid:2d15e7f3-bde0-438f-8316-d92e58e0d8fd>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
Check them out. Here are thirty homoskedastic ones: > homo.wiener for (j in 1:30) { for (i in 2:length(homo.wiener)) { homo.wiener for (j in 1:30) { plot( homo.wiener, type = "l", col = rgb(.1,.... Random dive MH A new Metropolis-Hastings algorithm that I would call “universal” was posted by Somak Dutta yesterday on arXiv. Multiplicative random walk Metropolis-Hastings on the real line contains a different Metropolis-Hastings algorithm called the random dive. The proposed new value x’ given the current value x is defined by when is a random variable on . Thus, Betting on Pi I was reading over at math-blog.com about a concept called numeri ritardatari. This sounds a lot like “retarded numbers” in Italian, but apparently “retarded” here is used in the sense of “late” or “behind” and not in the short bus sense. I barely scanned the page, but I think I got the gist of it:
{"url":"http://www.r-bloggers.com/tag/randomness/","timestamp":"2014-04-19T17:16:27Z","content_type":null,"content_length":"28122","record_id":"<urn:uuid:cfcedb39-cb7f-4645-92d8-e8e1565ba0ac>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
quantum computer (thing) The idea of Quantum Computing was first proposed by Feynman based on the idea that simulating a quantum system would be a hard problem for a classical computer. The logic goes somewhat like this. Consider N quantum particles. At the next time step there are approximately N possibilities. After N time steps the number of possibilities has gone up to N^N. Thus simply storing all possible alternatives requires memory which goes up exponentially. On the other hand a quantum system could easily simulate another quantum system. This idea was put in a more concrete form by Peter Shor in 1995 with Shor's algorithm which showed that it would be possible for a quantum computer to factorize a large number using a polynomial time algorithm. As the fact that factorizing a large number is a "hard problem" for a classical computer is the basis of RSAencryption, this discovery spurned off a lot of research on Quantum Computing. A lot of research is being done both in trying to realize a quantum computer using technologies such as NMR and in developing new algorithms. More information and links may be found at http://
{"url":"http://everything2.com/user/Suvrat/writeups/quantum+computer","timestamp":"2014-04-21T05:02:29Z","content_type":null,"content_length":"20273","record_id":"<urn:uuid:f0ce3736-0b52-4fce-9d9a-ad0785457a97>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00633-ip-10-147-4-33.ec2.internal.warc.gz"}
Operations Research Applications and Algorithms 4th Edition Chapter 10.1 Solutions | Chegg.com To maximize the objective function Subject to the given constraints Now follow the steps stated below. Step 1: Convert the linear program Change equations (1), (2), (3) into equality constraints by adding a slack variable Now the system is in standard form.
{"url":"http://www.chegg.com/homework-help/operations-research-applications-and-algorithms-4th-edition-chapter-10.1-solutions-9780534380588","timestamp":"2014-04-21T13:14:32Z","content_type":null,"content_length":"52672","record_id":"<urn:uuid:bfa3c15e-6746-4d3d-9cf7-76875a24775c>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
Zentralblatt MATH Publications of (and about) Paul Erdös Zbl.No: 661.05037 Autor: Erdös, Paul Title: Problems and results in combinatorial analysis and graph theory. (In English) Source: Discrete Math. 72, No.1-3, 81-92 (1988). Review: The author has already written a great number of papers with similar titles. This valuable survey contains many problems and conjectures organized in ten sections. Among other things they deal with Ramsey numbers, with problems on extremal graph theory, with extremal problems on hypergraphs, and with miscellaneous problems. To avoid overlap the author presents here only relatively new Reviewer: J.Sedlácek Classif.: * 05C35 Extremal problems (graph theory) 05C55 Generalized Ramsey theory 05C65 Hypergraphs 05-02 Research monographs (combinatorics) 00A07 Problem books Keywords: survey; Ramsey numbers; extremal graph theory; hypergraphs © European Mathematical Society & FIZ Karlsruhe & Springer-Verlag │Books │Problems │Set Theory │Combinatorics │Extremal Probl/Ramsey Th. │ │Graph Theory │Add.Number Theory│Mult.Number Theory│Analysis │Geometry │ │Probabability│Personalia │About Paul Erdös │Publication Year│Home Page │
{"url":"http://www.emis.de/classics/Erdos/cit/66105037.htm","timestamp":"2014-04-21T00:04:49Z","content_type":null,"content_length":"3596","record_id":"<urn:uuid:ad21142e-29a7-407c-af5e-9ad7d8d4aa66>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Meta-analysis of rates? [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: Meta-analysis of rates? From Sona Saha <ssaha@uclink4.berkeley.edu> To statalist@hsphsun2.harvard.edu Subject Re: st: Meta-analysis of rates? Date Tue, 14 Sep 2004 09:40:05 -0700 Hi Tom, Thanks for the suggestions. The second way you mentioned was what I tried, but I will play with meta more. At 12:50 PM 9/14/2004 +0300, you wrote: Meta-analysis of proportions may be a bit tricky. You can do it in STATA, though, using e.g. _meta_. 1. One way is to use the Freeman-Tukey arcsin transform to stabilize variances: Let "n" be the nominator and "N" the denominator for the proportion. Then p=n/N. - if n=0 & N<50, compute p as p = 1/(4*N) - if n=N & N<50, compute p as p = (n-0.25)/N The effect size would be "pTransformed" . gen pTransformed = asin(sqrt(n/(N+1))) + asin(sqrt((n+1)/(N+1))) // which is about the same as ". gen Theta = 2* asin(sqrt(n/N))" for large n, N The SE would be: . gen SEpTransf = sqrt(1/(N+1)) . meta pTransformed SEpTransf // select the proper options for meta-analysis, or for the forrest plot Then you can transform the Summary estimate and the CI boundaries back to proportions using gen AnyProportion = (sin(AnyPTransformed / 2))^2 2. Another way (provided that proportions are not really close to 0 or 1, and that N is large) is to compute a sample size-weighted summary as pSummary = sum(n)/sum(N), where the variance would be sqrt(pSummary*(1-pSummary)/sum(N)). You can make the forrest plot manually. You assume no heterogeneity. Hope this helps. On Sep 13, 2004, at 9:47 PM, Sona Saha wrote: Is anyone familiar with any routines or commands in STATA that allow for a meta-analysis of rates? The metan, meta and metacum commands require 2 groups for analysis. Ideally, I would very much like to pool these rates with options for random or fixed effects models, evaluating heterogeneity and the ability to create Forrest plots. However, I have not found a routine that will take data for just one group per study. If you have any suggestions or advice that would be much appreciated. Thank you! Sona R. Saha, MPH Division of Public Health Biology & Epidemiology School of Public Health University of California, Berkeley 140 Warren Hall #7360 Berkeley, CA 94720 Phone: 510 642-6265 Fax: 510 643-7316 Email: ssaha@berkeley.edu * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ Sona R. Saha, MPH Division of Public Health Biology & Epidemiology School of Public Health University of California, Berkeley 140 Warren Hall #7360 Berkeley, CA 94720 Phone: 510 642-6265 Fax: 510 643-7316 Email: ssaha@berkeley.edu * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2004-09/msg00386.html","timestamp":"2014-04-18T08:12:32Z","content_type":null,"content_length":"8853","record_id":"<urn:uuid:c06e6540-5c03-47a4-8d34-1ee292dfbdb7>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
Globe Stocks Help: Company Financials My Membership Annual Ratio Definitions Get Help With... Lost Password GOLD Tracker Update Registration Stocks View or Upgrade My My Portfolio Subscription Book value (per share) - Common shareholders' equity divided by common shares outstanding at end of an indicated fiscal period. A much-used basis for Funds View Current and evaluating a company's net worth and any changes in it from year to year. Options Past Billing Bonds Information Common shares outstanding - The number of common shares outstanding. Includes shares held by subsidiaries and other inter-company holdings. Net of Alerts Update Credit Card treasury stock. Cancel Service Web Site Help Current ratio - Ratio of current assets divided by current liabilities. An indicator of short-term debt-paying ability. The higher the ratio, the more Contact Us liquid the company. System Requirements Debt/equity ratio (Total debt-to-equity ratio) - Short and long-term interest-bearing debt (including capital lease obligations) divided by Privacy Policy shareholders' equity. A capitalization ratio that indicates the extent to which a company is financing its assets with debt and its degree of financial Terms of Service leverage. A high debt-to-equity ratio, which indicates very aggressive financing or a history of large losses, results in very volatile earnings. A low Make Us Home debt-to-equity ratio indicates conservative financing and low risk, with reduced possibilities of large losses or large gains in earnings. Content Providers Dividend coverage - Earnings before extraordinary items, divided by total dividends. Shows how many times over a company can pay its dividends from Dividends per share (Dividends per common share) - Dividend paid for the past 12 months divided by the number of common shares outstanding, as reported by a company. The number of shares often is determined by a weighted average of shares outstanding over the reporting term. Earnings per share (EPS) - Earnings before extraordinary items, less preferred-share dividends, divided by average common shares outstanding during indicated fiscal year. Interest coverage - Earnings before extraordinary items plus income taxes and interest expense. Shows how many times over a company can cover its interest obligations from earnings. Interest coverage ratio - The ratio of interest coverage to annual interest expense. This ratio measures a firm's ability to pay interest. Market price - The most recent price at which a security transaction took place. Net profit margin - Net income divided by sales. The amount of each sales dollar left over after all expenses have been paid. Operating margin - Operating revenues less operating expenses, divided by operating revenues, expressed as a percentage. Shows the percentage of operating revenues a company retains after operating expenses. Price/earnings (P/E) ratio - A common stock analysis statistic in which the current price of a stock is divided by the current (or sometimes the projected) earnings per share of the issuing firm. As a rule, a relatively high price/earnings ratio is an indication that investors feel the firm's earnings are likely to grow. Price/earnings ratios vary significantly among companies, among industries and over time. Percent change in assets - Percentage change in total assets during the indicated fiscal period. Percent change in profit - Percentage change in earnings before extraordinary items during the indicated fiscal period. Percent change in revenue - Percentage change in total revenue during the indicated fiscal period. Quick ratio - A relatively severe test of a company's liquidity and its ability to meet short short-term obligations. The higher the ratio, the more liquid the company. The quick ratio is calculated by dividing current liabilities into all current assets with the exception of inventory. Return on assets (ROA) - A measure of a company's profitability, equal to a fiscal year's earnings divided by its total assets, expressed as a Return on capital (ROC, Return on average capital) - Earnings before extraordinary items, interest expense and income taxes, divided by average capital. Shows how effectively a company is employing its capital to generate profit. Return on common Equity (ROE, Return on average common equity) - Earnings before extraordinary items, less preferred-share dividends, divided by average common shareholders' equity. Shows the rate of return on the investment for the company's common shareholders, the only providers of capital who do not have a fixed return. Return on total equity (ROE) - A measure of how well a company used reinvested earnings to generate additional earnings, equal to a fiscal year's after-tax income (after preferred stock dividends but before common stock dividends) divided by total equity, expressed as a percentage. Sales per share - Main sources of revenue, net of excise taxes, trade discounts, returns and allowances, divided by common shares outstanding at end of indicated fiscal year. More Definitions Miscellaneous Definitions My Membership Lost Password Update Registration View or Upgrade My View Current and Past Billing Update Credit Card Cancel Service Web Site Help Contact Us System Requirements Privacy Policy Terms of Service Make Us Home Content Providers Get Help With... GOLD Tracker My Portfolio
{"url":"http://gold.globeinvestor.com/public/help/flat/help_financials_report_ratios.html","timestamp":"2014-04-16T13:02:58Z","content_type":null,"content_length":"38611","record_id":"<urn:uuid:72f08ed7-af7c-4413-81f8-609ac8e31995>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
S U V A T, please help. December 3rd 2006, 06:53 AM #1 Junior Member Aug 2006 S U V A T, please help. Hi guys, I have the following problem: At time = 0, a body is projected from an origin O with an initial velocity of 10ms-¹. The body moves along a straight line with a constant acceleration of -2ms-². a) Find the displacement of the body from O when t = 7. I have done this OK b) How far from O does the body come to a instantaneous rest and what is the value of t then? I have done this OK c) Find the distance travelled by the body during the time interval t = 0 to t = 7. I do not know this, help please Where are told, The the velocity is the integral of that, But the initial conditions say, So the distance is, $\int_0^7 -2t+10dt=-t^2+10t|_0^7=-49+70=21$ The displacement is, $\int_0^7 |-2t+10| dt$ This integral is more confusing to integrate. But the trick is to divide it into parts. One where the $-2t+10>0$ and the other when $-2t+10<0$ When does this change happen? Simple find, So at $t=5$ we have a change. $\int_0^7 |-2t+10|dt=\int_0^5 |-2t+10|dt+\int_5^7 |-2t+10|dt$ But because of the changing signs we can write, $\int_0^5 -2t+10dt+\int_5^7 10-2t dt$ $-t^2+10t |_0^5 +10t-t^2|_5^7$ Where are told, The the velocity is the integral of that, But the initial conditions say, So the distance is, $\int_0^7 -2t+10dt=-t^2+10t|_0^7=-49+70=21$ The displacement is, $\int_0^7 |-2t+10| dt$ This integral is more confusing to integrate. But the trick is to divide it into parts. One where the $-2t+10>0$ and the other when $-2t+10<0$ When does this change happen? Simple find, So at $t=5$ we have a change. $\int_0^7 |-2t+10|dt=\int_0^5 |-2t+10|dt+\int_5^7 |-2t+10|dt$ But because of the changing signs we can write, $\int_0^5 -2t+10dt+\int_5^7 10-2t dt$ $-t^2+10t |_0^5 +10t-t^2|_5^7$ Everything ThePerfectHacker said was correct, he's got the roles of distance and displacement reversed. $\int_0^7 -2t+10dt$ <--Gives the displacement. $\int_0^7 |-2t+10| dt$<--Gives the distance. December 3rd 2006, 07:02 AM #2 Global Moderator Nov 2005 New York City December 3rd 2006, 10:50 AM #3
{"url":"http://mathhelpforum.com/calculus/8350-s-u-v-t-please-help.html","timestamp":"2014-04-18T14:52:25Z","content_type":null,"content_length":"40214","record_id":"<urn:uuid:d1e93d95-c419-4db4-bb68-14ff56a62b77>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: how o u solve (3x^2y)^3 • one year ago • one year ago Best Response You've already chosen the best response. Best Response You've already chosen the best response. Does that help? Best Response You've already chosen the best response. What is 3 to the third? Best Response You've already chosen the best response. the power of 3 is operating on each part of the expression and can be written in expanded form as \[3^3 \times (x^2)^3 \times (y^1)^3\] to simplify the variables use the index law for power of a power which means multiply the powers \[(x^a)^b = x^{a \times b}\] Best Response You've already chosen the best response. thank u guys Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/513792b1e4b01c4790d00de2","timestamp":"2014-04-21T12:50:09Z","content_type":null,"content_length":"37185","record_id":"<urn:uuid:fcb52af2-a66e-4f51-95e9-d6f1f00019f6>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
Method of Estimating information on projection conditions by a projection machine and a device thereof Patent application title: Method of Estimating information on projection conditions by a projection machine and a device thereof Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP A method of eliminating information on the projection states of projection elements (P) by using an analysis model in which discharged projection elements (P) repeatedly collided with rotation blades (13) in a projection machine having rotating blades (13). The method comprises the step of determining initial conditions including information on the size and rotation of blades (13), discharging information on the projection elements (P), and information on projection elements with respect to the blades (13) the step of storing the initial conditions, a computing step of computing the position of each projection element (P), and its velocity and direction after collision with a blade (13) based on the initial conditions, and the step of estimating information on projection state based on computation results. A method of estimating information on the state of projection of abrasive particles projected by a projection machine that includes a plurality of blades that rotate at a high rate, the method comprising the steps of:analyzing the behavior of said abrasive particles projected by said projection machine on said blades to create an analytical model; andestimating the information on the state of the projection of the abrasive particles projected by said projection machine using said analytical model. The method of claim 1, wherein said behavior of each abrasive particle includes contact with at least one of the other abrasive particles and one of the rotating blades. The method of claim 1, wherein the information on the state of the projection of the abrasive particles is at least one of a distribution of a projection of said abrasive particles and a velocity of a projection of the abrasive particles. The method of claim 1, wherein said projection machine is a centrifugal projection machine. A method of estimating information on the state of projection of abrasive particles projected by a projection machine that includes a plurality of blades that rotate at a high rate, and an opening through which the abrasive particles are projected by said blades to an article to be processed, the method comprising the steps of:determining initial conditions that include information on a size and a rate of rotation of said blades, information on the projection of the abrasive particles, information on the abrasive particles in relation to said blades;storing said initial conditions; calculating positions of each abrasive particle, and velocities and directions of the abrasive particles after collisions with said blades, based on said initial conditions; andestimating the information on said state of the projection based on the result of said calculation. The method of claim 4, wherein the information on the state of the projection of the abrasive particles is at least one of a distribution of the projection of said abrasive particles and the velocity of a projection of the abrasive particles. The method of claim 5, wherein said step for calculating includes:expressing a velocity of each abrasive particle after a collision as a relative velocity that includes a vertical component along a Y-axis and a horizontal component along an X-axis using a transfer vector of the abrasive particle and a transfer vector of the movement of a point of collision on a surface of the corresponding blade on which the abrasive particle is impacted, wherein the vertical component of the relative velocity is expressed as a bounce using the coefficient of rebound by a determination of a coefficient, and wherein the horizontal component is expressed as a loss of speed due to a resistance by friction by a determination of a coefficient; andcalculating a velocity and a direction of the abrasive particle after a collision with the corresponding blade by summing them and calculating the transfer vector of the blade at said collision point. The method of claim 5, wherein said step for calculating includes:calculating a magnitude of a force of the contact of each abrasive particle relative to at least one of the blade and another abrasive particle; andcalculating an acceleration of the abrasive particle based on forces that act on the abrasive particle that include said force of the contact and gravity, and obtaining data on a velocity and a position of the abrasive particle after a minimal time based on the calculated acceleration. The method of claim 4, wherein said step of calculating the acceleration calculates the distance that the abrasive particle moves and the distance the corresponding blade moves in a sampling time, and executes the calculation relating to the collision of an abrasive particle that complies with sequential conditions for collisions. The method of claim 4, wherein the method further includes the step of displaying the result of said calculation. The method of claim 4, wherein said projection machine is a centrifugal projection machine. The method of claim 4, wherein the method further includes the step of adjusting a profile of the distribution of the projection of the abrasive particles to a predetermined profile by selecting values of the dimensions of each blade, the range of positions of projection on the opening from which the abrasive particles are projected, and a rate of rotation of the blade such that a variability of the frequency to which each discharged abrasive particle rebounds from the blade is a predetermined value or less. The method of claim 10, wherein the predetermined value is 0.3. 14. The method of claim 11, wherein the range of positions for the projection on the opening from which the abrasive particles are projected is degree. to The method of claim 10, wherein the values of the dimensions include a ratio of the inner diameter and the outer diameter of the blade, wherein the range of this ratio is any one of 75 to 5 to 9, and 6 to 4.1. 16. A system with a programmed computer for estimating information on the state of projection of abrasive particles projected by a projection machine that includes a plurality of blades that rotate at a high rate, said computer comprising:a) input means for providing initial conditions that include information on the size and rotation of said blades, information on the projection of the abrasive particles, information on the abrasive particles in relation to said blades and to said computer;b) calculating means for calculating a position of each abrasive particle, and velocities and directions of the abrasive particles after collisions with said blades, based on said initial conditions;c) means for estimating the information on said state of the projection based on the result of said calculation; andd) means for displaying said presumed information. The system of claim 16, wherein said calculating means calculates a magnitude of a force of the contact of each abrasive particle relative to at least one of the blade and other abrasive particles, and calculates an acceleration of the abrasive particle based on forces that act on the abrasive particle that include said force of the contact and gravity, and obtaining a velocity and a position of the abrasive particle after a minimal time based on the calculated acceleration. The system of claim 16, wherein said computer further includes a storage medium in which a program for a calculation to be executed by said calculation means is stored. The system of claim 16, wherein said calculating means expresses a velocity of each abrasive particle after a collision as a relative velocity that includes a vertical component along a Y-axis and a horizontal component along an X-axis using a transfer vector of the abrasive particle and a transfer vector of a point of collision on a surface of the corresponding blade on which the abrasive particle impacts, wherein the vertical component of the relative velocity is expressed as a bounce using the coefficient of rebound by a determination of a coefficient, and wherein the horizontal component is expressed as a loss of speed caused by a resistance for friction by a determination of a coefficient determination; andwherein said calculating means calculates a velocity and a direction of the abrasive particle after a collision with the corresponding blade by summing them and calculating the transfer vector of the blade at said collision point. The system of claim 16, wherein said calculating means calculates a distance the abrasive particle moves and the distance the corresponding blade moves in a sampling time, and executes the calculation relating to the collision for an abrasive particle that complies with sequential crash condition. The system of claim 14, wherein said projection machine is a centrifugal projection machine. The system of claim 14, wherein a profile of the distribution of the projection of the abrasive particles is adjusted to a predetermined profile by selecting values of the dimensions of each blade, the range of positions of projection on the opening from which the abrasive particles are projected, and a rate of rotation of the blade such that a variability of the frequency to which each discharged abrasive particle rebounds for the blade is a predetermined value or less. The system of claim 19, wherein the predetermined value is 0.3. 24. The system of claim 20, wherein the range of positions of the projection on the opening from which the abrasive particles are projected is degree. to The system of claim 10, wherein the values of the dimensions include a ratio of the inner diameter to the outer diameter of the blade, wherein the range of this ratio is any one of 75 to 5 to 9, and 6 to 4.1. 26. A method aided by a programmed computer for controlling a projection of abrasive particles to be projected to an article by a projection machine that includes a plurality of blades that rotate at a high rate, and for estimating information on the state of said projection of said abrasive particles, the method comprising the steps of:a) entering information on the blade, a condition of projection of the abrasive particles, and a coefficient of bounce and a coefficient of resistance to friction of the abrasive particle, in said computer;b) determining by said computer whether said entering in said entering step is completed, and calculating by said computer positions of respective abrasive particles per a given sampling time based on the sampling time and a transfer vector of the abrasive particle, if said entering is completed;c) turning the blades by said computer to update the angles of the blades;d) determining by said computer whether each abrasive particle impacts the corresponding blade, calculating by said computer a velocity and a direction of the impacted abrasive particle to update the transfer vector of the abrasive particle, if said computer determines that the abrasive particle impacts the corresponding blade, while maintaining the transfer vector, if said computer determines no abrasive particle impacts the corresponding blade;e) determining by said computer whether a position of said blades is within a range from which the abrasive particles are discharged, discharging the abrasive particles, if the position of said blades is within the range of discharge of the abrasive particles, while preventing the abrasive particles from being discharged, if the position of said blades is outside the range of discharge of the abrasive particles,f) determining by said computer whether the positions of the blades has been turned to the predetermined positions, totaling the transfer vectors of respective abrasive particles, if said determination indicates that the positions of the blades have been turned to the predetermined positions, while repeating steps b) to f), if said determination indicates that the positions of the blades has not been turned to the predetermined position; andg) displaying by said computer the distribution of the projection and the velocity of the projection and of the result of the calculations for the total. FIELD OF THE INVENTION [0001] This invention generally relates to a method and a system for estimating information on projection conditions for projecting abrasive particles by a projection machine. More particularly, this invention relates to a method and a system that enables information to be estimated on the conditions of the projection without a trial for manufacturing parts of the projection machine. BACKGROUND OF THE INVENTION [0002] In a surface-treatment device such as a shot-peening machine, it is preferable to set optimal projection conditions of abrasive particles to be projected by a projection device based on the shape of an article to be processed and the area of the surface to be processed, etc. The projection conditions of the abrasive particles in this context include the area to be shot-peened or the distribution of the shot-peening, as well as the amount and the velocity of the abrasive particles to be projected. To this end, Japanese Patent Early-Publication No. 1996-323629 (prior art 1), by the assignor of the present application, discloses a method and an apparatus for regulating the distribution of the shot peened based on the article to be processed when the quantity and the velocity of the abrasive particles to be projected are changed based on that article to be processed. As another prior-art publication, a shot-peening machine is disclosed in Japanese Patent Early-Publication No. 1989-264773 (prior art 2). It limits the distribution of the shot peened by projecting the abrasive particles of the shot peened in a distribution that is wider than the surface to be processed and by providing a so called vane as a liner between the projection device and the article to be processed, to limit the range of the projection of the abrasive particles. Further, the apparatus disclosed in Japanese Patent Early-Publication No. 2003-340721(prior art 3) is configured to concentrate the distribution of the abrasive particles within a predetermined range by shortening the length of a blade so as to maintain the constant direction of the projection without using a vane. However, in the disclosures of prior art 1, deciding the distribution and the velocity of the projection necessitates a centrifugal projecting device that actually projects the abrasive particles to the article to be processed to confirm the distribution and the velocity of the abrasive particles based on the result of the actual projecting. Therefore, it necessitates time to obtain an accurate relationship between the optimum processing and the distribution of the projection. Desirably, the centrifugal projecting device will provide for distribution of the projection that is best suited for articles to be processed and for the processing method in the centrifugal projection device, because saving energy and an efficient projection are needed. From this viewpoint, it is inconvenient to require time to understand an accurate relationship between the optimum processing and the distribution of the projection. Moreover, because the vane is worn out by the collisions with the abrasive particles, thus a vane that restricts the range of the projection may change this range in the device of prior art 2. So it might cause the quality of the articles for processing to decrease. Therefore, it is frequently necessary to exchange a vane. Moreover, because the abrasive particle is reflected from the vane, and the particle rebounds in the inner wall of the projection chamber, the protection from wear from the wall of the projection chamber is also necessary. In contrast, in the device of prior art 3 the difference is caused at the position of the blade where the supply of the abrasive particles is not constant, each part of the abrasive particles collides, and the distribution of the projection diffuses though the length of the blade and is extremely shortened, to concentrate the distribution of the projection to a predetermined range. Therefore, it is easy to receive the effect when the supply of the abrasive particles is unstable. Moreover, the slower the velocity is of the rotation of the impeller, possibly the efficiency of the treatment decreases, because abrasive particles that are dispersed outside of the impeller without colliding with the blade are generated. In addition, because it greatly affects the accuracy of the distribution of the projection when the shape of the blade changes by the wear, and because the blade is worn out by the collisions with the abrasive particles, it is necessary to frequently exchange the blades. Accordingly, one object of the present invention is to provide a method and a system for estimating information on the state of the projection of abrasive particles projected by a projection machine to reduce operating costs and the time to know conditions involving the state of the projection of the abrasive particles to define information on a specified state, e.g., at least the distribution of the projection or the velocity of the projection. SUMMARY OF THE INVENTION [0009] One aspect of the present invention provides a method of estimating information on the state of the projection of abrasive particles projected by a projection machine that includes a plurality of blades that rotate at a high rate. The method comprises the steps of analyzing the behavior of the abrasive particles projected by the projection machine on the blades, to create an analytical model, and estimate the information on the state of the projection of the abrasive particles projected by the projection machine using the analytical model. The action of each abrasive particle includes contact with at least one other abrasive particle and one of the rotating blades. Another aspect of the present invention provides a method of estimating information on the state of the projection of abrasive particles projected by a projection machine that includes a plurality of blades that rotate at a high rate, and an opening through which the abrasive particles are projected by the blades to an article to be processed. The method comprises the steps of determining the initial conditions. They include information on the size, and the rate of the rotation of, the blades, information on the projection of the abrasive particles, information on the abrasive particles in relation to the blades; storing the initial conditions; calculating the positions of each abrasive particle, and the velocities and directions of the abrasive particles after collisions with the blades, based on the initial conditions; and estimating the information on the state of the projection based on the result of the calculation. The result of the calculation may be displayed. Yet another aspect of the present invention provides a system with a programmed computer to estimate information on the state of the projection of the abrasive particles projected by a projection machine that includes a plurality of blades that rotate at a high rate. The computer comprises a) input means for providing to the computer initial conditions that include information on the size and rotation of the blades, information on the projection of the abrasive particles, information on the abrasive particles in relation to the blades; b) calculating means for calculating the position of each abrasive particle, and the velocities and directions of the abrasive particles after collisions with the blades, based on the initial conditions; c) means for estimating the information on the state of the projection based on the result of the calculation; and d) means for displaying the assumed information. In one embodiment of the present invention, the calculating means calculates the magnitude of a force of contact of each abrasive particle relative to at least one of the blades and the other abrasive particles; and calculates the acceleration of the abrasive particle based on the forces that act on it. They include the force of the contact and the gravity, and obtaining the velocity and the position of the abrasive particle after a short time, based on the calculated acceleration. The computer may further include a storage medium in which a program for calculation to be executed by the calculation means is stored. The calculating step and the calculating means in the method of the second aspect and the system of the third aspect of the present invention express the velocity of each abrasive particle after a collision as a relative velocity that includes a vertical component along a Y-axis and a horizontal component along an X-axis using the transfer vector of the abrasive particle and the transfer vector of the point of collision on a surface of the corresponding blade on which the abrasive particle is impacted, wherein the vertical component of the relative velocity is expressed by a bounce that uses the coefficient of the rebound by a determination of a coefficient, and wherein the horizontal component is expressed as a loss of velocity due to resistance from friction by a determination of a coefficient; and calculates the velocity and the direction of the abrasive particle after a collision with the corresponding blade by summing them plus calculating the transfer vector of the blade at the point of the collision. In this case, the step for calculating, or the calculating means, may calculate the distance the abrasive particle moves and the distance the corresponding blade moves in a sampling time, and executes the calculation relating to the collision for an abrasive particle that complies with sequential conditions of collisions. The method of the system of another aspect of the present invention may adjust a profile of the distribution of the projection of the abrasive particles to a predetermined profile by selecting the values of each blade, the range of the positions of the projections on the opening from which the abrasive particles are projected, and the rate of rotation of the blade such that the variability of the frequency to which each discharged abrasive particle rebounds from the blade is a predetermined value or less. Preferably, the predetermined value is 0.3. The values of the dimensions include a ratio of the inner diameter and the outer diameter of the blade, the range of this ratio preferably being any one of 1.75 to 2.0, 2.5 to 2.9, and 3.6 to 4.1. In the above aspects of the present invention, the information on the state of the projection of the abrasive particles is at least either a distribution of the projection of the abrasive particles or the velocity of the projection of the abrasive particles. The projection machine may, for instance, be a centrifugal projecting device. The present invention further provides a method aided by a programmed computer for controlling the projection of abrasive particles to be projected to an article by a projection machine that includes a plurality of blades that rotate at a high rate, and for estimating information on the state of the projection of the abrasive particles. The method comprises the steps of a) entering information on the blade, the condition of the projection of the abrasive particles, and the coefficient of bounce and the coefficient for the resistance to friction of the abrasive particle to the computer; b) determining by the computer whether entering the entering step has been completed, and calculating by the computer positions of respective abrasive particles per a given sampling time based on the sampling time and the transfer vector of the abrasive particle, if the entering is completed; c) turning the blades by the computer to update the angles of the blades; d) determining by the computer whether each abrasive particle impacts the corresponding blade, calculating by the computer the velocity and the direction of the impacted abrasive particle to update the transfer vector of the abrasive particle, if the computer determines the abrasive particle impacts the corresponding blade, while maintaining the transfer vector, if the computer determines no abrasive particle impacts the corresponding blade; e) determining by the computer whether the position of the blades is within a range from which the abrasive particles are discharged, discharging the abrasive particles, if the position of the blades is within the range from which the abrasive particles are discharged, while preventing the abrasive particles from being discharged, if the positions of the blades are outside the range from which the abrasive particles are discharged, f) determining by the computer whether the positions of the blades have been turned to the predetermined positions, totaling the transfer vectors of the respective abrasive particles, if the determination indicates that the positions of the blades have been turned to the predetermined positions, while repeating steps b) to f), if the determination indicates that the positions of the blades have not turned to the predetermined position; and g) displaying by the computer the distribution of the projection and the velocity of the projection and of the result of the calculations for the total. The above and other scopes and advantages of the present invention will become apparent by reviewing the following detailed description with reference to the accompanying drawings. BRIEF DESCRIPTION OF THE DRAWINGS [0023] FIG. 1 shows a cross-sectional view of an essential part of a centrifugal projecting device to illustrate one example of a projection machine to which the present invention can be applied. FIG. 2 schematically illustrates the action of an abrasive particle on a blade. FIG. 3 is a vector diagram that shows velocities of the abrasive particle before and after the collisions with the blades. FIG. 4 schematically illustrates factors that contribute to the initial condition in an analytical model. FIG. 5 is a vector diagram that shows the velocity of an abrasive particle after it collides. FIG. 6 is a flowchart of one embodiment of the method of the present invention. FIG. 7 shows an example of displaying the result of the calculation in the embodiment of FIG. 6. FIG. 8 is a graph of the calculation of the projection E1 of a distribution in conjunction with an actual distribution of the projection E. FIG. 9 is a graph of the relationship between the outer diameter and the average velocity of the projection when the velocity of the circumference is constant. FIG. 10 is a schematic block diagram of one example of a computer used for the system to execute the method of the present invention. FIG. 11 is a flowchart of another embodiment of the method of the present invention. FIG. 12 illustrates one example of finding a force of the contact between the abrasive particles in the model for the analysis of movement. FIG. 13 shows an example of displaying the result of the calculation in the embodiment of FIG. 12. FIG. 14 is a graph of the relationship between variability of the frequency of the rebounding of the abrasive particle and a variability of a direction of the projection of the abrasive particle. FIG. 15 is a graph of the relationship between a mean frequency of the rebounding of the abrasive particle and a variability of a direction of the projection of the abrasive particle. FIG. 16 is a graph of the distribution of the projections shown by different ranges of the positions from which the abrasive particles are discharged. FIG. 17 is a graph of the variability of a direction of the projection of an abrasive particle projection while the ranges of the positions from which the abrasive particles are discharged are FIG. 18 is a graph of the relationship between the proportion of the outer diameter relative to the inner diameter, a variability of a frequency of the rebounding of the abrasive particle, and a variability of a direction of the projection of the abrasive particle. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS [0041] One embodiment of the present invention that is applicable to a centrifugal projecting device that projects centrifugally will now be explained. The machine that projects centrifugally is a projection machine that includes an impeller having a plurality of blades and a cylindrical control cage arranged in the interior of the impeller. Abrasive particles are impelled through an opening of the control cage and are projected to an article to be processed by rotating the impeller at a high rate. However, this invention is not limited to such a machine that projects centrifugally. First, an initial experiment is carried out to investigate the action of one abrasive particle freely released from the control cage of the machine that projects centrifugally at one rotating blade. In the initial experiment, the action of the abrasive particle on the blade was evidenced using an impact paper. As shown in FIG. 1, the machine that projects centrifugally that is used for the initial experiment includes a housing (an impeller casing) 2 mounted on an upper wall 1 on the ceiling of the protecting cavity of the main unit of the project machine, a driving mechanism 3 on the upper wall 1 on the outside of a first sidewall 2a of the housing 2, and an impeller 4 mounted on a shaft 3a for driving the driving mechanism 3. The centrifugal projecting device further includes a distributor 5 coaxially mounted on the driving shaft 3a in the inner peripheral space S in the impeller 4 to stir the abrasive particles, a cylindrical control cage 6 mounted on a second sidewall 2b which is opposed to the first sidewall 2a of the housing 2, to restrict the direction in which the abrasive particles are projected, and a feed cylinder 7, mounted on the second sidewall 2b of the housing 2. The impeller 4 is mounted on the driving shaft 3a with a bolt 11 through a hub 10. The impeller 4 comprises a first shroud 12a at the side of the driving shaft 3a of the driving mechanism 3a, a second shroud 12b in a position that is spaced apart from the first shroud 12a and toward the feed cylinder 7, and further comprises a plurality of blades 13 that are fixedly sandwiched between the first shroud 12a and the second shroud 12b such that they are arranged radially. The distributor 5 is fixed to the first shroud 12a with a bolt 14. The distributor 5 is provided with openings (cutouts) arranged in its circumference at substantially equal intervals. The number of openings 15 may be equal to, or be more than, or less than, that of the blades 13. On the control cage 6, a cylindrical portion of its distal end is provided with an equiangular window 17 to restrict the direction in which the abrasive particles are projected. The control cage 6 is mounted on the housing 2 at the side of the second shroud 2b such that it extends between the distributor 5 and the blades 13. FIG. 2 illustrates the action of an abrasive particle P on the blade as a result of the initial experiment. The result of the behavior of the abrasive particle P on the blade can be assumed to be a rebound phenomenon of the blade, rather than a sliding motion on the blade, because pressures are concentrated at two or three positions on the blade. Namely, the abrasive particle P supplied by the feed cylinder of the centrifugal projecting device is stirred by the rotating distributor 5 and is then discharged from the opening 17 of the control cage 6 to the outer periphery of the base of the rotating blade 13. The abrasive particle P is then accelerated and made to rebound on the blade 13 to project the abrasive particle P to the distal end (the outer periphery) of the blade 13. This means that an analytical model of the distribution of a projection can be expressed using an analytical model of the rebound phenomenon of the abrasive particle P. Consequently, the vector components of the velocity of the abrasive particle after it has collided are divided into relative velocities (V0x, V0y, V1x, V1y) on the X-axis and the Y-axis using a V0 of the abrasive particle P, and a transfer vector V1 of the abrasive particle P from the point of the collision on the surface of the blade. The vertical component V1y may be expressed as a bounce using the coefficient of rebounding. The horizontal component V1x may be expressed as a loss of velocity by a resistance caused by friction. Therefore, the following equations (1-1) and (1-2) can be obtained by introducing their respective coefficients. y (1-1) x (1-2) where e is the coefficient of rebounding, and μ is the coefficient of the resistance to friction. Initial conditions for the analytical model of the distribution of the projection may include, e.g., information on the dimensions and the rotation of the blade (hereafter, "blade information") that corresponds to various conditions of a real machine, and information on the projection of the abrasive particle from the control cage. For instance, assignable factors, e.g., an outer diameter, an inner diameter, a length, the width of a blade, the number of blades, and a velocity of rotation (velocity of the rotation of an impeller) can be considered in the initial conditions. As shown in FIG. 4, a range (angle α) of the discharge of the abrasive particles P from the opening 17 of the control cage 6, a direction of the projection of the abrasive particles, an initial rate, and the variation of the range of the abrasive particles P, can also be considered in the initial conditions. The range of the discharge corresponds to a range where the abrasive particles P are discharged from the control cage 6. It can be represented as an angle, and determined based on the shape of the opening 17 and the shape of the distributor 5 (not shown in FIG. 4). Further, the range of the variation corresponds to the direction from where the abrasive particles P are projected from the control cage 6 and the range of distribution of the initial rate. Because the range of the distribution varies based on the shape of the opening 17 of the control cage 6 and the shape of the distributor 5, it may be given as a rectangular distribution, in which the degree of probability is constant within the range of the variations, or may be given as the normal distribution by providing a standard deviation as the range of variations. To determine the coefficient of bounce and the coefficient of the resistance to friction for the analytical model, an actual coefficient of bounce is calculated from the result of a measurement of the amount of the bounce of the abrasive particles P on the blade 13 by using actual abrasive particles P and the blade 13. Further, an adequate combination was selected and assigned by collating the result of the measurements of the distribution of the projection and the projection rate by an actual projection examination and the result of a calculation of a distribution of the projection. In the analytical model, a calculation is carried out for any of the blades 13 that accelerates the abrasive particles under the above initial conditions and the assumption that each blade is symmetrical with respect to a point. Information that comprises the direction of the projection, a position, and a velocity is given to the respective abrasive particles P to calculate a distance for the abrasive particles P and the blade 13 over the time of a sampling, which is preferably 100μ or less, as, say, to consider the accuracy of the calculation. The calculation of the collision of the abrasive particles P that complies with the crash conditions is then carried out sequentially. The positions of the abrasive particles P are thus denoted by polar coordinates (ra, θa). It is assumed that where the angle is θb on the surface, which angle corresponds to a radius diameter ra of the blade, and it is greater than the angle θa for each abrasive particle P, there is a collision. Then the expressions (1-1) and (1-2) in the vertical component and the horizontal component, respectively, which are based on the surface of the blade as a reference, are obtained. As shown in FIG. 5, the resulting transfer vector (actual transfer vector of the abrasive particle) for the abrasive particle on the point of collision on the blade 13 is on the sum of a transfer vector at the point of collision for the blade 13 plus a relative transfer vector for the abrasive particle. The velocity and the direction of the abrasive particle P by the collision with the blade 13 are then recalculated using the above resulting vector (the calculation of the collision is repeated). While not mandatory for the present invention, the results of the analysis after this calculation may be displayed on a touch screen on a system that is equipped with a computer commonly having a calculation function and a display function, or a display screen such as a display on a control panel. One example of the method of estimating information on the state of a projection of the present invention is shown in the flowchart of FIG. 6. One example of the system that executes the method is schematically illustrated in FIG. 10. A system 20, shown in FIG. 10, is a general-purpose computer in which an input device (input means) 22, which may include a keyboard and mouse, an internal or external data-storing medium 24 for storing data, an internal or external program-storing medium 26 for storing programs, a CPU (estimating means), a calculation unit (calculating means) 30 that includes, e.g., an arithmetic processor associated with the CPU 28, and a display (display means) 32, are all connected by a bus line 34. The display 32 may be a touch screen to be combined with the input device. The programs to execute the method of the present invention, such as a calculating program, etc., to be executed by the calculation unit 30, are stored in the program-stored medium 26. By referring to the flowchart of FIG. 6, one embodiment to execute the method of estimating information on the state of the present invention with a general-purpose computer 20 will now be explained. (1) First, data on the outer diameter, the inner diameter, the number, and the velocity of the rotation of the blades 13 are entered into the data storage medium 24 of the computer 20 as the blade information used in the analytical model of the distribution of the projection (step S1). As input values in step S1, say, the outer diameter is 360 mm, the inner diameter is 135 mm, the number of blades 13 is 8, and the rate of the rotation is 3,000 rpm. (2) The range of the discharge of the abrasive particles P (angle), the direction where the abrasive particles are discharged, the initial rate, and their variations, are then entered in the data storage medium 24 as the information on the discharge from the control cage 6 (step S2). As input values in step S2, for instance, the range of the discharge is 35°, the direction is 90° from the position of the projection to the rotation of the direction, its variation is ±15°, the initial velocity is 10 m/s, and its variation is ±5 m/s. (3) The coefficient of bounce and the coefficient of the resistance to friction resistance are then temporarily entered in the data storage medium 24 (step S3). As input values in step S3, for instance, the coefficient of bounce is 0.2, and the friction resistance coefficient is 0.6. The inputs in these steps S1, S2, and S3 into the data storage medium 24 of the computer 20 are carried out through the input device 22. (4) The CPU 28 determines whether the input has been completed (step S4). (5) If the input is completed in step S4, the calculation unit 30 calculates the position of each abrasive particle per a sampling time 80 μs based on the sampling time and the transfer vector (step S5). Specifically, assuming the position of any abrasive particle at time t is (X, Y), the following distance (Δx, Δy) of the abrasive particle after the sampling time Δt can be obtained as Δx=Vx×Δt and Δy=Vy×Δt based on the transfer vector (Vx, Vy) of the abrasive particle. Further, the position of the abrasive particle at time t+Δt can be obtained as (X+Δx, Y+Δy). (6) The CPU28 then turns the blade 13 to update its angle (step S6). (7) The CPU28 then determines whether each abrasive particle P has collided with the blade 13 (step S7). (8) If the determination in step S7 has determined that there was a collision, the calculation unit 30 calculates the velocity and the direction of the collided abrasive particle to update the transfer vector (step S8). Specifically, the position (X,Y) of the abrasive particle is converted to the polar representation (ra, θa). If the angle θb of the surface of the blade 13 that corresponds to the radius ra is greater than the angleθa of the abrasive particle, a collision is considered to have occurred. The above equations (i) and (ii), for the vertical component and the horizontal component, both refer to the surface of the blade as the reference surface. They are then calculated. By summing them and the transfer vector for the blade 13 at the point of collision on the blade, the actual transfer vector for the abrasive particle is then obtained. The velocity and the direction of the abrasive particle P by the collision with the blade 13 are then calculated. If the determination in step S7 determines that no collision occurred, the transfer vector of the abrasive particle P is not updated. (9) The CPU28 then determines whether the position of the blade 13 is within the range of the discharge of the abrasive particle P (step S9). (10) If the position of the blade 13 is within the range of the discharge of the abrasive particle P in step S9, the CPU28 causes the abrasive particles P to be discharged (step S10). The discharge of the abrasive particles P means that the abrasive particles are stirred by the distributor 5 and are discharged from the opening 17 of the control cage 6, and to be discharged into the blade 13 at any time during a process for an article to be processed. The reason it is necessary to determine whether the position of the blade 13 is within the range of the discharge of the abrasive particle in step S9 is the following: Because, as discussed above, the calculation is carried out for any of the blades 13 that comprise the impeller, it should prevent the abrasive particle P from being discharged when the discharged abrasive particle P is unsuitable for the analysis because of the position of the blade 13 (say, where the rotation of the blade 13 advances such that it passes through the opening 17 of the control cage 6). (11) If the position of the blade 13 is not within the range of the discharge of the abrasive particle P in step S9, the CPU 28 displays the result of the calculation of the current state of the projection on the display 32 (step S11). Typically, 100 to 200 abrasive particles P may be displayed in this step, although it depends on the arithmetical capacity of the computer to be used. FIG. 7 shows an example of the display of the result of this calculation. In this example, the display of the initial condition is omitted. (12) The CPU 28 determines whether the position of the blade 13 has been rotated to a predetermined position. If not, steps S5 to S12 are repeated to sequentially calculate the positions of the respective abrasive particles, and the angle of the blade and the transfer vector for the abrasive particle, after the following sampling time (step S12). (13) If the determination in step S12 determines that the position of the blade 13 has been rotated to the predetermined position, the transfer vectors of respective abrasive particles P are totaled (step S13). (14) The distribution of the projection and the velocity of the projection of the result of the calculations for the total are displayed (step S14). It is recognized that the computed distribution of the projection E1 is close to the actual distribution of the projection E, as shown in FIG. 8. The distribution of the projection and the velocity of the projection of the abrasive particles P from the blade 13 are the following. The distribution of the projection (the ratio of the number of projected abrasive particles per 1°) is one wherein the directions of the transfer vectors of the respective abrasive particles P are described by angles, and are shown by a histogram. The velocity of the projection is the calculated mean values of the lengths of the transfer vectors. The variation in the velocity of the projection is the calculated standard variability. Sequentially, a test is carried out to establish the variation in the velocity of the projection caused by the outer diameter of the blade 13. As shown in FIG. 9, the actual measured values are very close to the calculated values (designated by a broken line). With this embodiment, the information on the status of the projection, which includes the distribution of the projection, the velocity of the projection, and the variation in the velocity of the projection of the abrasive particles P, can be assumed by using the above model for an analysis of movements. Therefore, the necessary and various design conditions (for instance, the length, the shape, the number, and the rate of the rotation of the blade, and the shape of the opening 17 of the control cage 6) to know information on the predetermined state of the projection, can all be determined by adding any required modification to the initial conditions without actually making them for trial purposes. In the prior art, pre-producing the blade and the control cage both meant that the state of the projection had to be repeated by varying their design conditions, to decrease the necessary design conditions to compile the information on the predetermined state of the projection. To the contrary, the cost of the work and the time required to decrease the necessary design conditions can be reduced in the method and the system of the present invention, since neither a blade nor a control cage requires its prototype being manufactured to compile the information of the state of the predetermined projection. By referring to the flowchart of FIG. 11, another embodiment to execute the method for estimating the information on the conditions of the projection of the present invention with the general-purpose computer 20 will be explained. (1) First, data on the outer diameter, the inner diameter, the number, and the velocity of rotation of the blades 13 are entered in the data storage medium 24 of the computer 20 as the information on the blade for the analytical model of the distribution of the projection. Data on the particle size and the density of the abrasive particle, the amount of the abrasive particles to be discharged, the range of the discharge of the abrasive particles P (angle), the direction where the abrasive particles are discharged, the initial rate, and their variations, are then entered in the data storage medium 24 as the information on the discharge from the control cage 6. Further, a coefficient of bounce and a coefficient of resistance to friction are temporarily entered in the data storage medium 24 (step S31). The inputs in this step S31 into the data storage medium 24 are carried out through the input device 22. As input values for the blade 13 to be entered, for instance, the outer diameter may be 360 mm, the inner diameter may be 135 mm, the number of blades 13 may be 8, and the rate of the rotation may be 3,000 rpm. As input values for the abrasive particle to be entered, the particle size in the diameter may be 1 mm, the density may be 7850 Kg/m , the amount of the abrasive particles to be discharged may be 200 kg/min, the range of the discharge of the abrasive particles may be 35°, the direction may be 90° from the position of the projection to the rotation of the direction, its variation may be ±15°, the initial velocity may be 10 m/s, and its variation may be ±5 m/s. The coefficient of bounce to be entered may, e.g., be 0.2, and the coefficient of resistance to friction to be entered may, e.g., be 0.6. These input values are just examples, and thus are not to limit the present invention. (2) The CPU 28 then turns the blade 13 to the following position during a minimal time (for instance, a sampling time Δt=80 μs after time t=0) (the steps S32, S33, and S34). (3) The CPU 28 then determines whether each abrasive particle contacts other movable bodies, based on the calculation of the calculation unit 30. If the CPU 28 determines there is a contact, it executes an analysis of the force of the contact acting on each abrasive particle for all the abrasive particles (step S35). The term "other movable body" refers to the blade 13 and other abrasive particles. If the abrasive particle and the other abrasive particle as the other movable body are in contact with each other with each other, the force that acts between these abrasive particles are calculated based on the distance between any abrasive particle i and an abrasive particle j that comes in contact with the abrasive particle i, to determine whether the abrasive particles come in contact. If the abrasive particle i and the abrasive particle j have come in contact, then, based on this result of the determination, a vector that is oriented from the center of the abrasive particle i to the center of the abrasive particle j is defined as the "normal vector," and a vector that is oriented to the direction that is turned 90° clockwise of the normal vector is defined as a "tangent vector." As shown in FIG. 12, assume virtual and parallel arrangements where each arrangement includes a spring and a dashpot in the normal direction, and where the direction of tangent of the abrasive particles i, j is between the two abrasive particles (discrete elements) i, j that come in contact with each other, to calculate the force of the contact that is exerted from the abrasive particle j to the abrasive particle i. The force of the contact is calculated by the calculation unit 30 as a resultant force resulting from adding the component of the normal direction of the force of the contact to the component of the direction of tangent of the force of the contact. In step S35, first, the component of the normal direction of the force of the contact is calculated for all abrasive particles. Using an increment of an elasticity resistance, and the spring constant in the elasticity spring proportional to the amount of contact, the relative displacement of the abrasive particle i and the abrasive particle j over a short time can be expressed as where Δe : increment of an elasticity resistance, : the spring constant in the elasticity spring proportional to the amount of contact, and : the relative displacement of the abrasive particle i and the abrasive particle j over a short time.The suffix n denotes a component of the normal direction. Using a coefficient of viscosity of the viscous dashpot proportional to the velocity of the relative displacement, a viscosity resistance coefficient is given by /Δt (2) where Δd : an increment of an elasticity resistance, and : the spring constant in the elasticity spring is proportional to the force of contact. The elasticity resistance and the viscosity resistance that are associated with the component of the normal direction of the force that acts on the abrasive particle i from the abrasive particle j at a given time t can be expressed by equations (3) and (4). refers to e at the time t.Therefore, the component of the normal direction of the force of the contact can be expressed by the following equation (5). is the component of the normal direction of the force of the contact at the time t. Accordingly, the force of the contact that acts on the abrasive particle i at the time t will be calculated by considering the force of the contact from all abrasive particles. The component of the direction of tangent of the force of contact of all the abrasive particles is calculated at the end of step S35. It is considered that in the component of the direction of tangent, the elasticity resistance is proportional to a relative displacement and to a velocity of the relative displacement of viscous resistance that is similar to the component of the normal direction, and thus can be calculated by the following equation (6). where f[t] is the component of the direction of direction of tangent of the force of the contact, e is the component of the direction of tangent of the elasticity resistance, and d is the component of the direction of tangent of the viscosity resistance. Because slipping may exist between the abrasive particle i and the abrasive particle j when they come into contact, Coulomb's law concerning slipping is used. Normally, where the component of the direction of tangent is greater than the component of the normal direction, the following occurs: ) (7) =0 (8) That is, it is the case where the component of the normal is greater than the component of the component of the direction of the tangent. In equations (7) to (10), μ0 is the coefficient of friction, f is the power of adhesion, and sign (Z) refers to positive and negative signs of the variable Z.Because the abrasive particles to be used in this embodiment are dry, the power of adhesion between the abrasive particles may be disregarded. (4) In step S36, the analysis of the motion equation is executed to obtain the acceleration expressed by the following equation (11) based on forces that act on the abrasive particles i and j, which include a force of the contact and gravity. Further, in this step a similar analysis is executed for all the abrasive particles, = f c m c + g ( 11 ) ##EQU00001## where r is the position vector, mc is the mass of the abrasive particle (it may be obtained by the size and the density in the initial conditions), fc is the force of the contact, and g is the acceleration caused by gravity. , a gyration is caused by the angle of the collision when there is a state of contact. The angular acceleration of it is calculated by the following equation. ω . = T c I ( 12 ) ##EQU00002## ω is an angular acceleration, Tc is a torque caused by the contact, and i is an inertia moment. The following velocity and the position are obtained after a short time by the following equations (13), (14), and (15) based on the acceleration that has been obtained by equation (11). V and r are the transfer vectors and the position vectors at present. FIG. 13 shows an example of the display of the result of this calculation. = v 0 + r Δ t ( 13 ) r = r 0 + v 0 Δ t + 1 2 r Δ t 2 ( 14 ) ω = ω 0 + ω . Δ t ( 15 ) ##EQU00003## where v is a transfer vector , and Δt is a short time. (5) Then a determination whether the position of the blade 13 has rotated from a given position, e.g., the starting position in the embodiment, to 270°, is executed (step S37). If not, steps S34 to S37 are repeated to calculate the angle of the blade, the force of the contact that acts on the abrasive particles, and the motion equation obtained after a short time. The calculation is ended when a determination that the blade turns to a predetermined position is obtained. (6) The distribution of the projection with the total and the result of the calculation of the velocity of the projection are displayed. It was found that the calculation on the distribution of the projection E1 was close to the real distribution of the projection E, as the results are similar to those in FIG. 8 in the first embodiment, The definitions of the distribution of the projection and the velocity of the projection from the blade are the following. The distribution of the projection is described by the histogram of the direction of the transfer vector of each abrasive particle that is described by the angle. The velocity of the projection is obtained by calculating the mean value of the size of the transfer vector. The variations of the velocity of the projection are obtained by calculating the standard deviations. Sequentially, a test is carried out to see the variation in the velocity of the projection caused by the outer diameter of the blade. In the result of a test similar to that shown in FIG. 9, the actual measurement values were very close to the calculated values (designated by a broken line). This embodiment describes the case where the other movable objects that should come in contact with each abrasive particle are other abrasive particles. With the model of analysis of the movement of the present invention, however, the distribution of the projection and the velocity of the projection can also be similarly calculated where each abrasive particle should come in contact with the blade. In this case, the analysis of the movement of the abrasive particle can be executed by applying similar steps by replacing the other movable body that should come in contact with each abrasive particle in the above method with the blade. Further, the distribution of the projection and the velocity of the projection can be calculated by using the analytical model of the movement in consideration of both the contact of each abrasive particle with other abrasive particles and contact with the blade. As another embodiment of the present invention, to be described is a method for adjusting the distribution of the projection of the abrasive particle to a predetermined profile. To numerically express the level of the diffusion of the distribution of the projection, the direction where each abrasive particle disperses is indicated by an angle. The standard deviation in the angles of the abrasive particles is assumed to be a variability of the direction of the abrasive particles. In this embodiment, the profile of the distribution of the projection of the abrasive particles can be adjusted such that the variability of the frequency to which each discharged abrasive particle rebounds on blade 13 may come below a predetermined value. To this end, the size of the blade 13, the range of the positions from which the abrasive particles are distributed at the opening to discharge the abrasive particles, and the rate of the rotation of the blade 13, are configured or combined. This adjustment in the profile of the distribution of the projection of the abrasive particles can also be carried out by using the analytical model of the collision of the abrasive particle and the rotating blade 13 discussed above. FIG. 14 shows the relationship between the variability of the frequencies of the bounces of each abrasive particle and the variability of the direction of the abrasive particle projection. In this relationship, the variability of the frequencies of the bounces of each abrasive particle refers to the standard deviation of the frequencies of the bounces of each abrasive particle. As will be appreciated from FIG. 14, the variability of the direction of the abrasive particle projection increases as the variability of the frequencies of the rebounding is increased. That is, the angle of the projection in the direction of the projection of the particle diffuses. Therefore, the angle of the projection can be concentrated by adjusting the variability of the frequency of the bounces to a predetermined value, for instance, 0.3 or less. FIG. 15 shows a relationship between the mean value of the frequency of the bounces and the variability of the direction of the abrasive particle projection. If the mean value of the frequency of the bounces is less than double, the variability of the abrasive particle discharge position from the control cage 6 causes the projection angle to be diffused readily, and then the abrasive particles cannot be accelerated with stability. Consequently, a variability is caused in the velocity of the projection. Therefore, it is preferable that the mean value of the frequency of the bounces be double or more. To change the variability of the frequency of the bounces and the mean value of the frequency of the bounces, the outer diameter, the inner diameter, and the rotational velocity of the blade 13 were changed in the calculations. The frequency of splashing greatly affects the factor by which the distribution of the projection and the velocity are to be decided. Because the individual abrasive particle splashes several times on the blade 13, the direction of the projection is turned in the direction of the rotation of the blade 13 in many splashes. Thus an acceleration by the collision may be obtained. In contrast, a small number of splashes, the direction of projection is turned to the opposite direction to the direction of rotation of the blade 13, and thus the resulting acceleration is insufficient. Accordingly, combining different frequencies of the number of splashes of the abrasives causes the differences in directions of the abrasive particle projection for the respective abrasive particles, and thus the distribution of the projection may spread. Therefore, the distribution of the projection of the abrasive particles can be concentrated by controlling the variability of the frequency that an individual abrasive particle splashes on the blade 13 to be a predetermined value or less. On the other hand, difference number of splashing frequencies to exceed the predetermined value causes the distribution of the projection of the abrasive particle to spread. FIG. 16 shows the result of the analysis of the distribution of the projection for a projection experiment under a range (a range of the discharge) where the abrasive particle discharge position from the control cage 6 is to be 35° and 10°. As conditions used for this experiment, the blade 13 has an outer diameter of 360 mm and an inner diameter of 135 mm, and a rotational velocity was set to 3000 rpm. As a result, the distribution of the projection was concentrated by the range of the abrasive particle discharge position being narrow. FIG. 17 shows the variability of the direction of the abrasive particle projection when the range at the abrasive particle discharge position is changed, under the conditions similar to those in the experiment of FIG. 16, to see the effect of that range. FIG. 17 indicates that the variability of the direction of the projection of the abrasive particle becomes small, and narrows the range at the abrasive particle discharge position. However, if the range at the abrasive particle discharge position is narrowed too much, the resistance of the opening 17 of the control cage 6 is increased. This causes problems of decreasing the possible maximum projection of the centrifugal projection machine and keeping the abrasive particle in the control cage 6 during the operation. Preferably, the range at the abrasive particle discharge position is to be 5° to 20°, to avoid such problems. It was experimentally found that this range is preferable, regardless of the conditions, i.e., the outer diameter, the inner diameter, or the velocity of the rotation of the blade 13, to be used. FIG. 18 shows the relationships between ratios of the outer diameter to the inner diameter of the blade 13 and the variability of the direction of the projection of the abrasive particles and of the frequencies of the rebounding of the abrasive particles. By varying the ratio of the outer diameter to the inner diameter of the blade 13, the variability of the frequency of the rebounding is significantly varied, and thus the variability of the projection direction of the abrasive particles is also varied. Therefore, the distribution of the projection can be concentrated by setting the inner diameter and the outer diameter of the blade 13 to a predetermined ratio. That is, the variability of the frequency of the rebounding of the abrasive particles becomes 0.3 or less by setting the ratio of the inner diameter and the outer diameter of the blade 13 to any of the ranges of 1:1.75 to 1:2.0, 1:2.5 to 1:2.9, or 1:3.6 to 1:4.1. Because these ranges cause that mean value n of the frequency of the rebounding to become close to the integer, the variability of the frequency of the rebounding of the abrasive particles is decreased. The mean value n of the frequency of the rebounding corresponding to these ranges is near 2, 3, and 4. This is the same as the case where the range of the ratio of the inner diameter and the outer diameter of the blade 13 is close to the integer of n=5 or more, although the range corresponding to n=5 or more is not specified herein in consideration of the size of the blade actually used. The distribution of the projection can be diffused by setting the ratio of the inner diameter and the outer diameter of the blade 13 to be outside these ranges. As the conditions of the experiment in this embodiment, the rate of rotation is 3000 rpm, the range of the abrasive particle discharge position is 10°, while the outer diameter and the inner diameter of the blade 13 are varied. Preferably, the rate of rotation is 2500 rpm or more. If the rate of rotation is less than 2500 rpm, the acceleration of the abrasive particles is insufficient, and the influence of the initial velocity of the abrasive particles causes the distance for the abrasive particles until they collide with the blade 13 to be increased such that the positions of the abrasive particles are significantly varied. Therefore, the abrasive particles may be readily distributed on the blade 13. Thus the variability of the direction of the projection of the abrasive particle is also increased. Similar to them, the range of the abrasive particle discharge position is preferably 5° to 20°. The respective embodiments just intend to illustrate the present invention, and are not intended to limit the present invention. For instance, the projection machine on which the present invention can be applied is not limited to the centrifugal projection machine as shown in the embodiments. The present invention can also be applied to a projection machine that includes a rotary plate that rotates by means of a driving motor, a plurality of blades mounted on the rotary plate, and a supply line having an outlet from which abrasive particles are fed to the blades. As the information on the state of projection of the abrasive particles, although both the distribution of the projection and the velocity of the projection are obtained in the above embodiments, just either one of them may be obtained, if desired. Patent applications by Hiroyasu Makino, Aichi JP Patent applications by Kyoichi Iwata, Aichi JP Patent applications by Sintokogio, Ltd. Patent applications in class MODELING BY MATHEMATICAL EXPRESSION Patent applications in all subclasses MODELING BY MATHEMATICAL EXPRESSION User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20090222244","timestamp":"2014-04-24T07:35:34Z","content_type":null,"content_length":"101077","record_id":"<urn:uuid:9b8f4b10-b175-47e0-adcf-e20531bd874d>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
Analyses of load stealing models based on differential equations Results 1 - 10 of 13 , 2006 "... We investigate balls-into-bins processes allocating m balls into n bins based on the multiple-choice paradigm. In the classical single-choice variant each ball is placed into a bin selected uniformly at random. In a multiple-choice process each ball can be placed into one out of d ≥ 2 randomly selec ..." Cited by 58 (7 self) Add to MetaCart We investigate balls-into-bins processes allocating m balls into n bins based on the multiple-choice paradigm. In the classical single-choice variant each ball is placed into a bin selected uniformly at random. In a multiple-choice process each ball can be placed into one out of d ≥ 2 randomly selected bins. It is known that in many scenarios having more than one choice for each ball can improve the load balance significantly. Formal analyses of this phenomenon prior to this work considered mostly the lightly loaded case, that is, when m ≈ n. In this paper we present the first tight analysis in the heavily loaded case, that is, when m ≫ n rather than m ≈ n. The best previously known results for the multiple-choice processes in the heavily loaded case were obtained using majorization by the single-choice process. This yields an upper bound of the maximum load of bins of m/n + O ( √ m ln n/n) with high probability. We show, however, that the multiple-choice processes are fundamentally different from the single-choice variant in that they have “short memory. ” The great consequence of this property is that the deviation of the multiple-choice processes from the optimal allocation (that is, the allocation in which each bin has either ⌊m/n ⌋ or ⌈m/n ⌉ balls) does not increase with the number of balls as in the case of the single-choice process. In particular, we investigate the allocation obtained by two different multiple-choice allocation schemes, , 2002 "... While transmission rates already achieve speeds beyond 40 Gb/s, today's network processors are only slowly approaching 10 Gb/s. In this paper we present a load-balancing scheme that enables system designers to bridge the performance gap using multiple slower NPs in parallel to serve high-speed links ..." Cited by 24 (0 self) Add to MetaCart While transmission rates already achieve speeds beyond 40 Gb/s, today's network processors are only slowly approaching 10 Gb/s. In this paper we present a load-balancing scheme that enables system designers to bridge the performance gap using multiple slower NPs in parallel to serve high-speed links. The proposed scheme works in a flowpreserving manner to ensure in-sequence packet delivery as well as local validity of connection state information, while avoiding inter-processor communication. The effectiveness of the algorithms is evaluated by simulation with extrapolated workloads, and the impact of specific parameters on system performance is the subject of a factor-relevance analysis. - In Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science (FOCS , 2001 "... In this paper we analyse a very simple dynamic work-stealing algorithm. In the workgeneration model, there are n (work) generators. A generator-allocation function is simply a function from the n generators to the n processors. We consider a fixed, but arbitrary, distribution D over generator-alloca ..." Cited by 24 (1 self) Add to MetaCart In this paper we analyse a very simple dynamic work-stealing algorithm. In the workgeneration model, there are n (work) generators. A generator-allocation function is simply a function from the n generators to the n processors. We consider a fixed, but arbitrary, distribution D over generator-allocation functions. During each time-step of our process, a generator-allocation function h is chosen from D, and the generators are allocated to the processors according to h. Each generator may then generate a unit-time task which it inserts into the queue of its host processor. It generates such a task independently with probability λ. After the new tasks are generated, each processor removes one task from its queue and services it. For many choices of D, the work-generation model allows the load to become arbitrarily imbalanced, even when λ < 1. For example, D could be the point distribution containing a single function h which allocates all of the generators to just one processor. For this choice of D, the chosen processor receives around λn units of work at each step and services one. The natural work-stealing algorithm that we analyse is widely used in practical applications and works as follows. During each time step, each empty "... Irregular and dynamic parallel applications pose significant challenges to achieving scalable performance on large-scale multicore clusters. These applications often require ongoing, dynamic load balancing in order to maintain efficiency. Scalable dynamic load balancing on large clusters is a challe ..." Cited by 14 (1 self) Add to MetaCart Irregular and dynamic parallel applications pose significant challenges to achieving scalable performance on large-scale multicore clusters. These applications often require ongoing, dynamic load balancing in order to maintain efficiency. Scalable dynamic load balancing on large clusters is a challenging problem which can be addressed with distributed dynamic load balancing systems. Work stealing is a popular approach to distributed dynamic load balancing; however its performance on large-scale clusters is not well understood. Prior work on work stealing has largely focused on shared memory machines. In this work we investigate the design and scalability of work stealing on modern distributed memory systems. We demonstrate high efficiency and low overhead when scaling to 8,192 processors for three benchmark codes: a producer-consumer benchmark, the unbalanced tree search benchmark, and a multiresolution analysis kernel. - SIAM Journal on Computing , 2005 "... We study the long-term (steady state) performance of a simple, randomized, local load balancing technique under a broad range of input conditions. We assume a system of n processors connected by an arbitrary network topology. Jobs are placed in the processors by a deterministic or randomized adversa ..." Cited by 9 (2 self) Add to MetaCart We study the long-term (steady state) performance of a simple, randomized, local load balancing technique under a broad range of input conditions. We assume a system of n processors connected by an arbitrary network topology. Jobs are placed in the processors by a deterministic or randomized adversary. The adversary knows the current and past load distribution in the network and can use this information to place the new tasks in the processors. A node can execute one job per step, and can also participate in one load balancing operation in which it can move tasks to a direct neighbor in the network. In the protocol we analyze here, a node equalizes its load with a random neighbor in the graph. "... In Grid applications the heterogeneity and potential failures of the computing infrastructure poses significant challenges to efficient scheduling. Performance models have been shown to be useful in providing predictions on which schedules can be based [1, 2] and most such techniques can also take a ..." Cited by 9 (6 self) Add to MetaCart In Grid applications the heterogeneity and potential failures of the computing infrastructure poses significant challenges to efficient scheduling. Performance models have been shown to be useful in providing predictions on which schedules can be based [1, 2] and most such techniques can also take account of failures and degraded service. However, when several alternative schedules are to be compared it is vital that the analysis of the models does not become so costly as to outweigh the potential gain of choosing the best schedule. Moreover, it is vital that the modelling approach can scale to match the size and complexity of realistic applications. In this - In Proceedings FOCS , 2003 "... We study the long term (steady state) performance of a simple, randomized, local load balancing technique. We assume a system of n processors connected by an arbitrary network topology. Jobs are placed in the processors by a deterministic or randomized adversary. The adversary knows the current and ..." Cited by 6 (2 self) Add to MetaCart We study the long term (steady state) performance of a simple, randomized, local load balancing technique. We assume a system of n processors connected by an arbitrary network topology. Jobs are placed in the processors by a deterministic or randomized adversary. The adversary knows the current and past load distribution in the network and can use this information to place the new tasks in the processors. The adversary can put a number of new jobs in each processor, in each step, as long as the (expected) total number of new jobs arriving at a given step is bounded by #n. A node can execute one job per step, and also participate in one load balancing operation in which it can move tasks to a direct neighbor in the network. In the protocol we analyze here, a node equalizes its load with a random neighbor in the graph. , 1999 "... Multithreading is a promising approach to address the problems inherent in multiprocessor systems, such as network and synchronization latencies. Moreover, the benefits of multithreading are not limited to loop-based algorithms but apply also to irregular parallelism. EARTH - Efficient Architecture ..." Cited by 2 (0 self) Add to MetaCart Multithreading is a promising approach to address the problems inherent in multiprocessor systems, such as network and synchronization latencies. Moreover, the benefits of multithreading are not limited to loop-based algorithms but apply also to irregular parallelism. EARTH - Efficient Architecture for Running THreads, is a multithreaded model supporting fine-grain, non-preemptive threads. This model is supported by a C-based runtime system which provides the multithreaded environment for the execution of concurrent programs. This thesis describes the design and implementation of a set of dynamic load balancing algorithms, and an in-depth study of their behavior with divide-and-conquer, regular, and irregular classes of applications. The results described in this thesis are based on EARTH-SP2, an implementation of the EARTH program execution model on the IBM SP-2, a distributed memory multiprocessor system. The main results of this study are as follows: ffl A randomizing load - In Proceedings of ISAAC’99 , 1999 "... Many applications in parallel processing have to traverse large, implicitly defined trees with irregular shape. The receiver initiated load balancing algorithm random polling has long been known to be very efficient for these problems in practice. For any ffl ? 0, we prove that its parallel executio ..." Cited by 1 (0 self) Add to MetaCart Many applications in parallel processing have to traverse large, implicitly defined trees with irregular shape. The receiver initiated load balancing algorithm random polling has long been known to be very efficient for these problems in practice. For any ffl ? 0, we prove that its parallel execution time is at most (1 + ffl)T seq =P + O(Tatomic + h( ffl +Trout +T split )) with high probability, where Trout , T split and Tatomic bound the time for sending a message, splitting a subproblem and finishing a small unsplittable subproblem respectively. The maximum splitting depth h is related to the depth of the computation tree. Previous work did not prove efficiency close to one and used less accurate models. In particular, our machine model allows asynchronous communication with nonconstant message delays and does not assume that communication takes place in rounds. This model is compatible with the LogP model. , 2013 "... Work stealing has proven to be an effective method for scheduling fine-grained parallel programs on multicore computers. To achieve high performance, work stealing distributes tasks between concurrent queues, called deques, assigned to each processor. Each processor operates on its deque locally exc ..." Cited by 1 (0 self) Add to MetaCart Work stealing has proven to be an effective method for scheduling fine-grained parallel programs on multicore computers. To achieve high performance, work stealing distributes tasks between concurrent queues, called deques, assigned to each processor. Each processor operates on its deque locally except when performing load balancing via steals. Unfortunately, concurrent deques suffer from two limitations: 1) local deque operations require expensive memory fences in modern weak-memory architectures, 2) they can be very difficult to extend to support various optimizations and flexible forms of task distribution strategies needed many applications, e.g., those that do not fit nicely into the divide-and-conquer, nested data parallel paradigm. For these reasons, there has been a lot recent interest in implementations of work stealing with non-concurrent deques, where deques
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=431110","timestamp":"2014-04-21T07:28:29Z","content_type":null,"content_length":"39906","record_id":"<urn:uuid:8f99d064-895c-496a-943a-be63eae0714a>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
Law of Atmospheres and Boltzmann Law The law of atmospheres, also known as the barometric law, states that the pressure n(y) as a function of height y varies as: According to the ideal gas law, a gas of N particles in the thermal equilibrium obeys the relationship PV = Nk[B]T. It is convenient to rewrite this equation in terms of the number of particles per unit volume of gas, n[V] = N/V. This quantity is important because it can vary from one point to another. In fact, our goal is to determine how n[V] changes in our atmosphere. We can express the ideal gas law in terms of n[V] as P = n[V]k[B]T. Thus, if the number density n[V] is known, we can find the pressure and vice versa. The pressure in the atmosphere decreases as the altitude increases because a given layer of air has to support the weight of the air above it — the greater the altitude, the less the weight of the air above that layer and the lower the pressure. To determine the variation in pressure with altitude, consider an atmospheric layer of thickness dy and the cross-sectional area A. Because the air is in static equilibrium, the upward force on the bottom of this layer, PA, must exceed the downward force on the top of the layer, (P + dP)A, by an amount equal to the weight of gas in this thin layer. If the mass of gas molecule in the layer is m, and the area a total of N molecules in the layer, then the weight of the layer is w = mgN = mgn[V]Ady. Thus A - (P + dP)A = mgn[V]Ady, Which reduces to dP = - mgn[V]dy Because P = n[V]k[B]T, and T is assumed to remain constant, therefore dP = n[V]k[B]T dn[V]. Substituting this into the above expression for dP and rearranging gives Integrating this expression, we find Boltzmann distribution law Boltzmann distribution law is important in describing the statistical mechanics of a large number of particles. It states that the probability of finding the particles in a particular energy state varies exponentially as the negative of the energy divided by k[B]T. All the particles would fall into the lowest energy level, except that the thermal energy k[B]T tends to excite the particles to higher energy levels. Distribution of particles in space is Where n[0] is the number of particles where U = 0 This king of distribution applies to any energy the particles have, such as kinetic energy. In general, the relative number of particles having energy E is
{"url":"http://www.engineering.com/Library/ArticlesPage/tabid/85/ArticleID/211/categoryId/14/Law-of-Atmospheres-and-Boltzmann-Law.aspx","timestamp":"2014-04-16T10:10:08Z","content_type":null,"content_length":"45207","record_id":"<urn:uuid:dd909d6c-e79f-4b92-bebc-2a01c9d6d539>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
Opa Locka Prealgebra Tutor Find an Opa Locka Prealgebra Tutor ...I have been a teacher for more than 20 years. I have taught Mathematics, Chemistry, Biology, and for almost 10 years, Spanish and English as a second language. I have also translated a couple of books and manuals into Spanish. 16 Subjects: including prealgebra, chemistry, Spanish, geometry ...Once those calculation skills are there, or in progress, then we can focus on making sense of geometry, and going deeper into the matter. It is very important for me that the student understands why we are using this or that method to resolve a Geometry problem. When someone actually understand... 20 Subjects: including prealgebra, reading, English, writing ...I am very patient and understand that some students have a fear of mathematics, often due to a lack of understanding the basics. Students often need to be retaught what was not learned previously with an experienced and patient teacher. I have had great results with my previous students on a one on one basis, I even have had students improving gradually their grade from a F to an A. 48 Subjects: including prealgebra, reading, statistics, chemistry ...I have helped students to increase their scores on standardized tests by more than 100 points, and helped many others to make the national merit list. I have had this level of success because I am flexible enough to change my teaching style to suit the learning style of the individual student. ... 24 Subjects: including prealgebra, calculus, statistics, geometry ...I was a classroom teacher of Hebrew and Judaic Studies for elementary school age students and have tutored Hebrew reading and successfully prepared hundreds of students for their Bar or Bat Mitzvah service. I have also taught Hebrew as a modern foreign language to adults, including first and sec... 36 Subjects: including prealgebra, reading, English, writing
{"url":"http://www.purplemath.com/opa_locka_fl_prealgebra_tutors.php","timestamp":"2014-04-16T04:14:45Z","content_type":null,"content_length":"24238","record_id":"<urn:uuid:dc7b2740-79ff-4a42-bcec-39577d50e126>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
Find V(t), T > 0, In The Circuit Below. (Note: ... | Chegg.com Find v(t), t>0, in the circuit below. Please show work, thanks, Image text transcribed for accessibility: Find v(t), t > 0, in the circuit below. (Note: the resistive network to the left of the capacitor can be represented as a single equivalent resistor.) Electrical Engineering
{"url":"http://www.chegg.com/homework-help/questions-and-answers/find-v-t-t-0-circuit--note-resistive-network-left-capacitor-represented-single-equivalent--q4489850","timestamp":"2014-04-18T18:12:48Z","content_type":null,"content_length":"21154","record_id":"<urn:uuid:e2a4580e-d2ba-4b94-a63f-3d05e9d20046>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
Re:Sample rate Date Prev Date Next ] [ Thread Prev Thread Next ] [ Date Index Thread Index Author Index Re:Sample rate >>>Bell Labs researcher Harry Nyquist develops Sampling Theory. It >>>states provides that if a signal is sampled at twice its nominal >>>highest frequency, the samples will contain all of the information >>>in the original signal. >Which is true! Millions of mathematicians have prooved it. if it was true, then you'd be able to sample a 22049Hz pure sine wave at 44100Hz, (you wouldn't need a filter, as there's no harmonics), and then you'd be able to re-construct it. Fourier is about finding the amplitude of harmonics within a periodic signal of known frequency. Now, the Nyquist Theorum refers to the possibility of (for instance) sampling a 22049Hz pure sine at 44098Hz with the phase aligned correctly. In that case we get all the information. (i.e. the amplitude, the one thing we didn't know already) Proof only counts if it's relevant to the situation. In the case of digital audio, that proof isn't relevant. >>Which is clearly not true :-) >>There's no way to keep the phase information for a signal sampled >>at only twice it's frequency. >>Only the amplitude. >guess what students ask their teachers of sampling theorems? I have very little respect for educational establishments :-) but I did actually stay long enough at college to see fourier explained >They usually ask the same as you do and they get an answer they can >understand. You have to do the mathematics. I do not know anybody >who does the mathematics behind it, still claims that its not possible. >What usually is forgotten, is that the Nyquist theorem is aimed at >infinite observation time. it's "aimed" at a periodic signal of known period, ( so it's assumed infinite)
{"url":"http://www.loopers-delight.com/LDarchive/200512/msg00642.html","timestamp":"2014-04-19T03:24:19Z","content_type":null,"content_length":"6512","record_id":"<urn:uuid:553aebd0-bd15-47ed-bb53-ebf9f484cc96>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
Spherical Triangles on Very Big Spheres Copyright © University of Cambridge. All rights reserved. 'Spherical Triangles on Very Big Spheres' printed from http://nrich.maths.org/ In a previous problem, 'Pythagoras on a sphere', it was shown that for a right-angled spherical triangle with sides a, b and c drawn on a sphere of radius R we have the relationship $$ \cos\left(\frac{a}{R}\right) = \cos\left(\frac{b}{R}\right) \cos\left(\frac{c}{R}\right) $$ a) By expanding $\cos(x)$ using the approximation $\cos(x) = 1-0.5 x^2$ show that for small triangles on large spheres the usual flat Pythagoras' theorem approximately holds. b) In a flat salt-plane a large right angled triangle is drawn with shorter sides equal to 10km. Approximating the radius of the Earth to be 6000km, what is the percentage error in the length of the hypotenuse calculated using Pythagoras' theorem? c) Two radio transmitters A and B are located at a distance x from each other on the equator. A third transmitter C is located at a distance x due north of A. A telecoms engineer who doesn't know about about spherical triangles needs to lay a cable between B and C and calculates the distance using the flat version of the Pythagoras theorem. Investigate the differences and percentage errors given by the two Pythagoras theorems for different distances x. At what value of x is the percentage error to the true length of cable required greater than 0.1%? Do you conclude that telecoms engineers need to know about spherical triangles?
{"url":"http://nrich.maths.org/5623/index?nomenu=1","timestamp":"2014-04-17T18:28:19Z","content_type":null,"content_length":"4975","record_id":"<urn:uuid:6f31bd43-8bd0-4f14-8df3-234f84ad7bda>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
sum of 5 Digit no. Re: sum of 5 Digit no. Id' say you can use each digit at most as many times as it appears in that sequence. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=19959","timestamp":"2014-04-18T19:01:54Z","content_type":null,"content_length":"14039","record_id":"<urn:uuid:bef641ad-a390-4d8f-b6a6-eb0e9a529308>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
How to demonstrate bigO cost of algorithms? Tim Daneliuk tundra at tundraware.com Wed Jun 2 20:34:41 CEST 2004 Rusty Shackleford wrote: > Hi - > I'm studying algorithms and I want to write a python program that > calculates the actual runtimes. > I want to increment up some global variable called n in my program so > that I can see the relationship between the size of the problem and the > number of steps. > Here's my clumsy attempt to implement this: One Thought: Bear in mind that O(n) is the _asymptotic_ behavior for an algorithm. It does not consider the "constant time" factors in an algorithm like the initial instantiation of variables, loops, objects and so forth. It only considers the complexity of the algorithm itself. For example, suppose you have two algorithms with an asymptotic complexity of O(n), but the first takes 5 seconds to set up its outer loops and control variables. Say the second takes 5 milliseconds to do the same thing. From an algorithmic complexity POV these are both O(n). The point is that you have to use a sufficiently large 'n' to make this constant overhead irrelevant to the test. If you are looking for n*log(n) growth, you're probably not going to see it for n=1, n=2, n=3 ... You're going to need n=100, n=1000, n=10000. IOW, choose a large enough iteration count to dwarf the constant time overhead of the program... Tim Daneliuk tundra at tundraware.com PGP Key: http://www.tundraware.com/PGP/ More information about the Python-list mailing list
{"url":"https://mail.python.org/pipermail/python-list/2004-June/295384.html","timestamp":"2014-04-18T09:27:04Z","content_type":null,"content_length":"4252","record_id":"<urn:uuid:20744feb-b484-4c47-93d7-d9a4bbb96310>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
Tim Porter Welcome to the personal area of Tim Porter within the nLab. Below you will find some indication of my research interests, past and present, with some indication of my contribution to the overall picture, together with lists of relevant papers, linked to copies where there are such online. I also intend to put up here some sketches of some projects that I would love others to ‘help with’ or which are ‘in progress’, i.e., a snapshot of possible future work. Some of these projects are ‘work in progress’, incompleted papers, research grant applications that were unfunded because they were too much in advance of the field at the time of submission (my story and I’m sticking to it) and so on. Some of these I mean to develop here in public, others may more appropriately be done via e-mail or on slightly restricted pages. We will see how it goes. I may also put links to project pages and some preprints here, including a more up to date version of the Menagerie. Finally I may put drafts for n-Lab entries that I hope, one day, to add to the general list. Main Research Areas (past and present): Simplicial foundations of homotopy coherence: The problem of establishing simplicial foundations for homotopy coherence is discussed in more detail in a separate entry. • (with J.-M. Cordier), Vogt’s Theorem on Categories of Homotopy Coherent Diagrams, Math. Proc. Camb. Phil. Soc. 100 (1986) pp. 65-90. • (with J.-M. Cordier), Maps between homotopy coherent diagrams, Top. and its Applications, 28 (1988) 255-275. • (with J.-M. Cordier), Fibrant diagrams, rectifications and a construction of Loday, J.Pure Appl. Algebra, 67 (1990) 111-124. • (with J.-M. Cordier), Categorical Aspects of Equivariant Homotopy, Proc. ECCT, Applied Cat.Structures.4 (1996) 195-212. • (with J.-M. Cordier), Homotopy Coherent Category Theory, Trans. Amer. Math. Soc. 349 (1997) 1-54. • (with R. Brown, M. Golasinski and A. Tonks) Spaces of maps into classifying Spaces for equivariant crossed complexes, Indagationes Mathematica, 8 (1997) 157-172. • (with R. Brown, M. Golasinski and A. Tonks), Spaces of maps into Classifying Spaces for equivariant crossed complexes, II: the general topological case, K-Theory, 23 (2001) 129-155. • (with P.J. Ehlers) Joins for (augmented) simplicial sets, Jour. Pure Applied Algebra, 145 (2000) 37-44. • (with P.J.Ehlers) Ordinal subdivision and special pasting in quasicategories, Advances in Math. 217 (2007), No 2. pp 489-518 • Abstract Homotopy Theory, the interaction of category theory and homotopy theory, (survey article, updated version of lecture notes from summer school course at Bressanone), Cubo, 5 (2003) 115-165, 2003) • (with K.H. Kamps), Abstract Homotopy and Simple Homotopy Theory, World Scientific, 462pp (ISBN 981-02-1602-5) June 1997 Algebraic models for homotopy $n$-types and their applications in homological and homotopical algebra, and in the allied area of algebraic homotopy: The finding of algebraic models for homotopy n-types is discussed in more detail in a separate entry. • $n$-Types of Simplicial Groups and Crossed $n$-cubes, Topology, 32, (1993) 5-24. • (with G. Donadze, and N. Inassaridze) N-fold Cech derived functors and generalised Hopf type formulas, K-Theory 35 (2005) 341 – 373. • (with G.J. Ellis) Free and Projective Crossed Modules and the Second Homology Group of a Group, Jour. Pure Applied Alg., 40 (1986) pp. 27-32. • (with P.J. Ehlers) Varieties of simplicial groupoids, I, Crossed complexes, Jour. Pure Applied Algebra, 120 (1997) 221 – 233. Erratum: Jour. Pure Applied Algebra, 134 (1999) 221-233. • (with K.H.Kamps) 2-Groupoid Enrichments in Homotopy Theory and Algebra, K-Theory, 25 (2002) 373-409. • (with R. Brown, and M. Bullejos), Crossed complexes, free crossed resolutions and graph products of groups, in Recent Advances in Group Theory and Low-Dimensional Topology , Research and Exposition in Mathematics vol 27, Helderman Verlag, 2003, pp. 11-26 • (with R. Brown, E. Moore and C.D. Wensley), Crossed complexes, and free crossed resolutions for amalgamated sums and HNN-extensions of groups, Georgian Math. J., Special issue for the 70th birthday of H. Inissaridze, 9 (2002) 623 - 644.89. • (with A.Mutlu) Applications of Peiffer pairings in the Moore complex of a simplicial group,Theory and Applications of Categories, 4, No. 7, (1998) 148-173. • (with A.Mutlu) Freeness conditions for 2-crossed modules and complexes,Theory and Applications of Categories, 4, No. 8, (1998) 174-194. • (with A.Mutlu) Free crossed resolutions from simplicial resolutions with given $CW$-basis, Cahiers Top. Géom. Diff. Catégoriques, 50 (1999) 261-283. • (with A.Mutlu) Freeness conditions for crossed squares and squared complexes, K-Theory, 20, (2000) 345 - 368. • (with A.Mutlu) Iterated Peiffer pairings in the Moore complex of a simplicial group, Applied Categorical Structures, 9 (2001) 111-130. Simplicial and categorical methods in TQFTs and HQFTs: These are discussed in more detail in a separate entry. • (with V. Turaev) Formal Homotopy Quantum Field Theories, I: Formal Maps and Crossed C-algebras, Journal of Homotopy and Related Structures 3(1), 2008, 113 - 159. (available also at ArXiv: math.QA • Formal Homotopy Quantum Field Theories II: Simplicial Formal Maps, in Cont. Math. 431, p. 375 - 404 (Streetfest volume: Categories in Algebra, Geometry and Mathematical Physics - edited by A. Davydov, M. Batanin, and M. Johnson, S. Lack, and A. Neeman)(available also at ArXiv: math.QA/0512034). • Interpretations of Yetter’s notion of G-coloring : simplicial fibre bundles and non-abelian cohomology, J. Knot Theory and its Ramifications 5 (1996) 687-720. • Topological Quantum Field Theories from Homotopy n-types, J. London Math. Soc. (2) 58 (1998) 723-732. • (with J. Faria Martins) On Yetter’s Invariant and an Extension of the Dijkgraaf-Witten Invariant to Categorical Groups, Theory and Applications of Categories,18, 2007, No. 4, pp 118-150; (available also at ArXiv: math.QA/0608484) Strong shape, coherent prohomotopy, and the stability problem: Strong shape, coherent prohomotopy, and the stability problem is discussed in a separate entry. • Stability Results for Topological Spaces, Math. Zeit. 150, 1974, pp. 1-21. • Abstract homotopy theory in procategories, Cahiers Top. Géom. Diff., 17, 1976, pp. 113-124. • Coherent prohomotopical algebra, Cahiers Top.Géom.Diff., 18, (1978) pp. 139-179. • Coherent prohomotopy theory, Cahiers Top. Géom. Diff., 19, (1978) pp. 3-46 • Cech and Steenrod homotopy and the Quigley exact couple in strong shape and proper homotopy theory, Jour. Pure Applied Alg., 24 (1982) pp. 303-312. • Reduced Powers, Ultra Powers and Exactness of Limits, Jour. Pure Applied Alg., 26 (1982) pp. 325-330. I have various survey articles and other things that could be useful. I will put them (or links to them) here in case they are: Abstract Homotopy Theory: The Interaction of Category Theory and Homotopy theory This article is an expanded version of notes for a series of lectures given at the Corso estivo Categorie e Topologia organised by the Gruppo Nazionale di Topologia del M.U.R.S.T. in Bressanone, 2 - 6 September 1991. Those notes have been partially brought up to date by the addition of new references and a summary of some of what has happen in the area in the last 20 years (inadequate at A copy can be found here. $\mathcal{S}$-categories, $\mathcal{S}$-groupoids, Segal categories and quasicategories The notes were prepared for a series of talks that I gave in Hagen in late June and early July 2003, and, with some changes, in the University of La Laguna, the Canary Islands, in September, 2003. They assume the audience knows some abstract homotopy theory and as Heiner Kamps was in the audience in Hagen, it is safe to assume that the notes assume a reasonable knowledge of our book, or any equivalent text if one can be found! A copy can be found here. Crossed menagerie: The crossed menagerie is intended as a set of notes outlining an approach to non-Abelian cohomology, stacks, etc., and Grothendieck’s conjectured extension of Galois-Poincaré theory. The title refers to the array of strange beasties that occur as generalisations of crossed modules. (The present version is over 921 pages long, but the above links to a much shorter 11 chapter 444 page version.) A cut down version has been prepared for the LI2012 week on Algebra and computation (27 February – 2 March, 2012). A copy can be obtained by contacting me by e-mail or here. Profinite Algebraic Homotopy This is a project to extend the results of crossed homological algebra and algebraic homotopy to the profinite setting. There is a link to the first seven chapters of a draft ‘monograph’ that grew out of the thesis of a student (Fahmi Korkes) in the early 1980s. I have added a lot of extra material, and I hope this will form a book in the not too distant future. Anyone willing to have a look at the longer version (at present 979 pages, and not yet finished!) as a proof reader and to give me comments, please e-mail me (at t dot porter AT bangor DOT ac DOT uk). (As it may be published, for the moment, I do not really want to make the full version freely available over the net.) Generating Topological Quantum Field Theories: This is a slightly edited version of an (unsuccessful) research proposal that I submitted in 2002. An earlier version had been refused funding a few years earlier. I would be interested in reviewing this project in the light of the recent results on the cobordism conjecture and the start of a classification of TQFTs. In the report at the time of the refusal, I was told in no uncertain terms that the ideas in the proposal were rubbish, and that I was putting them forward just because I thought they could be done! Heigh-ho! That was the view of one referee. He was very wrong, but probably pushed another nail into the coffin of the mathematics department at Bangor. Like democracy, Peer review is the worst of all possible systems, except for all the others! PS. I did completely know what I was proposing, and reactions of other referees were very positive. Shortly afterwards, Turaev put forward his Homotopy Quantum Field theories which were, to some extent, in a similar vein at about the same time as the earlier proposal. Note that many parts of the proposal have still not been done. Some of what I planned to do then is now being put in the Menagerie. Some of that has been extracted as a cut-down version for use by particiapants at the Workshop and School on Higher Gauge Theory, TQFT and Quantum Gravity Lisbon, 10-13 February, 2011. I will have a version of those notes here. Some things that I wanted to do, I still do not know how to do. Higher syzygies, algebraic K-theory and the Kapranov-Saito conjectures Work on homotopical syzygies in presentations of the Steinberg group $St(A)$ of a ring $A$ has indicated links with the higher $K$-groups of $A$. This work of Kapranov and Saito contains several interesting conjectures that link such syzygies with labelled polytopes, including the Stasheff polytopes (associahedra). The proposal was to approach some of these conjectures using a new combination of ideas and techniques from combinatorial group theory, the construction of resolutions, ideas of ‘higher generation by subgroups’, the methods of Volodin model for higher K-theory and the global actions of Tony Bak. The project had the following objectives: 1. To prove the Kapranov-Saito conjecture that the space of $A$-labelled hieroglyphs is homotopically equivalent to the Volodin space of $A$, and to study non-stable analogues of this; 2. To analyse and generalise the methods of Stefan Holz, extending them to give detailed information on higher syzygies in families of groups such as classical linear groups, braid groups, etc. 3. To link homotopical and homological syzygies so as to extend the isomorphism $H_2(St(A), \mathbb{Z}) \cong K_3(A)$ to link higher $K$-groups with higher homology. Topological Data Analysis: Topological Data Analysis (TDA) has recently emerged from the general area of Computational Topology. In this discussion, I try to examine not only TDA itself, but to put it in a historical context by identifying precursor theories both in algebraic topology and, surprisingly, within theoretical physics. The reason for doing this is not just to be scholarly, but is also to, hopefully, suggest some additions to the toolkit of TDA, which may widen its applicability. This, in turn, raises new questions both within TDA itself, and also within the related mathematical area of Shape Theory. (The link gives a page extracted from a longer document. If you want a copy of that just ask.) Multi-Agent Systems: Mathematical Models for Multi-Agent Systems. There are several aspects that need to be looked at: algebraic, logical, coalgebraic and geometric/combinatorial. One project involving a categorical approach is: Categorical / coalgebraic perspectives on Multiagent Systems. This concentrates on the logics that result from MASs. The structures are combinatorial or more exactly coalgebraic and are closely related to those underlying global actions (see the mathematical research topics). The categorical structure of these objects is fascinating, especially as several known constructions from epistemic logic would seem to be analogues of well known homotopy theoretic constructions. Specific aims include (a) the construction of mapping space objects (cartesian closedness); (b) the study of interactions between communicating MASs with different classes of agents. This might possibly use the theory of Institutions (Burstall and Goguen) to account for the interaction between the models. Grothendieck’s letters Whilst Grothendieck was writing Pursuing Stacks, there was a correspondence, initially between himself and Ronnie Brown, and later for a short period, with myself. It is now thirty years since those letters were written. I intend, here, to put some extracts that might be of use or interest to people. To put entire copies would seem to risk invasion of privacy complications, but until these are made public through the Documents mathématiques project, this seems the best thing to do. (For the moment I will put them here, with possibly later transfer to an n-Lab page.)
{"url":"http://ncatlab.org/timporter/show/HomePage","timestamp":"2014-04-19T05:00:19Z","content_type":null,"content_length":"31796","record_id":"<urn:uuid:842157b3-2404-485e-8b2c-b21d7a3a988d>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00186-ip-10-147-4-33.ec2.internal.warc.gz"}
Introductory Courses Introductory physics courses for premedical, predental, and other preprofessional students may be chosen from CAS PY 105, 106, or 211, 212, or 241, 242. CAS PY 105, 106 and 241, 242 cover all of classical and modern physics in two semesters. CAS PY 211, 212 cover classical physics only. CAS PY 212 may be followed by CAS PY 313, which includes modern physics. Physics and other science concentrators or engineers with an interest in physics may elect the more intensive CAS PY 251, 252, which is usually followed by CAS PY 351. The four introductory sequences differ primarily in the level of mathematics employed. CAS PY 105, 106 assume a knowledge of algebra and trigonometry only. The other sequences require knowledge of calculus, which is used most extensively in CAS PY 251, 252. Many of these courses include demonstrations supplied by the Physics Department Lecture Demonstration Facility. Prereq: one year of high school physics. A historical survey of modern physics, focusing on quantum mechanics and relativity as applied to the microworld (subatomic physics) and the macroworld (the early universe). Covers exotic phenomena from quarks to quasars, from neutrinos to neutron stars. For non-science majors. Meets with UNI NS 100. 4 cr, 1st sem. (NS) Conceptual introduction to physical law as portrayed in film. Quantitative understanding using simple estimates, elementary physics, and dimensional analysis. Kinematics; forces; conservation laws; heat and temperature; atoms, molecules and materials. Sample films: Speed; Armageddon; Independence Day; X-Men; The Sixth Sense; Contact. 4 cr, 2nd sem. (NS) (lab) Prereq for CAS PY 106: CAS PY 105 or equivalent. Satisfies premedical requirements; presupposes algebra and trigonometry. Principles of classical and modern physics. Mechanics, conservation laws, heat, light, electricity and magnetism, waves, light and optics, atomic and nuclear physics. Lectures, discussions, and laboratory. CAS PY 105: 4 cr, either sem; CAS PY 106: 4 cr, either sem. (NS) Prereq: freshmen with declared majors in physics. Seminar where freshman physics majors learn successful strategies for studying physics and become familiar with BU’s policies, procedures, resources, and extracurricular activities. Exploration of research and career opportunities through invited speakers, book discussions, and laboratory tours. 1 cr, 1st sem. Prereq: CAS MA 123 or equivalent; coreq: CAS MA 124, MA 127, or consent of instructor for students concurrently taking MA 123. Prereq for CAS PY 212: CAS PY 211 or equivalent. For premedical students who wish a more analytical course than CAS PY 105, 106, and for science concentrators and engineers. Basic principles of physics emphasizing Newtonian mechanics, conservation laws, thermal physics, electricity and magnetism, geometrical optics. Lectures, discussions, and laboratory. CAS PY 211: 4 cr, either sem; CAS PY 212: 4 cr, either sem. (NS) (lab) Prereq: musical performance experience or consent of instructor (no physics prerequisite). An introduction to musical acoustics, which covers vibrations and waves in musical systems, intervals and the construction of musical scales, tuning and temperament, the percussion instruments, the piano, the string, woodwind and brass instruments, room acoustics, and the human ear and psychoacoustical phenomena important to musical performance. Some aspects of electronic music are also discussed. 4 cr, 2nd sem. (NS) Prereq: CAS MA 123 or equivalent; coreq: CAS MA 124, MA 127, or consent of instructor for students concurrently taking CAS MA 123. Prereq for CAS PY 242: CAS PY 241 or equivalent. Calculus-based introduction to principles and methods of physics. Mechanics, heat, light, electricity and magnetism, atomic and nuclear physics, and relativity are treated. Topics relevant to medical science are emphasized. Ideal for premedical students. Lectures, discussions, and laboratory. 4 cr each, CAS PY 241, 1st sem; CAS PY 242, 2nd sem. (NS) (lab) Prereq: CAS MA 123 or equivalent; coreq: CAS MA 124, MA 127, or consent of instructor for students currently enrolled in CAS MA 123. Prereq for CAS PY 252: CAS PY 251 or equivalent. Introduction to mechanics, conservation laws, heat and thermodynamics, electrostatics, magnetism, alternating currents, electromagnetic radiation, geometrical optics. Primarily for physics, mathematics, and astronomy concentrators, but open to other students with a strong background in mathematics. Lectures, discussions, and laboratory. 4 cr each, CAS PY 251, 1st sem; CAS PY 252, 2nd sem. (NS) (lab) Prereq: CAS PY 211, 212 and CAS MA 124. Waves and physical optics, relativistic mechanics, experimental foundations of quantum mechanics, atomic structure, physics of molecules and solids, atomic nuclei and elementary particles. Along with CAS PY 211, 212, PY 313 completes a three-semester introductory sequence primarily intended for students of engineering. 4 cr, either sem. Prereq: CAS PY 251, 252 (or 211, 212) and CAS MA 124; coreq: CAS MA 225. Introduction to special relativity, foundations of quantum theory, and introduction to wave mechanics, topics in atomic and molecular structure, solid state, and nuclear physics. Lectures, discussions, and laboratory. 4 cr, 1st sem. Prereq: CAS PY 351. Continuation of CAS PY 351. An introduction to modern physics including quantum mechanics of atoms and molecules, condensed matter physics, nuclear physics, and elementary particle physics. Labs are a required course component. 4 cr, 2nd sem. Prereq: CAS PY 251 and 252 (or 211 and 212), and CAS MA 225, or consent of instructor. First and second order ordinary differential equations. Partial differential equations (waves, heat, Schrödinger) and series solutions of differential equations. Vectors and vector calculus. Matrices, matrix algebra, and matrix transformations. Rotations, similarity, unitarity, hermiticity, eigenvalues, and eigenvectors. 4 cr, 2nd sem. Prereq: CAS PY 212 or PY 252, CAS MA 124, or consent of instructor. A survey of practical electronics for all College of Arts & Sciences science students wishing to gain a working knowledge of electronic instrumentation, and in particular, its construction. Two four-hour laboratory-lecture sessions per week. 4 cr, 2nd sem.
{"url":"http://physics.bu.edu/courses","timestamp":"2014-04-21T14:42:18Z","content_type":null,"content_length":"19257","record_id":"<urn:uuid:e78171ea-72f7-4f4a-b9aa-d5cf275bfba9>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
Select all that apply. Which of the following are appropriate acceleration units? Select all that apply. Which of the following are appropriate acceleration units? km/hr2, m/s/s, ft/s, miles/hr/min, sec/km/m Not a good answer? Get an answer now. (FREE) There are no new answers.
{"url":"http://www.weegy.com/?ConversationId=4B43C7DB","timestamp":"2014-04-18T16:40:05Z","content_type":null,"content_length":"39066","record_id":"<urn:uuid:1594eb97-b5b4-494c-9528-fab44a75556b>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
Hypothesis Testing March 4th 2008, 10:28 PM Hypothesis Testing Can someone help me with this?It seems so simple yet I can't seem to understand the brief one page introduction to hypothesis testing given by my textbook. You are the pilot of a jumbo jet. You smell smoke in the cockpit. The nearest airport is less than 5 minutes away. Should you land the plane immediately? I wrote down: H0:u = 5 H1:u < 5 Thanks for any help.. April 10th 2008, 06:13 AM hypothesis tests next you have to write down the distribution of X, which is written (X~B) X being the number of your sample, and B being the probability. After that you need to put the values into the formula :]
{"url":"http://mathhelpforum.com/statistics/30022-hypothesis-testing-print.html","timestamp":"2014-04-21T13:48:35Z","content_type":null,"content_length":"3844","record_id":"<urn:uuid:51486b16-4cc8-4006-b579-4f1d3c8cb44b>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptology ePrint Archive: Report 2009/043 Image Encryption by Pixel Property SeparationKarthik Chandrashekar Iyer and Aravinda SubramanyaAbstract: Pixels in an image are essentially constituted of two properties, position and colour. Pixel Property Separation, a radically different approach for Symmetric-key image encryption, separates these properties to disturb the semantics of the image. The scheme operates in two orthogonal stages each requiring an encryption key. The first stage creates the Position Vector, an ordered set of Pixel Position Information controlled by a set of plaintext dependent Random Permutations. A bitmap flagging the presence of all the 24 bit colours is generated. The second stage randomly positions the image width and height within the ciphertext and finally applies a byte transposition on the ciphertext bytes. The complete set of image properties including width, height and pixel position-colour correlation are obscured, resulting in a practically unbreakable encryption. The orthogonality of the stages acts as an anti-catalyst for cryptanalysis. The information retrieved from compromising a stage is totally independent and cannot be used to derive the other. Classical cryptanalytic techniques demand huge number of attempts, most failing to generate valid encryption information. Plaintext attacks are rendered ineffective due to the dependency of the Random Permutations on the plaintext. Linear and Differential cryptanalysis are highly inefficient due to high Diffusion and Confusion. Although the paper describes the algorithm as applied to images, its applicability is not limited to images only. The cryptographic strength and the efficiency of operation is independent of the nature of the plaintext. Category / Keywords: secret-key cryptography / Pixel Property Separation, Image Encryption, Cryptanalytic Error Avalanche EffectDate: received 25 Jan 2009Contact author: kiyer82 at gmail comAvailable format(s): PDF | BibTeX Citation Note: Although the paper describes the algorithm as applied to images, its applicability is not limited to images only. The cryptographic strength and the efficiency of operation is independent of the nature of the plaintext. For example, in the case of a non-image plaintext, each set of 3 bytes shall represent a 24 bit pixel. Version: 20090129:145431 (All versions of this report) Discussion forum: Show discussion | Start new discussion[ Cryptology ePrint archive ]
{"url":"http://eprint.iacr.org/2009/043","timestamp":"2014-04-20T05:45:02Z","content_type":null,"content_length":"3682","record_id":"<urn:uuid:2526fc14-7928-4cfd-bdca-111ec7a3582d>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: The Simpsons and the ABC conjecture :-) Replies: 1 Last Post: Apr 7, 2013 4:31 PM Messages: [ Previous | Next ] Re: The Simpsons and the ABC conjecture :-) Posted: Apr 7, 2013 4:31 PM On 04/07/2013 03:17 PM, fc3a501@uni-hamburg.de wrote: > Remember the 3D adventure with the fake Fermat > counter-example? Not so impressive in *absolute* terms, > since you can easily give an infinite set of solutions > to a^3+b^3-c^3=1. On the other hand, that example had > ^12. > Surely there has been work done on a^n+b^n-c^n="small" > (I just have no idea how to google it :-) and it > somehow relates to the ABC stuff. > Mind to drop a link? The ABC conjecture is about numbers a, b, c >=1, integers, such that a + b = c. There's a list of "best" (a, b, c) triplets: those that come highest in a figure of merit 2 + (3^10)*109 = 23^5 due to E.R. , 1987 < http://www.math.leidenuniv.nl/~desmit/abc/index.php?set=1 > Jesus is an Anarchist. -- J.R. Date Subject Author 4/7/13 The Simpsons and the ABC conjecture :-) Hauke Reddmann 4/7/13 Re: The Simpsons and the ABC conjecture :-) David Bernier
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2445336&messageID=8860024","timestamp":"2014-04-21T11:04:16Z","content_type":null,"content_length":"17987","record_id":"<urn:uuid:39d2e457-fd42-486d-9b1f-2b7a7ce7816c>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
Functional Analysis (1998) The aim of this course is to give a very modest introduction to the extremely rich and well-developed theory of Hilbert spaces, an introduction that hopefully will provide the students with a knowledge of some of the fundamental results of the theory and will make them familiar with everything needed in order to understand, believe and apply the spectral theorem for selfadjoint operators in Hilbert space. This implies that the course will have to give answers to such questions as - What is a Hilbert space? - What is a bounded operator in Hilbert space? - What is a selfadjoint operator in Hilbert space? - What is the spectrum of such an operator? - What is meant by a spectral decomposition of such an operator?
{"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/simple/query/*%3A*/browsing/true/doctypefq/lecture/start/0/rows/10/subjectfq/Funktionalanalysis+","timestamp":"2014-04-18T05:41:38Z","content_type":null,"content_length":"15269","record_id":"<urn:uuid:4d15789b-9b2e-45c2-8394-354ed60bec85>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
Stable sorting and merging with optimal space and time bounds - The Computer Journal , 1995 "... ..." , 1990 "... In an earlier research paper [HL1], we presented a novel, yet straightforward linear-time algorithm for merging two sorted lists in a fixed amount of additional space. Constant of proportionality estimates and empirical testing reveal that this procedure is reasonably competitive with merge routines ..." Cited by 8 (0 self) Add to MetaCart In an earlier research paper [HL1], we presented a novel, yet straightforward linear-time algorithm for merging two sorted lists in a fixed amount of additional space. Constant of proportionality estimates and empirical testing reveal that this procedure is reasonably competitive with merge routines free to squander unbounded additional memory, making it particularly attractive whenever space is a critical resource. In this paper, we devise a relatively simple strategy by which this efficient merge can be made stable, and extend our results in a nontrivial way to the problem of stable sorting by merging. We also derive upper bounds on our algorithms' constants of proportionality, suggesting that in some environments (most notably external file processing) their modest run-time premiums may be more than offset by the dramatic space savings achieved. , 2007 "... We present a space-efficient algorithm for reporting all k intersections induced by a set of n line segments in the plane. Our algorithm is an in-place variant of Balaban’s algorithm and, in the worst case, runs in O(n log2 n+k) time using O(1) extra words of memory in addition to the space used f ..." Cited by 8 (2 self) Add to MetaCart We present a space-efficient algorithm for reporting all k intersections induced by a set of n line segments in the plane. Our algorithm is an in-place variant of Balaban’s algorithm and, in the worst case, runs in O(n log2 n+k) time using O(1) extra words of memory in addition to the space used for the input to the algorithm. - Algorithms - ESA 2004. Volume 3221 of Lecture Notes in Computer Science , 2004 "... Abstract. We introduce a new stable minimum storage algorithm for merging that needs O(m log ( n + 1)) element comparisons, where m and m n are the sizes of the input sequences with m ≤ n. According to the lower bound for merging, our algorithm is asymptotically optimal regarding the number of compa ..." Cited by 2 (2 self) Add to MetaCart Abstract. We introduce a new stable minimum storage algorithm for merging that needs O(m log ( n + 1)) element comparisons, where m and m n are the sizes of the input sequences with m ≤ n. According to the lower bound for merging, our algorithm is asymptotically optimal regarding the number of comparisons. The presented algorithm rearranges the elements to be merged by rotations, where the areas to be rotated are determined by a simple principle of symmetric comparisons. This style of minimum storage merging is novel and looks promising. Our algorithm has a short and transparent definition. Experimental work has shown that it is very efficient and so might be of high practical interest. 1 - SOFSEM 2006. Volume 3831 of Lecture Notes in Computer Science , 2006 "... Abstract. We introduce a new stable in place merging algorithm that needs O(m log ( n +1)) comparisons and O(m+n) assignments. According m to the lower bounds for merging our algorithm is asymptotically optimal regarding the number of comparisons as well as assignments. The stable algorithm is devel ..." Cited by 2 (2 self) Add to MetaCart Abstract. We introduce a new stable in place merging algorithm that needs O(m log ( n +1)) comparisons and O(m+n) assignments. According m to the lower bounds for merging our algorithm is asymptotically optimal regarding the number of comparisons as well as assignments. The stable algorithm is developed in a modular style out of an unstable kernel for which we give a definition in pseudocode. The literature so far describes several similar algorithms but merely as sophisticated theoretical models without any reasoning about their practical value. We report specific benchmarks and show that our algorithm is for almost all input sequences faster than the efficient minimum storage algorithm by Dudzinski and Dydek. The proposed algorithm can be effectively used in practice. 1 , 1999 "... We introduce a novel approach to the classical problem of in-situ, stable merging, where "in-situ" means the use of no more than O(log 2 n) bits of extra memory for lists of size n. Shufflemerge reduces the merging problem to the problem of realising the "perfect shuffle" permutation, that is, the ..." Cited by 1 (0 self) Add to MetaCart We introduce a novel approach to the classical problem of in-situ, stable merging, where "in-situ" means the use of no more than O(log 2 n) bits of extra memory for lists of size n. Shufflemerge reduces the merging problem to the problem of realising the "perfect shuffle" permutation, that is, the exact interleaving of two, equal length lists. The algorithm is recursive, using a logarithmic number of variables, and so does not use absolutely minimum storage, i.e., a fixed number of variables. A simple method of realising the perfect shuffle uses one extra bit per element, and so is not in-situ. We show that the perfect shuffle can be attained using absolutely minimum storage and in linear time, at the expense of doubling the number of moves, relative to the simple method. We note that there is a worst case for Shufflemerge requiring time\Omega\Gamma n log n), where n is the sum of the lengths of the input lists. We also present an analysis of a variant of Shufflemerge which uses a ... "... Abstract. We investigate the problem of stable in-place merging from a ratio k = n based point of view where m, n are the sizes of the input m sequences with m ≤ n. We introduce a novel algorithm for this problem that is asymptotically optimal regarding the number of assignments as well as compariso ..." Add to MetaCart Abstract. We investigate the problem of stable in-place merging from a ratio k = n based point of view where m, n are the sizes of the input m sequences with m ≤ n. We introduce a novel algorithm for this problem that is asymptotically optimal regarding the number of assignments as well as comparisons. Our algorithm uses knowledge about the ratio of the input sizes to gain optimality and does not stay in the tradition of Mannila and Ukkonen’s work [8] in contrast to all other stable in-place merging algorithms proposed so far. It has a simple modular structure and does not demand the additional extraction of a movement imitation buffer as needed by its competitors. For its core components we give concrete implementations in form of Pseudo Code. Using benchmarking we prove that our algorithm performs almost always better than its direct competitor proposed in [6]. As additional sub-result we show that stable in-place merging is a quite simple problem for every ratio k ≥ √ m by proving that there exists a primitive algorithm that is asymptotically optimal for such ratios. 1
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=360324","timestamp":"2014-04-17T23:36:26Z","content_type":null,"content_length":"27907","record_id":"<urn:uuid:cc835f7d-9323-489a-84fc-3c2f8c33f58b>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
Graph theory, bipartite Graph A graph G has the property that every edge of G joins an odd vertex with an even vertex. Show that G is bipartite and has even seize introduction of graph theory Zhang Re: Graph theory, bipartite Graph Re: Graph theory, bipartite Graph Originally Posted by A graph G has the property that every edge of G joins an odd vertex with an even vertex. Show that G is bipartite and has even seize Consider a partition of the vertices into two sets: Now show that produces a bipartite graph.
{"url":"http://mathhelpforum.com/math-topics/195733-graph-theory-bipartite-graph-print.html","timestamp":"2014-04-20T10:26:50Z","content_type":null,"content_length":"4842","record_id":"<urn:uuid:a68227b2-cfe2-4caf-9605-74eed4e45671>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
Re Modulus a + b*I Perry S. Glenn on Sun, 1 Jul 2001 14:55:59 -0700 (PDT) [Date Prev] [Date Next] [Thread Prev] [Thread Next] [Date Index] [Thread Index] thank you for clearing that up for me I did mean I see I was making assumptions about a and b, and I was not condsidering the validity of this for the general case. The PARI-GP is the best caculator I have come across and I am using it to extend my knowledge in mathematics. I am new to symbolic calculations but I'm interested in increasing my understanding of this and Number Theory. PARI-GP is a very nice tool for this end. Again, thank you very much for your quick response, and please excuse On Sun, Jul 01, 2001 at 11:31:57AM +0200, Gerhard Niklasch wrote: > In response to: > > Message-ID: <20010701080627.73568.qmail@web13607.mail.yahoo.com> > > Date: Sun, 1 Jul 2001 01:06:27 -0700 (PDT) > > From: "Perry S. Glenn" <psglenn@yahoo.com> > > I would expect to get the result > > z=a+b*I > > abs(z)= a^2+b^2 > I hope not! First, that would be the square of abs(z). Second, > even that only when it is known in advance that a and b stand > for real numbers -- which gp has no reason to assume. > > ? z=a+I*b > > %1 = a + I*b > > ? abs(z) > > %2 = a + I*b > As with many other functions, this got applied to each > coefficient of the multivariate polynomial. (The only > surprise here is that it left I alone, whereas abs(I) > returns 1.000000000000000000000000000 .) > If you intend a and b to stand for real numbers, then > (12:07) gp > norm(a+b*I) > %1 = a^2 + b^2 > does what you seem to want: compute the square of the > absolute value. If a and b can themselves be complex > (or if they are indeterminates which can take values > in any old commutative ring containing something which > behaves like I), there is no simple formula - the answer > would depend both on the ring you're working in and on > the precise shape of a and b written as elements of that > ring. > Enjoy, Gerhard Do You Yahoo!? Get personalized email addresses from Yahoo! Mail
{"url":"http://pari.math.u-bordeaux.fr/archives/pari-users-0107/msg00002.html","timestamp":"2014-04-16T21:54:10Z","content_type":null,"content_length":"5295","record_id":"<urn:uuid:8d4f1d4f-52b8-4961-96f3-5416040e19db>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
Differential Equations November 19th 2008, 09:48 AM #1 Oct 2008 Differential Equations if i have the general solution of a Differential Equation as: \bold{x}(t) = \left[ \begin {array}{c} u\\\noalign{\medskip}v\\\noalign{\medskip}w \end {array} \right] = c_1\left[ \begin {array}{c} 1\\\noalign{\medskip}2\\\noalign{\medskip}1 \end {array} \right] e^{t} + (c_2 + c_3t)\left[ \begin {array}{c} -1\\\noalign{\medskip}0\\\noalign{\medskip}1 \end {array} \right] e^{-t} + c_3\left[ \begin {array}{c} -1\\\noalign{\medskip}1\\\noalign{\medskip}0 \end {array} \right] e^{-t} .. if u have latex plug it in there ... frigg i cant get the code to work How can i prove this formula includes all possible solutions.. Im pretty sure its using the Wronskian test but im not sure how to do for this! Any help?! Edit it and use [ math ] and [ /math ] not "begin{equation*}" and "end{equation*}". I found a site that explains the wronskian determinant for vectors, thanks i think i figured out the matrix thing. November 19th 2008, 11:20 AM #2 MHF Contributor Apr 2005 November 19th 2008, 04:16 PM #3 Oct 2008
{"url":"http://mathhelpforum.com/differential-equations/60484-differential-equations.html","timestamp":"2014-04-17T16:18:58Z","content_type":null,"content_length":"35432","record_id":"<urn:uuid:76b174ed-9743-427a-98ab-d1ca73dd5682>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
Cost And Selling Price Retail Beef Cuts - Establishing Cost And Selling Prices ( Originally Published Early 1900's ) Greater New York Style of Cutting Test Sheets Nos. 21 to 40 In order to get authentic cutting tests, the figures used in these charts have been taken from a test made by Ye Olde New York Branch United Master Butchers of America. It will be noted that the New York style of cutting calls for many more different cuts than are in demand throughout the middle west. A more detailed description of the different styles of cuts demanded is given in the chapter on "Meat Cutting Methods." The New England district and the eastern part of the United States such as Boston, New York and Philadelphia, consume higher grade beef, on the average, than is used throughout the middle-west. The better grades of beef carry, of course, a larger percentage of fat and waste, and this is reflected in the figures because the actual cost price per pound is considerably higher. As an example, with 30% added to the wholesale cost 12 cents per lb. of beef, the retailer has an ACTUAL cost price of 17.07 cents per lb. This is more prominently brought out by comparing it to the Chicago and other styles of cutting. New York, prime cost 12 cents Actual cost 17.07 cents Chicago, prime cost 12 cents Actual cost 15.78 cents Baltimore, prime cost 12 cents Actual cost 15.95 cents Northwestern, prime cost 12 cents Actual cost 15.36 cents This is even more clearly demonstrated when the average selling price per pound of beef shown in the three cutting tests is compared to the New York style of cutting and the grade of meat handled. For example, by adding 30% to the ACTUAL cost price, the average selling price per pound of beef is: New York 22.21 cents Chicago 20.52 cents Baltimore 20.74 cents Northwestern 19.97 cents The cost and selling prices on the New York style of cutting have been established on a cost basis of 12, 13, 14, 16 and 18 cents per lb., wholesale, while the operating expenses have been figured on the same basis as in all other tests, namely, 20, 21, and 22%, plus an additional 5% for profit. A study of these tests presents some very interesting facts. For instance, if 23% gross profit is desired, the side of beef which cost the New York retailer $44.00 must bring him a total return of $66.00, or in other words, a total of 50% must be added to the PRIME cost, or 30% added to the actual cost. In other words, there is really a difference of 42% between the original or PRIME and the ACTUAL cost. As in any other locality, the retail selling prices in the New York style of cutting vary considerably. In trying to establish correct selling prices, the retailer must bear in mind the final results to be obtained, namely, the total figures represented by ACTUAL cost, the overhead expense plus net profit desired. This is a fundamental fact which no retailer must ignore in making up or studying these charts.
{"url":"http://www.oldandsold.com/articles32n/meat-retailing-13.shtml","timestamp":"2014-04-20T00:55:04Z","content_type":null,"content_length":"6978","record_id":"<urn:uuid:08923f51-4c25-4966-9e3d-029d4d20f41c>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
Johnson's Revised Standards For Statistical Evidence Johnson’s Revised Standards For Statistical Evidence Thanks to the many readers who sent me Johnson’s paper, which is here (pdf). Those who haven’t will want to read “Everything Wrong With P-values Under One Roof“, the material of which is assumed known here. Johnson’s and Our Concerns A new paper^1by Valen Johnson is creating a stir. Even the geek press is weighing in. Ars Technica writes, “Is it time to up the statistical standard for scientific results? A statistician says science’s test for significance falls short.” Johnson isn’t the only one. It’s time for “significance” tests to make their exit. Why? Too easy, as we know, to claim that the “science shows” the sky is falling. Johnson says the “apparent lack of reproducibility threatens the credibility of the scientific enterprise.” Only thing wrong that sentiment is the word “apparent.” The big not-so-secret is that most experiments in the so-called soft sciences, which—I’m going to shock you—philosopher David Stove called the “intellectual slums”, are never reproduced. Not in the sense that the exact same experiments are re-run looking for similar results. Instead, data is collected, models are fit, and pleasing theories generated. Soft scientists are too busy transgressing the boundaries to be bothered to replicate what they already know, or hope, is true. What happens I’ve written about how classical (frequentist) statistics works in detail many times and won’t do so again now (see the Classic Post page under Statistics). There is only one point to remember. Users glop data into a model, which accommodates that data by stretching sometimes into bizarre shapes. No matter. The only thing which concerns anybody is whether the model-data combination spits out a wee p-value, defined as a p-value less than the magic number. Nobody ever remembers what a p-value is, and nobody cares that they do not remember. But everybody is sure that the p-value’s occult powers “prove” whatever it is the researcher wanted to prove. Johnson, relying on some nifty mathematics which tie certain frequentist and Bayesian procedure together, claims the magic number is too high. He advises a New & Improved! magic number ten times smaller than the old magic number. He would accompany this smaller magic number with a (Bayesian) p-value-like measure, which says something technical, just the like p-value actually does, about how the data fits the model. This is all fine (Johnson’s math is exemplary), and his wee-er p-value would pare back slightly the capers in which researchers engage. But only slightly. Problem is that wee p-values are as easy to discover as “outraged” Huffington Post writers. As explained in my above linked article, it will only be a small additional burden for researchers to churn up these new, wee-er p-values. Not much will be gained. But go for it. What should happen What’s needed is not a change in mathematics, but in philosophy. First, researchers need to stop lying, stop exaggerating, restrain their goofball stunts, quit pretending they can plumb the depths of the human mind with questionnaires, and dump the masquerade that small samples of North American college students are representative of the human race. And when they pull these shenanigans, they ought to be called out for it. But by whom? Press releases and news reports have little bearing to what happened in the data. The epidemiologist fallacy is epidemic. Policy makers are hungry for verification. Do you know how much money government spends on research? Scientists are people too and no better than civilians, it seems, at finding evidence contrary to their beliefs. Though they’re much better at confirming their This is all meta-statistical, i.e. beyond the model, but it all affects the probability of questions at hand to a far greater degree than the formal mathematics. (Johnson understands this.) The reason we given abnormal attention to the model is that it is just that part of the process which we can quantify. And numbers sound scientific: they are magical. We ignore what can’t be quantified and fix out eyes on the pretty, pretty numbers. Second: remember sliding wooden blocks down inclined planes back in high school? Everything set up just so and, lo, Newton’s physics popped out. And every time we threw a tiny chunk of sodium into water, festivities ensued, just like the equations said they would. Replication at work. That’s what’s needed. Actual replication. The fancy models fitted by soft scientists should be used to make predictions, just like the models employed by physicists and chemists. Every probability model that spits out a p-value should instead spit out guesses about what data never^2 seen before would look like. Those guesses could be checked against reality. Bad models unceremoniously would be dumped, modest ones fixed up and made to make new predictions, and good ones tentatively accepted. “Tentatively” because scientists are people and we can’t trust them to do their own replication. The technical name for predictive statistics is Bayesian posterior predictive analysis, where all memories of parameters disappear (they are “integrated out”). There are no such things as p-values or Bayes factors. All that is left is observables. A change in X causes this change in the probability of Y, the model says. So, we change X (or looked for a changed X in nature) and then see if the probability of Y accords with the actual appearance of Y. Simple! This technique isn’t used because (a) the math is hard, (b) it is unknown except by mathematical statisticians, and (c) it scares the hell out of researchers who know they’d have far less to say. Even Johnson’s method will double current sample sizes. Predictive statistics requires a doubling of the doubling—and much more time. The initial data, as before, is used to fit the model. Then predictions are made and then we have to wait for new data and see if the predictions match. Right climatologists? Ain’t that so educationists? Isn’t this right sociologists? Caution: even if predictive statistics are used, it does not solve the meta-statistical problems. No math can. We will always be in danger of over-certainty. ^1Actually a summary paper. See his note 21 for directions to the real guts. ^2This is not cross validation. There we re-use the same data multiple times. Johnson’s Revised Standards For Statistical Evidence — 17 Comments 1. “We are not worthy!” “We are not worthy!” Create a model of reality and check to see if it is actually a model of reality. Be damn wary of your statistical glasses comparing the new data to the prediction. Going back to GCMs. “So did the temperature for Seattle on March 3rd, 1998 match the temperature the GCM said it would?” I.e. you create your model, populate it with inputs and drivers and check to see if the model matches the reality that was. But, you start asking questions down these lines and you find out. 1. There is no Seattle in the GCM. 2. There is no match for the day, week or month for any region on the planet. 3. The only match is that “Statistically” the spread of temperatures matched the observed temperatures to within some magical… 95%… 2. Even Johnson’s method will double current sample sizes. Predictive statistics requires a doubling of the doubling—and much more time. The initial data, as before, is used to fit the model. Then predictions are made and then we have to wait for new data and see if the predictions match. Not really. All that’s needed is to withhold some of the data when building the model and test against the withheld data. The drawback is the need to generate up to N models where N is the current number of samples. Not much different than building a model and waiting for more data. But if the test is whether the model produces the same distribution of Y as has been observed then you’re stuck with the problem of determining the sameness of two distributions and then — oops — p-values start reappearing. Is there any way out of this? 3. DAV, Withhold data? Would you trust the people we routinely criticize here to do this? Or do you believe that they would cheat a little? All it takes is one look at the results and subsequent tweaking of the model to invalidate the idea. There already exist p-value/Bayes factor “substitutes”, which allow you to see how your model would have done assuming the old data is new data. These are called “proper scores”. These can surely be done, and are better than p-values/Bayes factors because they give statements in terms of observables, but they are always highly preliminary. Because—of course!—the old data is not the new In short, we act as physicists do (or used to, before they starting making metaphysical arguments). 4. Hey – even I understand and I am certainly not worthy nor a statistical wizard. How do we get this very instructive essay into the hands of politicians and policy makers who are bedazzled by the junk predictions? 5. Briggs, Leaving the problem of cheating aside, conceptually, withholding is indistinguishable from building a single model on all of the data at hand then testing against new data (except, of course, you have less data for building the model). It has the added advantage of repeating this N times — a kind of self-replication. The problem of new data matching what you already have will exist no matter how much you have. Physicists also face this problem. Then predictions are made and then we have to wait for new data and see if the predictions match. It seems the real problem is what is meant by predictions matching. The new data rarely will exactly match the predictions. When can we say the match is close enough and how do we go about determining the closeness? I suspect it is this problem which led Fisher, et al. down the road of p-values. 6. DAV, Actually, Fisher only danced around the idea and never came to it directly. You know what he thought of Bayesian ideas (fiducial probability, anybody?). But Fisher did think of the idea of scores, though. His work in ESP prediction card matching is still worth reading. From this Persi Diaconis was able to come to some fascinating mathematics. Marry that with some other work, and we arrive at the idea of proper scores. That just is the best way (you can prove) how to match (probabilistic) predictions to observations. I.e., every other method is inferior. I guess I should write about this. Re: waiting for new data. Tough luck. You can’t always solve a problem in one step. Believing that we can is what accounts for a large portion of over-confidence. 7. You ratfink! You could have kept quiet about this, or loaded it up with some kind of spin about “more Bubba Bloviating from some dinky ag school in Texas,” or hinted this guy was a closet Tea Partier or Baptist. But NO! you gotta tell it straight, and now me and my colleagues all look like a bunch of data-fudging monkeys. I told my students they could keep their confirmed research hypotheses with p-values just slightly less than 0.05, and now they’re going to find out that they need way more evidence, way larger samples, to get those wee p-values down in the 0.001 neighborhood. And all their previous work is “no longer covered” with those p-values of 0.049937, etc. I’m hoping there will soon be a Federal grants program to subsidize larger sample sizes for low-income research assistants, otherwise there will be rioting in the streets. Worse yet, I’ve gotta rewrite a bunch of lectures and case studies to incorporate the New, Improved, P-Values. AND come up with a plausible explanation of why %5 is bad, but 0.5% is good. I might even have to read Dr Johnson’s article. 8. I guess I should write about this. Please do. I’m looking forward to it. Re: waiting for new data. Tough luck. You can’t always solve a problem in one step. Agreed but I see advantage in running up to N experiments vs. only one then waiting for N someone elses to repeat it. No matter how many experiments have been run, one can never be certain the next time will yield the same results. 9. “Bayesian posterior predictive analysis … isn’t used because (a) the math is hard, (b) it is unknown except by mathematical statisticians, and (c) it scares the hell out of researchers who know they’d have far less to say.” Computers can do hard math if the mathematical statisticians tell them how. Might there be an open source package that lets us investigate without having to be math savants? 10. Gary, I use JAGS. But the problem is that this requires, for every problem, substantial coding, and in-depth knowledge of statistics. (It’s side problem is that it uses the idea of MCMC, i.e. pretending it has made up “random” numbers. It’s not strictly necessary to do this. Shhhh.) 11. Great post,thanks Briggs. “double current sample sizes. Predictive statistics requires a doubling of the doubling—and much more time”. Thats gonna be a problem, good luck with fixing that up! Finding enough patients for your trial is already a problem. What advice would you give someone trained in the statistical approach (p value-hunter, frequentist etc.) to learn the correct way of doing statistics in their research? Where do they start to retrain themselves? here is a relevant quote from marcus Aurelius: “A man should always have these two rules in readiness; the one, to do only whatever the reason of the ruling and legislating faculty may suggest for the use of men; the other, to change thy opinion, if there is any one at hand who sets thee right and moves thee from any opinion. But this change of opinion must proceed only from a certain persuasion, as of what is just or of common advantage, and the like, not because it appears pleasant or brings reputation.” 12. Briggs, Thanks for the ref, but it looks worse than I thought (tm climate science). This is what I loathe about math — nobody knows how to make it accessible to the moderately intelligent curious layman. 13. “the shirt matches his hair”. There’s something going on. To do with the ripped pants in ‘frisco. Weird things can happen anywhere but it might have been the original Glenn Miller band that you heard playing. 14. I can’t see getting worked up about Johnson’s work. The 0.05 standard is arbitrary. If we feel it’s too high, let’s lower it. But no amount of math will make the choice of level any less I have another philosophy for hypothesis testing. Given possible events {E_n}, and the occurrence of one E_k, and a hypothesis of corresponding probabilities “H0:{p_n},” reject H0 if sum {pn: 0.05*pn < pk } < 0.05. Thus, reject when a true H0 would imply there exist relatively much more likely events, and collectively one of these events was quite likely to have occurred. If you work this out for the standard normal, you see it corresponds to a p-value of about .0017, much lower than the usual .05. But so what? It's just an arbitrary choice of level, and you could work backwards to find a level in Method A that corresponds to a given level in Method B. 15. SteveBrookline, I disagree. Johnson’s math says many important and useful things. It ties together neatly what many suspected could be tied together. His effort could have the effect of removing even more people from the frequentist continuum, and that is a good thing. For example, did you know, that standard linear (normal) regression is exactly the same in both frequentist and flat-prior Bayesian theories? Interesting that. Why? And so on and so on. Your suggestion, like Johnson’s, of merely lowering the p-value is only a minor patch for all the reasons I’ve laid about above, and in hundreds of articles over the years. 16. Agree on your comments re: soft sciences and the ease with which one can create meaningless low p-values. For example, today I read about a study which purports to show that households with guns increase the probability of suicides, relative to the total historical number of suicides, by .5%-.9% for every 1% increase of the number of households with guns. No p-values were given, but the study’s very premise (that such a tiny absolute distinction would be observable in the number of suicides based on one variable, i.e.,”owning guns”, holding all else constant), to me at least, was transparently absurd. I like to read of these studies because I learn of new things which are not true. But…….I think you are, to some degree, shooting a straw man. Yes, many studies in soft science fields, economics being one, are being done to get published and to be published they need low p-values. And it is shocking how many economists fit data in multi-variable models to get low p-values and think that their conclusions are de facto evidence of something meaningful. The Federal Reserve used to target M1 as one of the predictors of employment. They had a model which they used for over 30 years. Each year it was “updated” to adjust for new data. Each year it failed to predict. After 30 years they gave up. M1, once one of the most watched numbers on Wall Street, is now irrelevant. I don’t see, however, why that makes p-values, per se, stupid or non-sensical. I think having a single “cut-off” point may be non-sensical. Depends on the circumstances. I think poorly constructed experiments are non-sensical. I think soft science may sometimes seem like an oxymoron—but not always. But if I want to bet whether a coin is weighted or cards are being fixed, p-values work fine. Yes, they are “known” distributions . But sometimes straight forward previously “unknown distributions” are observed experimentally which are so unexpected and so unlikely to be random, that p-values can be useful in determining just how unlikely and whether to pursue further analysis. Your critique of p-values is a form of the multiple comparison critique. That is likely why many have suggested decreasing significance thresholds by large factors. But then we just introduce more type 2 errors. I guess I am just saying that p-values can be helpful. Took me long enough. (I just discovered this website. It is great. These are “off the cuff” comments which help me think.)
{"url":"http://wmbriggs.com/blog/?p=9937","timestamp":"2014-04-20T06:36:48Z","content_type":null,"content_length":"84551","record_id":"<urn:uuid:75758480-ed66-4f68-b61f-6657e20b8a3a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: A Simple Statistical Algorithm for Biological Sequence Minh Duc Cao Trevor I. Dix Lloyd Allison Chris Mears Faculty of Information Technology, Monash University, Australia Email: {minhc,trevor,lloyd,cmears}@infotech.monash.edu.au This paper introduces a novel algorithm for biological sequence compression that makes use of both statistical properties and repetition within sequences. A panel of experts is maintained to estimate the probability distribution of the next symbol in the sequence to be encoded. Expert probabilities are combined to obtain the final dis- tribution. The resulting information sequence provides insight for further study of the biological sequence. Each symbol is then encoded by arithmetic coding. Experi- ments show that our algorithm outperforms existing compressors on typical DNA and protein sequence datasets while maintaining a practical running time. 1. Introduction Modelling DNA and protein sequences is an important step in understanding bi- ology. Deoxyribonucleic acid (DNA) contains genetic instructions for an organism. A DNA sequence is composed of nucleotides of four types: adenine (abbreviated A),
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/711/2037043.html","timestamp":"2014-04-19T10:15:07Z","content_type":null,"content_length":"8328","record_id":"<urn:uuid:b0a6c17d-8090-4925-b3e9-f81342b5ebff>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
An elementary question on overlap 12-19-2009 09:49 AM Re: An elementary question on overlap In fact as far as I understood the crop factor is the ration between 36mm and the horizontal dimension of the sensor. I guess I can make the sensor dimension to be visible and possible to be documented, so that the focal length written will not have to be recalculated. What do you think about this approach ? 12-19-2009 09:55 AM Re: An elementary question on overlap I just have verified the formulas and it possible to make the modification in the file to fit any camera. I will just add the horizontal and vertical sensor size input and remove the comment on focal length. That means, after documenting the sensor size, you will just have to tell what focal length you use, without any crop factor. I am going to do that immediately and post the new file. 12-19-2009 10:02 AM John Houghton Re: An elementary question on overlap 12-19-2009 10:45 AM Re: An elementary question on overlap No problem. In fact with the formulas I use, I just need the sensor size now. I have modified the file, so that you can document it freely and it is now on line. Panorama calculator for all I hope it will work correctly. 12-19-2009 04:01 PM Re: An elementary question on overlap I have read my book again about photographic mathematics and I confirm that : the crop factor is the ratio between 36mm (argentic) and the larger size (width) of the sensor, not the diagonal. 12-19-2009 04:29 PM Re: An elementary question on overlap And here is what I found that puts me back to the beginning of the way to calculate crop factor : Some camera companies use the diagonal field of view measurement method. That is, if they describe a camera as having a particular 35mm equivalent focal length, it means the camera produces the same field of view along the diagonal as a 35mm camera with a lens having the stated focal length. Other companies use the horizontal field of view rather than the diagonal field of view. This produces a slightly different result from using the diagonal because the aspect ratio of the digital image (usually 4:3) is different from the aspect ratio of 35mm film images (exactly 3:2). When a digital camera produces the same horizontal field of view as a 35mm camera with a lens having a particular focal length it actually has a larger diagonal field of view. At the end, it is not an issue as formulas are based on sensor size and not the crop factor. But I am still questionning myself on what is the official definition (not wikipedia). 12-19-2009 06:39 PM nick fan Re: An elementary question on overlap No problem. In fact with the formulas I use, I just need the sensor size now. I have modified the file, so that you can document it freely and it is now on line. Panorama calculator for all I hope it will work correctly. great thanks. 12-20-2009 03:11 AM Re: An elementary question on overlap Concerning crop factor : For a sensor that has the same format ratio than 24x36, you can use either the diagonal or the width of the sensor to calculate this ratio. It is giving the same result. For other formats, you have to consider the width and not the diagonal. So, as a general rule, to calculate the crop ratio, you should apply : Crop ratio = 36 / Sensor width(mm) 12-20-2009 09:08 AM Re: An elementary question on overlap I have just reworked the results presentation area in order to simplify the reading. I hope it will bring something and that you will like it. 12-20-2009 09:27 AM nick fan Re: An elementary question on overlap I have just reworked the results presentation area in order to simplify the reading. I hope it will bring something and that you will like it. That is a nice tweak!
{"url":"http://www.nodalninja.com/forum/printthread.php?t=3632&pp=15&page=2","timestamp":"2014-04-19T02:48:45Z","content_type":null,"content_length":"16439","record_id":"<urn:uuid:83c68a53-edba-4bd4-8586-d1880470565b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00208-ip-10-147-4-33.ec2.internal.warc.gz"}